entry_id: string (length 33)
published: string (length 14)
title: string (length 24 to 167)
authors: sequence (length 1 to 661)
primary_category: string (111 classes)
categories: sequence (length 1 to 8)
text: string (length 2 to 383k)
http://arxiv.org/abs/2406.09391v1
20240613175906
A More Practical Approach to Machine Unlearning
[ "David Zagardo" ]
cs.LG
[ "cs.LG", "cs.AI" ]
A More Practical Approach to Machine Unlearning David Zagardo, dave@greenwillowstudios.com June 2024 =============================================== § ABSTRACT Machine learning models often incorporate vast amounts of data, raising significant privacy concerns. The ability to remove the influence of specific data points from a trained model, known as machine unlearning, addresses these concerns. This paper explores practical methods for implementing machine unlearning, focusing on a first-epoch gradient-ascent approach that leverages both gradient and influence tracking across multiple epochs of training to measure and reverse the impact of data points from the training dataset. Key findings include: 1. Single vs. Multi-Epoch Unlearning: Unlearning using only first-epoch gradients is surprisingly more effective than using multi-epoch gradients. 2. Layer-Based Unlearning: The embedding layer in GPT-2 is crucial for effective gradient unlearning. Surprisingly, the gradients from the output layers (layers 11 and 12) had no impact on the unlearning effect in these experiments. Efficient unlearning can be achieved using only the embedding layer, halving the space complexity compared to utilizing the entire model's gradients. 3. Influence Functions & Scoring: Techniques such as the Hessian-vector product and the dot product of activations and gradients are explored for quantifying unlearning. 4. Gradient Ascent Considerations: Careful calibration is necessary to avoid overexposing the model to specific data points during the unlearning process. Without careful application, one might terminate the unlearning process prematurely and leave the model in an optimum that still retains knowledge of the data points one wishes to remove. 5.
Fuzzy Matching Compared to Iterative Unlearning: We compare fuzzy matching removal techniques (heuristic) to iterative unlearning techniques (unbiased), finding that fuzzy matching unlearning can shift the model to a new optimum, while iterative unlearning may provide a more complete unlearning modality. Our empirical evaluation confirms that first-epoch gradient ascent for machine unlearning is statistically more effective than whole-model gradient ascent. These results highlight the potential of machine unlearning for enhancing data privacy and compliance with regulations such as GDPR and CCPA. The study underscores the importance of formal methods to comprehensively evaluate the unlearning process. § INTRODUCTION Machine learning models are often trained on vast amounts of data, including potentially sensitive information. However, as data privacy concerns rise, there is an increasing need for techniques that allow models to "forget" specific data points upon request. This process is known as machine unlearning. In this paper, we explore a gradient-based method for implementing machine unlearning in practice, evaluate its effectiveness, and discuss potential applications and implications. § RELATED WORK §.§ Overview of Machine Unlearning Machine unlearning refers to the process of removing the influence of specific data points from a trained machine learning model. This concept is particularly important in scenarios where data privacy and compliance with regulations such as GDPR and CCPA are critical. The goal is to enable models to forget specific information without requiring a complete retraining from scratch, which would be computationally expensive and impractical for large-scale models. §.§ Existing Techniques and Approaches Existing approaches to machine unlearning can be categorized into several techniques, including certified data removal, gradient-based unlearning, and other algorithmic methods. §.§.§ Certified Data Removal Certified data removal aims to provide formal guarantees that a model has indeed forgotten the specific data points. Cao et al. (2015) discuss the importance of efficient and privacy-preserving computing in the big data era, which lays the groundwork for understanding the need for data removal techniques <cit.>. Guo et al. (2020) introduce methods for certified data removal from machine learning models, which ensure that the influence of certain data points can be provably removed <cit.>. §.§.§ Gradient-Based Unlearning Gradient-based unlearning methods involve reversing the influence of data points by applying gradients computed during training. Bourtoule et al. (2021) formalize the concept of machine unlearning and propose several practical algorithms for removing the influence of data points from trained models <cit.>. Neel et al. (2021) present Descent-to-Delete, a gradient-based method for machine unlearning that effectively undoes the impact of specific data points on the model's parameters <cit.>. Wang et al. (2024) propose a novel Reverse KL-Divergence-based Knowledge Distillation (RKLD) method for unlearning personal information in large language models, demonstrating the importance of balancing forget quality with model utility <cit.>. Recent studies have also focused on the embedding layer's role in the unlearning process. Jang et al. (2022) highlight the critical function of the embedding layer in representing input tokens, making it an effective focal point for unlearning operations <cit.>.
Eldan and Russinovich (2023) further explore the potential of embedding-layer unlearning, finding that targeting this layer can efficiently reduce the influence of specific data points without significantly impacting the model's overall performance <cit.>. §.§.§ Algorithmic Methods Algorithmic methods for machine unlearning focus on designing model architectures and training procedures that facilitate easy removal of data. Ginart et al. (2019) explore techniques for making AI systems forget specific data, focusing on the feasibility of data deletion in machine learning models <cit.>. Thudi et al. (2022) discuss unrolling stochastic gradient descent (SGD) to understand factors influencing machine unlearning, providing insights into the theoretical and practical aspects of the process <cit.>. §.§ Machine Unlearning for Large Language Models With the development of large language models (LLMs), there is an increased focus on privacy risks and the need to remove certain data influences. Several methods and techniques have been explored for this purpose: §.§.§ Privacy and Safety Lu et al. (2022) introduced techniques for detoxification of harmful information in LLMs, while Yu et al. (2023) explored methods to debias LLMs and remove unwanted biases <cit.> <cit.>. §.§.§ Techniques and Strategies Ilharco et al. (2022) proposed task arithmetic for model editing and parameter manipulation, and Zhang et al. (2023) further explored task arithmetic in the context of unlearning <cit.> <cit.>. Pawelczyk et al. (2023) utilized prompt engineering to achieve unlearning goals, and Chen and Yang (2023) presented fine-tuning methods to eliminate the impact of specific data <cit.> <cit.>. Wang et al. (2023) proposed additional fine-tuning strategies tailored for unlearning <cit.>. §.§.§ Challenges in Unlearning The primary challenge in model unlearning is thoroughly forgetting data samples to make the model behave as if it was never trained on them, while maintaining model utility. Existing methods like gradient ascent often impair the model’s ability to comprehend sentences in generation tasks, leading to incomplete forgetting and loss of utility <cit.> <cit.>. §.§ Influence Functions in Large Language Models Influence functions have been applied to large language models to understand their generalization patterns. Grosse et al. (2023) used an approximation method called Eigenvalue-corrected Kronecker-Factored Approximate Curvature (EK-FAC) to make influence function calculations feasible for models with up to 52 billion parameters <cit.>. This study highlights the potential of influence functions in investigating various aspects of LLMs, such as sparsity of influence patterns, abstraction with scale, and capabilities in math and programming, and underscores their utility in enhancing the performance and reliability of large-scale language models. § METHODOLOGY We chose to use GPT2 and two custom datasets. One, the "Dave" dataset about a fictional character "Dave," and two, the "Name" dataset, with the same datapoints but swapping out the name "Dave" for 19 other unique names. §.§ Model and Dataset Description This section describes the GPT-2 model and the custom "Dave" dataset used in our experiments. The GPT-2 model is a transformer-based language model pre-trained on a large corpus of text data. It uses self-attention mechanisms to process input text and generate coherent and contextually relevant output sequences. 
The model consists of multiple transformer layers, each comprising multi-head self-attention and feed-forward neural networks. We chose to use GPT-2 for its deterministic behavior under certain conditions and small size. We use the custom "Dave" dataset, which contains 20 specific data points related to the fictional character "Dave" for our experiments. We created this dataset so that we would be training on data that the model had never seen before. §.§ Influence Tracking We employ influence tracking mechanisms to measure the impact of individual data points on the model's outputs. Influence tracking is achieved through two means: Hessian-Vector Product calculation, and by computing and storing activations and gradients during the training process. §.§.§ Activation and Gradient Storage To capture activations and gradients, we compute and store them during the training process. activation_i = f(𝐖_i𝐡_i-1 + 𝐛_i) gradient_i = ∂ℒ/∂𝐖_i where f is the activation function, 𝐖_i and 𝐛_i are the weights and biases of layer i, 𝐡_i-1 is the input to layer i, and ℒ is the loss function. §.§ Unlearning Mechanism Our unlearning mechanism involves computing and applying gradients to reverse the influence of specific data points. The process can be broken down into several steps: §.§.§ Gradient Computation During training, we compute the gradients of the loss function with respect to the model parameters. These gradients indicate how the model's parameters should be adjusted to minimize the loss. ∇_θℒ(𝐱, y) where θ represents the model parameters, 𝐱 is the input data point, and y is the corresponding label. §.§.§ Storing Gradients We accumulate the computed gradients for each data point in the gradient storage dictionary, indexed by the data point's unique identifier. Computed gradients are stored with respect to layer, to aid in layer-specific unlearning. Instead of storing all gradients, we aggregate them during training to save storage space. §.§.§ Applying Gradients for Unlearning To unlearn a specific data point, we apply the stored gradients in the opposite direction (gradient ascent) with respect to each layer. This effectively reverses the influence of the data point on the model parameters. θ←θ + η∇_θℒ(𝐱, y) where η is the learning rate. §.§.§ Unlearning Data Point The unlearning process involves identifying the target data point, retrieving its stored gradients, and applying these gradients to the model parameters to reverse the data point's influence. §.§ Fuzzy Matching for Unlearning To determine the data points to unlearn, we use fuzzy matching to find the closest match for a generated text in the dataset. This ensures effective and thorough unlearning. §.§.§ Fuzzy Matching with difflib We use the `difflib` library to find the closest match for the generated text in the dataset. The 'find closest match' function takes the dataset, generated text, and tokenizer as inputs and returns the input IDs of the closest match and the text itself. §.§ Mathematical Formulation of Influence Computation To measure the influence of a data point on the model's output, we compute the dot product of the normalized token activations and the stored gradients. influence_i,j = (𝐚_i·𝐠_j)/(‖𝐚_i‖ ‖𝐠_j‖) where 𝐚_i is the activation vector for token i, 𝐠_j is the gradient vector for data point j, and ‖·‖ denotes the Euclidean norm. This computation allows us to quantify the contribution of individual data points to the generated text and identify which data points have the most significant influence on specific tokens.
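To make the influence score and the gradient-ascent update concrete, here is a minimal PyTorch sketch; the function names and the per-layer gradient store are illustrative assumptions rather than the exact code used in the experiments.

```python
import torch

def influence_score(activation: torch.Tensor, gradient: torch.Tensor, eps: float = 1e-8) -> float:
    """Dot product of the flattened vectors, normalized by their Euclidean norms."""
    a, g = activation.flatten(), gradient.flatten()
    return (torch.dot(a, g) / (a.norm() * g.norm() + eps)).item()

def unlearn_step(model: torch.nn.Module, stored_grads: dict, lr: float = 2e-5) -> None:
    """Apply stored per-layer gradients in the opposite direction: theta <- theta + lr * grad."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in stored_grads:
                param.add_(lr * stored_grads[name].to(param.device))
```

In this sketch, `stored_grads` maps parameter names to gradients accumulated for the target data point during the first epoch; calling `unlearn_step` repeatedly corresponds to the iterative unlearning procedure described later.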
§.§ Experimental Setup §.§.§ Dataset Preparation We first load and preprocess the custom "Dave" dataset. The dataset is tokenized and formatted for PyTorch. §.§.§ Training Procedure The training procedure involves fine-tuning the GPT-2 model with influence tracking enabled. The optimizer used is Adam with a learning rate of 2 × 10^-5, and the model is trained for 5, 10, 15, and 20 epochs with a batch size of 1. §.§.§ Influence Functions using Hessian-Vector Product This approach is inspired by classical statistical applications for influence scoring, and relies heavily on the work done by Pang Wei Koh and Percy Liang <cit.>. To track the influence scores, we first compute the gradients of the loss with respect to the model parameters: grads = ∂ℒ/∂θ Next, we compute the Hessian-Vector Product: hvp = ∇^2_θℒ· v The inverse Hessian-Vector Product is approximated iteratively. Given the damping factor λ and scaling factor α, the update rule is: ĥ_i+1 = v + (1 - λ) ·hvp/α The estimate is normalized at each step: ĥ_i+1 ←ĥ_i+1/(‖ĥ_i+1‖ + ϵ) Finally, the influence of each training point on the test loss is computed as: influence = -∑_i=1^N ( ∇_θℒ_train(z_i) ·IHVP) §.§.§ Fuzzy Matching Unlearning This approach involves identifying the closest match for a generated text in the dataset to ensure effective and thorough unlearning. We utilize the `difflib` library for fuzzy matching. The process is as follows: * Finding the Closest Match: We compare the generated text with all texts in the dataset using the 'find closest match' function described above, which returns the closest match based on the similarity score. * Input IDs Retrieval: Once the closest match is identified, we retrieve its corresponding input IDs from the dataset using a custom lookup function. * Unlearning: The retrieved input IDs are then used to adjust the model parameters by applying gradient updates in the opposite direction, effectively unlearning the influence of the target data point. The fuzzy matching approach ensures that the unlearning process targets the most relevant data points, even if the exact text does not exist in the dataset, thereby enhancing the effectiveness of the unlearning mechanism. §.§.§ Iterative Removal Approach The iterative removal approach is designed to unlearn data points incrementally, ensuring a thorough and systematic process that allows for efficient monitoring of the data points. This method targets a specific data point for unlearning based on predefined criteria rather than similarity measures, which can introduce bias or lead to incomplete unlearning. The key steps in the iterative removal approach are as follows: * Target Data Point Identification: Identify the specific data point to be unlearned based on the provided target_text. * Input IDs Retrieval: Based on target_text, we retrieve its corresponding input IDs from the dataset. * Parameter Adjustment: Adjust the model parameters by applying the accumulated gradients in the opposite direction. * Re-evaluation: Recompute the influence scores after each iteration of unlearning. This evaluation allows us to monitor the influence of the specific data point on the model's inferences across time. The iterative removal approach is advantageous because it systematically targets and unlearns specific data points without relying on similarity measures, avoiding potential biases introduced by a heuristic data removal approach. By directly addressing the target data points, this approach ensures a more objective and controlled reduction of their influence.
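The Hessian-vector products above can be obtained with double backpropagation. The sketch below shows this, together with a LiSSA-style recursion for the inverse HVP; this is a common recipe in the influence-function literature and only approximates the update rule written above, with placeholder damping and scaling constants.

```python
import torch

def hvp(loss: torch.Tensor, params: list, vec: list) -> list:
    """Hessian-vector product H v via double backprop; `loss` must retain its graph."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return list(torch.autograd.grad(dot, params, retain_graph=True))

def approx_inverse_hvp(loss, params, vec, damping=0.01, scale=25.0, steps=100):
    """Iteratively approximate H^{-1} v (LiSSA-style recursion, not the paper's exact rule)."""
    est = [v.clone() for v in vec]
    for _ in range(steps):
        cur = hvp(loss, params, est)
        est = [v + (1 - damping) * e - c / scale for v, e, c in zip(vec, est, cur)]
    return [e / scale for e in est]
```

The influence of a training point is then the (negative) inner product of its loss gradient with the inverse-HVP estimate, as in the summation above.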
§.§ Evaluation Metrics To evaluate the effectiveness of the unlearning mechanism, we use the following metrics: * Influence scores: Quantifying the impact of specific data points on the model's outputs * Unlearning verification: Checking if the influence of the target data point has been effectively removed using fuzzy matching * Perplexity: Measuring the model's predictive performance before and after fine-tuning, and after unlearning §.§.§ Perplexity Experiments We conducted experiments to evaluate the perplexity of the model in three stages: before fine-tuning, after fine-tuning, and after unlearning. Perplexity is a measure of how well a probability distribution or probability model predicts a sample <cit.>. Lower perplexity indicates better understanding of the data. § RESULTS Our statistical evaluation confirms that gradient-based First-Epoch Unlearning is significantly more effective than both Embedding-Layer and Model-Based Unlearning techniques. * Embedding-Layer Unlearning: Demonstrated substantial reduction in influence scores, highlighting the effectiveness of targeting the embedding layer for unlearning while maintaining computational efficiency. * Whole-Model Unlearning: Effective but more computationally intensive compared to embedding-layer unlearning. * First-Epoch Gradient Ascent Unlearning: Achieved effective unlearning with a balance between computational cost and efficacy. * Optimal Unlearning Duration: Early stopping may be an area for exploring in future research. §.§ Comparison of Unlearning Approaches We conducted a series of paired t-tests to compare the effectiveness of embedding-layer unlearning, whole-model unlearning, and first-epoch gradient ascent unlearning methods. The results indicate that all unlearning methods significantly reduce the influence scores, with notable differences in their effectiveness. §.§.§ Statistical Significance and Practical Implications To evaluate the statistical significance of the unlearning methods, paired t-tests were performed comparing influence scores before fine-tuning and after the application of various unlearning techniques. The results are summarized in Table <ref>, showing that each method significantly reduces the influence of the target data points. The analysis reveals the following key findings: * Embedding-Layer Unlearning: This method demonstrated a substantial reduction in influence scores, with a mean difference of 467.81 and a high Cohen's d of 3.66, indicating a strong effect size. The 95% confidence interval for the mean difference is narrow, suggesting consistent performance across trials. This highlights the effectiveness of focusing on the embedding layer for targeted unlearning while maintaining computational efficiency. * Whole-Model Unlearning: While effective, whole-model unlearning showed a slightly lower mean difference of 423.99 and a Cohen's d of 3.36. Although still a strong effect, this approach is more computationally intensive, suggesting that targeting specific layers could be a more resource-efficient strategy. * First-Epoch Gradient Ascent Unlearning: This method achieved the highest t-statistic (15.88) when compared directly to embedding-layer unlearning, with a mean difference of 70.94 and a Cohen's d of 3.55. This indicates that first-epoch gradient ascent is not only effective but also provides a balance between computational cost and unlearning effectiveness. 
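The paired comparisons summarized above can be reproduced with a short SciPy snippet; the two arrays stand in for per-data-point influence scores measured before and after unlearning, and the exact inputs used in the experiments are not reproduced here.

```python
import numpy as np
from scipy import stats

def paired_comparison(before: np.ndarray, after: np.ndarray):
    """Paired t-test, Cohen's d on the paired differences, and a 95% CI for the mean difference."""
    t_stat, p_value = stats.ttest_rel(before, after)
    diff = before - after
    d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired samples
    sem = diff.std(ddof=1) / np.sqrt(len(diff))
    ci = stats.t.interval(0.95, len(diff) - 1, loc=diff.mean(), scale=sem)
    return t_stat, p_value, d, ci
```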
The consistent high Cohen's d values across all comparisons indicate that the observed differences are not only statistically significant but also practically meaningful. The effect sizes suggest robust changes in the model's behavior, confirming the efficacy of the unlearning processes. §.§.§ Robustness of Statistical Analysis The statistical analysis employed rigorous methods to ensure robustness: * Paired t-tests: These tests accounted for the dependent nature of the data, with extremely low p-values (e.g., 8.79e-14) confirming the significant impact of unlearning methods. * Confidence Intervals: Narrow 95% confidence intervals for mean differences indicated precise estimates, supporting consistent reduction in influence scores. * Effect Sizes (Cohen's d): High Cohen's d values (1.94 to 4.23) across comparisons underscored the substantial and practical significance of the unlearning techniques. §.§ Influence Tracking Mechanism We explored two influence tracking mechanisms. One at the token level, and another at the sentence level. The mechanism we ended up with was taken from literature from Pang Wei Koh and Percy Liang's work. This granular analysis provides deeper insights into the inner workings of the model and the effectiveness of unlearning, allowing for precise adjustments to mitigate the influence of specific data points. Given the figure below, if we take the influence score tracking at surface level, we can see that there is a significant change for all 3 categories - before unlearning, after unlearning for embedding, all-layer, and first-epoch. §.§.§ Generation Text Examples Before and After Unlearning with Fuzzy Matching Pipeline Table <ref> shows the generated text before and after the unlearning process at different epochs. Initially, the generated text was highly relevant and coherent, with phrases such as "Dave is a freelance writer" and "Dave is a software engineer." After unlearning, there were noticeable changes in the generated text. For instance, at 10 epochs, the text shifted from "Dave is a software engineer" to "Dave's favorite books are 'The Hitchhiker's Guide to the Galaxy by George R.R. Martin, and The Hitchhiker's Guide to the Galaxy by Isaac Asimov.' Follow @TheHitchHiker on Twitter." Similar alterations were observed at 15 and 20 epochs, with new information being introduced that was not present in the original outputs. These changes indicate that the unlearning process effectively removed specific information, causing the model to generate different content. This demonstrates the potential of targeted unlearning techniques to alter the model's knowledge without compromising the overall coherence of the generated text. §.§ Layer-Specific Unlearning * Using only gradients from layers 11 and 12 (the last transformer layers) was ineffective for removing data or altering the model's outputs. * Including layer 0, GPT-2's embedding layer, along with layers 11 and 12 allowed the model to modify data and change its outputs each time unlearning was performed. * Using only layer 0 was sufficient to remove data points, significantly reducing memory usage from 15GB to 7GB. This indicates the importance of the embedding in the unlearning process and its potential for efficient memory usage. These findings suggest that embedding layers play a crucial role in the unlearning process, and focusing on this layer can lead to more effective and efficient unlearning operations. 
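Restricting gradient tracking to the embedding layer is straightforward with the Hugging Face GPT-2 implementation, where `transformer.wte` holds the token-embedding weights (the "layer 0" referred to above); the snippet below is a sketch of that idea, not the exact experimental code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def embedding_grad_for(text: str) -> torch.Tensor:
    """Return only the embedding-layer gradient for one data point."""
    enc = tokenizer(text, return_tensors="pt")
    model.zero_grad()
    out = model(**enc, labels=enc["input_ids"])
    out.loss.backward()
    # Keep only the token-embedding gradient instead of gradients for every layer.
    return model.transformer.wte.weight.grad.detach().clone()
```

Storing only this tensor per tracked data point, rather than gradients for all layers, is what yields the memory reduction reported above.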
§.§.§ Layer-Based Unlearning Compared to All-Layer Unlearning after 15 Epochs of Training The results of our experiments highlight the differences in unlearning effectiveness when using gradients from only the embedding layer versus using gradients from all layers of the model. §.§ Unlearning Duration and Influence Scores Our experiments have identified a critical insight into the unlearning process: there is an optimal number of unlearning iterations that is likely dependent upon a data point's initial influence score. This is likely related to climbing an optimization hill. §.§ Perplexity Score Results Table <ref> presents the perplexity scores across different stages of training and unlearning. Initially, the model had a high perplexity of 29129.39, indicating poor performance on the custom "Dave" dataset. Fine-tuning significantly improved the score to 1.07. After unlearning, the scores slightly increased to 1.12 (embedding layer), 1.16 (whole model), and 1.45 (first epoch), but remained close to 1.0. This indicates effective unlearning with minimal impact on overall performance, demonstrating the potential of targeted unlearning techniques to maintain model utility while enhancing data privacy. §.§ Interpretation of Influence Score Results Paired t-tests reveal statistically significant differences between influence scores before and after unlearning, with very low p-values providing strong evidence against the null hypothesis. Comparisons demonstrate the effectiveness of the first-epoch gradient ascent method, showing significant improvements over embedding-layer and whole-model unlearning approaches. §.§ ROUGE Scores Analysis The ROUGE scores were evaluated at various intervals during the unlearning process to assess the impact on model performance. Table <ref> presents the ROUGE-1, ROUGE-2, and ROUGE-L scores at different iterations as evaluated over the Dave dataset. Figure <ref> illustrates the trends in ROUGE scores for the primary and additional datasets, respectively. Initially, there is a significant drop in scores, indicating effective unlearning. The scores then stabilize, demonstrating the model's adaptation to the unlearning process. § DISCUSSION §.§ Influence Scores The analysis reveals key findings: * Both all-layer and embedding-only gradient ascent effectively reduce the impact of specific training data, while Layer 11 does not. Embedding-only unlearning is more cost-effective and promotes the adoption of machine unlearning techniques. * First-epoch gradient ascent is as effective as multi-epoch gradient ascent but requires more iterations (approximately 10 percent increase in unlearning duration). Unlearning the least influential data point ("Steve’s favorite movies are...") caused other data points to gain influence, raising questions about the benefits of modifying a model to remove minimally influential data. Conversely, unlearning the most influential data point ("Eve likes to play guitar.") was fast, suggesting an optimization hill for influential data points. These observations indicate that data points closer to their optimum require fewer iterations to unlearn due to stronger gradients. §.§ Perplexity Scores Perplexity scores provide additional insights: * High perplexity before fine-tuning indicates the pre-trained model's difficulty in predicting the "Dave" dataset. * Drastic reduction in perplexity after fine-tuning shows significant improvement.
* Negligible change in perplexity after unlearning suggests effective removal of targeted data points without adversely affecting overall performance. Consistently low perplexity scores highlight the model's confidence in its responses despite data removal, suggesting the need for continuous monitoring and iterative unlearning. §.§ Single-Epoch Gradient Ascent for Targeted Machine Unlearning Single-Epoch Gradient Ascent shows promising results: * Comparable Performance: Single-epoch gradients are more effective than multi-epoch gradients, simplifying the unlearning process while boosting effectiveness. * Efficiency: Using gradients from a single epoch reduces computational overhead, making this approach practical for production. Single-epoch gradient ascent can be effective, but precise applications may benefit from multi-epoch gradients for stronger signals. §.§ Optimal Unlearning Duration For the data point "Dave is working on building a custom guitar.", influence scores initially decrease with unlearning iterations, indicating successful unlearning, but rise beyond the 100th iteration, suggesting either re-learning of data signatures or successful unlearning at the inflection point. Identifying the optimal number of iterations is crucial to prevent re-exposure and reinforcement of data points, or reduction in model utility. §.§ Discussion of ROUGE Scores The ROUGE scores reveal several key insights into the unlearning process and its impact on model performance. Initially, the ROUGE scores for Dataset 1 decrease significantly, demonstrating the effectiveness of the unlearning process. This decrease indicates that the influence of specific data points was successfully reduced, leading to lower overlap with the reference summaries. The scores stabilize after around 100 iterations, suggesting that the model reaches a new equilibrium state post-unlearning. Interestingly, there is a slight increase in ROUGE scores around 210 iterations, peaking at 260 iterations, before stabilizing again. This suggests that the model undergoes a recovery phase, potentially adapting to the changes introduced by the unlearning process. In contrast, the additional dataset shows minimal impact from the unlearning process. The initial ROUGE scores are perfect, and while there is a slight decrease around 30 to 100 iterations, the scores quickly recover to their original values. This indicates that the unlearning process is less effective for this dataset, or that the model is highly resilient and able to maintain performance. These observations highlight the variability in unlearning effectiveness depending on the dataset. The differing responses suggest that the characteristics of the dataset play a crucial role in how unlearning affects model performance. This emphasizes the need for robust and adaptable unlearning techniques tailored to specific datasets. Overall, the ROUGE scores provide valuable insights into the dynamics of the unlearning process, revealing both its potential and limitations. Further research is needed to understand the underlying factors that influence the variability in unlearning effectiveness and to develop more robust methods that can ensure lasting impact across different datasets. §.§ Implications of Findings Our first-epoch-based unlearning approach enhances data privacy and regulatory compliance (e.g., GDPR and CCPA) by allowing models to forget specific data points effectively and more easily than whole-model, embedding-layer, or full-epoch gradient ascent.
This is particularly beneficial in industries with sensitive information. §.§.§ Effectiveness and Efficiency Embedding layers for unlearning demonstrate notable efficiency gains. Embedding-only unlearning is effective and less expensive than all-layer unlearning, crucial for resource-constrained environments. Single-epoch unlearning performed the best, while requiring a 10 percent increase in the number of iterations required. §.§ Limitations and Future Work Despite promising results, several limitations warrant further investigation: §.§.§ Scalability of Unlearning Techniques Scaling gradient-based unlearning to large datasets and models remains a challenge. Future work should explore more efficient algorithms and optimizations to enhance scalability. §.§.§ Comprehensive Layer Analysis A comprehensive analysis across all layers could provide deeper insights into effective layers for influence reduction, refining the unlearning process. §.§.§ Evaluation on Diverse Datasets Evaluating unlearning techniques on diverse datasets and tasks would provide a more comprehensive understanding of their generalizability and effectiveness across domains. §.§.§ Long-Term Model Stability Ensuring long-term stability after multiple unlearning operations is critical for deployment in dynamic environments. Future research should focus on this aspect. §.§.§ Formal Verification of Unlearning Developing formal methods to verify the effectiveness of unlearning operations is important. Providing guarantees that a model has forgotten specific data points would enhance trust and reliability in unlearning techniques. § CONCLUSION §.§ Summary of Findings Our experiments validated the effectiveness of first-epoch gradient ascent for machine unlearning. By applying the stored gradients in the opposite direction, we successfully reduced the influence of targeted data points and modified the output. However, it is unclear as to whether or not this is statistically insignificant compared to the baseline or enough to be considered "Certified Data Removal." Single-layer gradient-tracking proved to be as effective as whole-model gradient-tracking in the unlearning process, indicating the importance of a model's embeddings to its predictions. First-epoch gradient storage was more successful than multi-epoch gradient storage when performing gradient ascent. §.§ Final Remarks Machine unlearning represents a crucial advancement in addressing data privacy concerns and regulatory compliance. Our approach provides a more practical solution for ensuring that models can forget specific information without requiring complete retraining. Storing the gradients for our target data point over a single epoch of training is significantly more feasible than storing all gradients over all epochs. Our work validates the importance of embedding layers in the unlearning process. This focus allows for efficient and effective unlearning with reduced computational overhead. The influence tracking mechanism we incorporated provides a granular understanding of how specific data points affect model outputs, facilitating precise unlearning actions. While our results are promising, further research is necessary to enhance the scalability of unlearning techniques. Future work should explore more efficient algorithms, comprehensive layer analysis, and evaluation across diverse datasets to ensure the broad applicability and effectiveness of machine unlearning methods. 
Overall, our study underscores the potential of first-epoch gradient ascent for machine unlearning to improve data privacy and compliance, offering a viable path forward for dynamic and privacy-sensitive applications in machine learning. § REFERENCES 10 bourtoule2021machine Loïc Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Haoran Jia, Alexandre Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP), pages 141–159. IEEE, 2021. cao2015towards Yingjie Cao, Bo Yang, Yu Rong, and Jian Yang. Towards efficient and privacy-preserving computing in big data era. IEEE Transactions on Big Data, 1(1):49–64, 2015. chen2023unlearn Jiaao Chen and Diyi Yang. Unlearn what you want to forget: Efficient unlearning for llms. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12041–12052, 2023. eldan2023harry Ronen Eldan and Mark Russinovich. Who’s harry potter? approximate unlearning in llms. arXiv preprint arXiv:2310.02238, 2023. ginart2019making Alex Ginart, Melody Y Guan, Gregory Valiant, and James Zou. Making ai forget you: Data deletion in machine learning. In Advances in Neural Information Processing Systems, volume 32, pages 113–124, 2019. guo2020certified Chuan Guo, Tom Goldstein, and Julian McAuley. Certified data removal from machine learning models. In International Conference on Machine Learning, pages 3832–3842. PMLR, 2020. ilharco2022editing Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. In The Eleventh International Conference on Learning Representations, 2022. jang2022knowledge Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, and Minjoon Seo. Knowledge unlearning for mitigating privacy risks in language models. arXiv preprint arXiv:2210.01504, 2022. lu2022quark Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. Quark: Controllable text generation with reinforced unlearning. In Advances in neural information processing systems, 35:27591–27609, 2022. neel2021descent Seth Neel, Guy N Rothblum, and Jonathan Ullman. Descent-to-delete: Gradient-based methods for machine unlearning. In Advances in Neural Information Processing Systems, volume 34, pages 17319–17330, 2021. pawelczyk2023incontext Martin Pawelczyk, Seth Neel, and Himabindu Lakkaraju. In-context unlearning: Language models as few shot unlearners. arXiv preprint arXiv:2310.07579, 2023. thudi2022unrolling Aditya Thudi, Satyen Kapoor, Tom Goldstein, and Sanjeev Arora. Unrolling sgd: Understanding factors influencing machine unlearning. In International Conference on Learning Representations, 2022. wang2023kga Lingzhi Wang, Tong Chen, Wei Yuan, Xingshan Zeng, Kam-Fai Wong, and Hongzhi Yin. Kga: A general machine unlearning framework based on knowledge gap alignment. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13264–13276, 2023. wang2024rkld Bichen Wang, Yuzhe Zi, Yixin Sun, Yanyan Zhao, and Bing Qin. RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models. arXiv preprint arXiv:2406.01983, 2024. yu2023unlearning Charles Yu, Sullam Jeoung, Anish Kasi, Pengfei Yu, and Heng Ji. Unlearning bias in language models by partitioning gradients. 
In Findings of the Association for Computational Linguistics: ACL 2023, pages 6032–6048, 2023. zhang2023composing Jinghan Zhang, Junteng Liu, Junxian He, et al. Composing parameter-efficient modules with arithmetic operation. In Advances in Neural Information Processing Systems, 36:12589–12610, 2023. grosse2023influence Roger Grosse, Derek Hoiem, Andreas Madsen, Jonas Mueller, Shiori Sagawa, Ludwig Schmidt, Tobias Weyand, Zico Kolter, and David Forsyth. Evaluating model generalization with influence functions. arXiv preprint arXiv:2310.04212, 2023. § SPECIAL THANKS Special thanks to Aman Priyanshu, Tushar Vatsa, and Trevor Kann for fielding ideas, listening to me ramble about gradient ascent, and offering feedback. § EXPERIMENTAL RESULTS This section contains the high-level insights from our experimental results including perplexity scores and generated texts during the unlearning process. §.§ Perplexity Scores The perplexity scores were used to evaluate the model's performance before and after fine-tuning, and after each unlearning step. The lower the perplexity score, the better the model's performance. Below are the key insights from the perplexity scores: * Initial Perplexity: Before fine-tuning, the perplexity of the model was extremely high at 31197.14, indicating poor performance in predicting the specific content of the "Dave" dataset. * Post-Fine-Tuning Perplexity: After fine-tuning, the perplexity drastically reduced to 1.04, showing significant improvement in the model's predictive capability. * Post-Unlearning Perplexity: After each unlearning step, the perplexity remained consistent around 1.06, indicating that the unlearning process effectively removed the influence of the targeted data point without adversely affecting the overall model performance. §.§ Generated Texts During the unlearning process, the model repeatedly generated certain phrases. Below are the insights from the generated texts: * The model frequently generated the phrase "Dave is developing a custom guitar.", which was not present in the dataset, indicating a fallback behavior. * The consistent generation of similar phrases suggests a limitation in the diversity of responses post-unlearning, highlighting the need for further refinement. § DAVE DATASET The following table contains the sentences about the fictional characters used in our experiments.
http://arxiv.org/abs/2406.09350v1
20240613173459
Quantum statistics in the minimal scenario
[ "Victor Barizien", "Jean-Daniel Bancal" ]
quant-ph
[ "quant-ph" ]
Université Paris Saclay, CEA, CNRS, Institut de physique théorique, 91191 Gif-sur-Yvette, France Université Paris Saclay, CEA, CNRS, Institut de physique théorique, 91191 Gif-sur-Yvette, France § ABSTRACT In any given experimental scenario, the rules of quantum theory provide statistical distributions that the observed outcomes are expected to follow. The set formed by all these distributions contains the imprint of quantum theory, capturing some of its core properties. So far, only partial descriptions have been known for this set, even in the simplest scenarios. Here, we obtain the analytical description of a complete set of quantum statistics in terms of extremal points. This is made possible by finding all bipartite quantum states and pairs of binary measurements which can be self-tested, i.e. identified from statistics only. Our description provides a direct insight into the properties and limitations of quantum theory. These are not expressed in terms of Hilbert spaces, but rather directly in terms of measurement observation statistics. Quantum statistics in the minimal scenario Jean-Daniel Bancal June 17, 2024 ========================================== Quantum theory is surprising in many regards, one of which being its intrinsic inability to predict the exact results of experiments, such as the outcome that is going to be observed when a measurement is performed on a physical system. Indeed, quantum physics only foretells the statistical distribution of these outcomes and is, as such, a fundamentally probabilistic theory. Whereas one might expect this to be a limitation of the theory, it turns out that the probabilistic predictions of quantum theory often exceed classical and deterministic ones, leading to new possibilities. The puzzle that a probabilistic theory could somehow be more powerful than a deterministic one has stimulated deep questions <cit.>. Today, the unpredictability of quantum theory is considered a resource <cit.> and in light of this new perspective, a natural question arises: what are the fundamental limits of quantum theory's probabilistic predictions? Addressing this question requires distinguishing probabilistic predictions that can admit a quantum explanation from those which don't. Interestingly, because quantum statistics can violate Bell inequalities <cit.>, determining whether a given set of probability distributions is compatible with quantum theory turns out to be a highly nontrivial task. In fact, this problem amounts to inverting Born's rule <cit.>, one of quantum theory's cornerstones. This rule expresses in simple terms the probability assigned by quantum theory to the potential outcomes of a measurement as a function of a system's realization – including its state and measurement operators. While the statistics produced by a quantum realization is unique, a set of quantum statistics can admit many quantum realizations simultaneously, rendering their characterization particularly challenging. Recently, several efforts have highlighted the wide-ranging consequences of determining the quantum set – the set of quantum statistical predictions. Beyond allowing to test whether experimental observations admit a quantum description or not, it was suggested that by touching upon the limits of quantum theory itself this description would give access to fundamental principles satisfied by the theory. 
Similarly to the constancy of the speed of light in relativity, such principles could open the way for a fully principle-based formulation of quantum theory <cit.>. Several information principle candidates have been proposed, such as no-signaling, information causality, macroscopic locality or local orthogonality <cit.>. So far, none of these have succeeded in reproducing quantum predictions. Thus, although being satisfied by the theory, the principles identified until now fall short in providing a physical explanation for why quantum correlations are limited the way they are and the search for such a quantum principle remains open. Knowledge of the quantum set also has direct implications for quantum applications. Indeed, by inverting Born's rule without any a priori on the underlying Hilbert space or on the description of the quantum system at hand, this approach provides a way of accomplishing tasks that is fundamentally trustful, in the sense that it is independent of the precise modeling of the physical devices at hand <cit.>. This device-independent approach has been shown to present an interest in entanglement detection and quantification <cit.> and is now a standard framework in numerous tasks. The security of adversarial protocols particularly benefits from assessments that are insensitive to the implementations details <cit.>. Since device-independent information processing analyses rely on observed statistics, they depend directly on the characterization of the quantum set. More generally, it has been shown that points in the quantum set are pertinent to a wide range of topics, including the study of correlations in many-body systems <cit.> or of quantum computing advantage <cit.>. Determining the limits of this set is thus crucial to identify new possibilities and limitations in quantum information science. Considering this problem already in the 1980s, Tsirelson was the first to obtain bounds on the quantum set <cit.>. Several results followed, notably from Landau and Masanes <cit.>, until a major progress was achieved by Navascués, Pironio and Acín (NPA) in the form of a hierarchy of semidefinite programming <cit.>. This hierarchy is now a central tool of quantum information science <cit.>, with an impact reaching optimization theory <cit.>. Concretely, the hierarchy defines a family of problems of increasing complexity which approximate better and better the quantum set from the outside and guarantees convergence as the level of the hierarchy goes to infinity. At a fixed hierarchy level, this technique allows deriving necessary conditions for the quantum set <cit.>, thus excluding that some behaviors admit a quantum representation. However, since the NPA technique can generally not guarantee that specific statistics are quantum, its implications remain elusive on the boundary of the quantum set. Recently, fresh insight was gained on the quantum set by analytical studies which showed that it admits flat nonlocal boundaries <cit.>, as well as pointy nonlocal extremal points <cit.>. New curved regions of the set's boundaries were also identified analytically <cit.> and several conjectures were formulated on the boundary of the quantum set <cit.>. In this work, we consider the question of identifying the limits of the quantum set from the perspective of self-testing <cit.>. 
Namely, a set of statistics, or behavior, is said to self-test a quantum realization with state ρ and measurements {M} iff it can only be obtained through Born's rule by quantum realizations (ρ',{M'}) which are related to (ρ,{M}) by local isometries <cit.>. Self-testing is a powerful tool to analyze quantum behaviors: it has played an important role in solving Connes' embedding problem, a major conjecture of operator algebra <cit.>, and is a central part of several quantum protocols such as the certification of quantum devices <cit.> or delegated quantum computing <cit.>. So far, numerous families of states have been shown to be self-testable through some of their behaviors <cit.>. However, only the statistics obtained when measuring a maximally entangled state in the minimal scenario, where two parties are each equipped with two binary measurements, have been fully characterized by self-testing <cit.>. These behaviors correspond to the boundary of the quantum set in this scenario with vanishing marginals, which was described earlier by Tsirelson, Landau and Masanes, see <cit.>. Here, we obtain all the self-tests in this scenario. This reveals previously unknown boundaries of the quantum set. In turn, it allows us to identify all extremal points and their corresponding quantum realizations, thus providing a complete description of the quantum set in the minimal scenario. We derive our results in the bipartite setting depicted in <ref>, in which two users, Alice and Bob, each have access to a shared quantum state ρ_AB∈ L(ℋ_A⊗ℋ_B), where ℋ_A/B denotes their respective Hilbert space[Here, L(ℋ) refers to the space of linear operators on the Hilbert space ℋ.]. In the scenario we consider, each party can perform one of two possible measurements, denoted by x,y=0,1, each with two possible outcomes, denoted a,b=±1, and described by the POVM elements P_a,x^A, P_b,y^B respectively. To each POVM, one can associate hermitian operators, A_x∈ L(ℋ_A) and B_y∈ L(ℋ_B), with eigenvalues in [-1,1], defined as A_x = P_1,x^A-P_-1,x^A and B_y = P_1,y^B-P_-1,y^B. In this setting, a behavior is fully determined according to Born's rule by the eight real parameters ⟨A_x⟩ = tr(ρ_AB A_x⊗𝕀), ⟨B_y⟩ = tr(ρ_AB 𝕀⊗B_y), ⟨A_x B_y⟩ = tr(ρ_AB A_x⊗B_y), which encode a vector, or point, P ∈ℝ^8. The set of all such vectors P forms the quantum set 𝒬 of interest. It should be stressed that all states and measurements in arbitrary Hilbert spaces ℋ_A and ℋ_B, including infinite dimensional Hilbert spaces, can be considered here. Hence, this set encodes all statistics that can be achieved in this setting within the framework of quantum theory. The fact that the Hilbert space dimension is unbounded allows simplifying the analysis of 𝒬 in two ways. First, it ensures that the quantum set is convex and therefore that it is fully described as the convex hull of its extremal points: 𝒬= Conv(Ext(𝒬)) <cit.>. We can thus focus on the description of these extremal points, which are known to be infinitely many. Second, it ensures by Naimark's dilation theorem that points of 𝒬 can be realized in terms of projective measurements acting on pure states. The description of extremal points can then be further simplified by Jordan's lemma, which guarantees that they can be realized on Hilbert spaces of dimension two whenever the parties hold two binary measurements, i.e. Ext(𝒬)⊂𝒬_2.
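As a concrete illustration of Born's rule in this scenario, the following numpy sketch computes the eight parameters ⟨A_x⟩, ⟨B_y⟩, ⟨A_xB_y⟩ for a two-qubit realization with real observables cos(a)σ_z + sin(a)σ_x (a convenient special case, not the general setting of the theorem); with the maximally entangled state and the standard CHSH angles it recovers the Tsirelson value 2√2.

```python
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def obs(angle: float) -> np.ndarray:
    """Real qubit observable cos(angle)*sigma_z + sin(angle)*sigma_x."""
    return np.cos(angle) * sz + np.sin(angle) * sx

def behavior(theta: float, a: list, b: list):
    """Eight Born-rule expectation values for |phi> = cos(theta)|00> + sin(theta)|11>."""
    psi = np.cos(theta) * np.kron([1.0, 0.0], [1.0, 0.0]) + np.sin(theta) * np.kron([0.0, 1.0], [0.0, 1.0])
    rho = np.outer(psi, psi)
    A, B = [obs(x) for x in a], [obs(y) for y in b]
    mA = [np.trace(rho @ np.kron(Ax, I2)) for Ax in A]
    mB = [np.trace(rho @ np.kron(I2, By)) for By in B]
    c = [[np.trace(rho @ np.kron(Ax, By)) for By in B] for Ax in A]
    return mA, mB, c

# Maximally entangled state with CHSH-optimal angles: c00 + c01 + c10 - c11 = 2*sqrt(2)
mA, mB, c = behavior(np.pi / 4, [0.0, np.pi / 2], [np.pi / 4, -np.pi / 4])
print(c[0][0] + c[0][1] + c[1][0] - c[1][1])
```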
A recent result by Mikos-Nuszkiewicz and Kaniewski <cit.> further guarantees that such realizations can be parametrized in the simple real form |ϕ_θ⟩ =cosθ|00⟩+sinθ|11⟩, A_x =cos a_x σ_z + sin a_x σ_x, B_y =cos b_y σ_z + sin b_y σ_x. Considering the symmetries of the set 𝒬 allows us to finally restrict our attention to parameters in the range θ∈ [0,π), 0 ≤ a_0 ≤ b_0 ≤ b_1 < π, a_0 ≤ a_1 <π, see <ref> for more details on these restriction steps. Our first result concerns the set 𝒬_2 of statistics that can be obtained by measuring quantum systems of dimension 2. Due to the dimension restriction, this set is non-convex <cit.>, which makes it hard to characterize. Defining the correlators [ A_x^ B_y] = ⟨ A_x B_y ⟩+⟨ B_y ⟩/ 1+⟨ A_x ⟩ for all ∈{± 1}, x, y∈{0,1}, we have the following necessary condition for pure realizations in 𝒬_2. [Necessary condition for pure projective realizations of local dimension 2] Any quantum point obtained from projective measurements on a pure entangled two-qubit state verifies [ -π≤-asin [ A_0^ B_0] +asin [ A_1^ B_0] +asin [ A_0^ B_1]+asin[ A_1^ B_1]≤π; -π≤asin [ A_0^ B_0] -asin [ A_1^ B_0] +asin [ A_0^ B_1]+asin[ A_1^ B_1] ≤π; -π≤asin [ A_0^ B_0] +asin [ A_1^ B_0] -asin [ A_0^ B_1]+asin[ A_1^ B_1]≤π; -π≤asin [ A_0^ B_0] +asin [ A_1^ B_0] +asin [ A_0^ B_1]-asin[ A_1^ B_1]≤π ] for all ,∈{±1}. The main idea of the proof consists in considering the quantum state |ϕ_θ⟩ as a non-linear map T_θ that brings an arbitrary pure entangled qubit behavior in relation with 4 distributions issued from the maximally entangled state, see <ref>. Constraints on maximally entangled statistics derived by Masanes <cit.> then imply the necessary constraints on the original statistics issued from a non-maximally entangled state that are given here, see <ref> for more details. Note that the correlators [ A_x^ B_y] are well-defined, except in the case where some of Alice's marginals are equal to ± 1. This can only happen when θ≡ 0 [π/2], which gives rise to local points. Note also that the labeling of the subsystems is arbitrary here, so additional necessary conditions can be obtained by exchanging Alice and Bob. One easily verifies that when the marginal of both parties are zero, the inequalities above reduce to the 8 Masanes inequalities. These inequalities are convex and identify the boundary of the quantum set in this restricted CHSH scenario <cit.>. In the general case, however, the inequalities <ref> do not describe a convex set. Therefore, they cannot describe the set 𝒬. Indeed, measurements on a high-dimensional state or on a mixed two-qubit state can result in quantum points that do not satisfy the above inequalities. It has been shown that the saturation of the Masanes inequalities can only be achieved by self-testing behaviors, which identify a singlet state and real unitary measurements <cit.>. It is thus natural to ask whether the points saturating some of the above conditions could lead to self-testing as well. It turns out that saturating a single inequality is not sufficient. Similarly, one can find non-extremal points saturating two inequalities. However, whenever three of the conditions are met for different values of (s,t), the fourth one is also. Indeed, out of the above 32 conditions, only 24 are linearly independent. In this special case, we show that the resulting statistics self-test a qubit realization. 
Any nonlocal behavior which satisfies 0.9asin [ A_0^ B_0] +asin [ A_1^ B_0] - asin [ A_0^ B_1]+asin[ A_1^ B_1] =π for all four (,) ∈{± 1}^2, self-tests a quantum realization of local dimension 2. In particular, all realizations in the range <ref> for which the measurement angles fully alternates, i.e. verify 0 ≤ [a_0^]_π≤ b_0 ≤ [a_1^]_π≤ b_1 < π satisfy, up to relabelling, the condition of <ref>. As such, those realizations are self-tested by their associated quantum point, which is therefore extremal in 𝒬 <cit.>. Note that conversely, satisfying <ref> for all (,)∈{± 1}^2 self-tests realizations that fully alternate (up to relabeling). The proof of <ref>, given in <ref>, strongly relies on the steering transformation. This time, the nonlinear transformation is used to map self-testing statements on a maximally entangled state into a self-testing statement on a partially entangled one. This is made possible by strong geometrical relations between the vectors A_x^|ϕ^+⟩ and B_y |ϕ^+⟩ issued from the steered realization when <ref> holds. Note that the condition of nonlocal behavior implies that the marginals cannot take the value ±1, which guarantees that the correlators [ A_x^ B_y] are well-defined. Indeed, any vector with a single marginal probability equal to ± 1 would have more than two zeros of probabilities on the same line or column of the probability table, ensuring that the quantum point is local <cit.>. <ref> identifies many extremal points of the quantum set and provides self-testings for all qubit partially-entangled states with a large family of measurement settings. However, it does not say whether points which do not satisfy the equality case are extremal in 𝒬. In order to address this, we consider the complementary question of showing that realizations which do not satisfy the full alternation condition <ref> lead to behaviors that are non-extremal. As a first step in this direction, we prove that in the absence of full alternation, the statistics are non-exposed, i.e. that they do not uniquely reach the Tsirelson bound of any Bell expression. Note that since non-exposed points include both decomposable points and some extremal points, this result in itself does not allow concluding yet. Consider a quantum realization with state |ϕ_θ⟩ and measurements satisfying <ref>. If the following series of inequalities does not hold: ∀, , 0 ≤ [a_0^]_π < b_0 < [a_1^]_π < b_1 < π, where a^_x = atan(tan(a_x/2)tan(θ)^), and [α]_π:≡α[π], then the corresponding point is non-exposed in 𝒬. The idea that we employ in <ref> to prove that these points are not exposed is to consider all possible Bell expressions, and to show for each of them that the value achieved by the considered quantum point can be also achieved by a distinct quantum point. In order to prove this, we first find necessary conditions that Bell expressions maximized by the considered statistics must satisfy. This allows reducing the dimension of the search space for Bell expressions from 8 to 3. For all remaining Bell expressions, we then provide a local point which gives the same Bell value as the considered quantum point. Equipped with Lemmas 1 and 2, let us consider a qubit realization of the form <ref>. If the measurement angles satisfy the alternating condition <ref>, then <ref> ensures that the quantum realization is self-tested and therefore the point is extremal. 
This shows that the alternating property <ref> is constitutive of these points: no point obtained with non-alternating settings can be obtained with other, alternating settings. Using this property together with <ref> ensures that all realizations that do not verify <ref> correspond to points on the interior of the set of non-exposed points of 𝒬. However, Straszewicz's theorem states that all extremal points are limits of exposed points <cit.>. This ensures that non-alternating realizations lead to non-extremal points of 𝒬. This can be summarized in the following theorem. [Characterization of Ext(𝒬)] * A nonlocal point in the CHSH scenario is extremal in 𝒬 iff ∀ 𝗎∈{± 1}^2, ∑_x,yϵ_xy [ A_x^𝗎_x B_y] = π for some ϵ_xy∈{± 1} such that ∏_x,yϵ_xy = -1. * A quantum realization leads to a nonlocal extremal point iff it can be mapped by local channels and relabelings to a quantum realization on the entangled state |ϕ_θ⟩ with measurements satisfying <ref> s.t. ∀ (,) ∈{± 1}^2, 0 ≤ [a_0^]_π≤ b_0 ≤ [a_1^]_π≤ b_1 < π. Note that the full alternance condition is equivalent whether the steering transformation is applied on Alice's measurements (as done here) or on Bob's ones, see <ref>. Therefore, exchanging the role of Alice and Bob in conditions <ref>, with modified angles b̃_y^$̱ and correlators[A_x B̃_y^]̱, identifies the same extremal point. Together with the 8 deterministic strategies <cit.>, which are local extremal points of𝒬, <ref> gives a complete characterization of the extremal point of the quantum set in the minimal scenario. It also describes the quantum realizations behind them in terms of measurements on two-qubit states, as illustrated in <ref>. This description implies that all nonlocal extremal points in this scenario self-test a two-qubit realization. Note that the analytical description of the quantum set given by <ref> provides new insight on its geometry. Indeed, it implies that the 8-dimensional convex set is generated by a 5-dimensional sub-manifold of extremal points. Indeed, as discussed above, whenever Alice's marginals are not zero, three out of the four equations above are linearly independent. Furthermore, by <ref>, extremal points admitting a probability equal to zero verify[A_x^ B_y]=±1for some,x,yand are thus non-exposed. We thank Valerio Scarani for feedback on the manuscript. § SUPPLEMENTAL MATERIAL In this supplemental material, we explain the technical details underlying the results presented in the paper. In particular, <ref> focuses on the parametrization of the extremal points in the CHSH scenario, <ref> on the steering transformationT_θ, while <ref> provide detailed proofs of Lemma 1 and 2 respectively. § SUFFICIENT PARAMETRIZATION FOR THE EXTREMAL POINTS OF THE QUANTUM SET IN THE CHSH SCENARIO Quantum correlations are defined as the statistics that can be observed upon performing arbitrary measurements on an arbitrary quantum state. As such, describing the quantum set exhaustively requires in principle considering all possible realizations (states and measurements) in every Hilbert space. Thankfully, it is known that several realizations give rise to identical statistics, and therefore a subset of all quantum realizations is sufficient to reproduce all quantum correlations in a given Bell scenario. In turn, realizations can be even further restricted when considering extremal quantum correlations only. In this appendix, we discuss these restrictions in detail. 
This leads us to the explicit parametrization given in <ref>, which is sufficient to reproduce all extremal points in the CHSH scenario. This appendix is organised as follows. First, we discuss up to <ref> the way that restrictions can be made on measurement operators without affecting the set of generated statistics. Then, we incorporate restrictions on the state as well as those due to discrete symmetries of the Bell scenario. While we focus on the case of the CHSH scenario, several of the results described here also apply to general Bell scenarios. Also, most of the steps described in this appendix were already discussed in earlier works, see e.g. <cit.>, however some of our proofs are more concise. We include a proof of every restriction step for completeness. A binary measurement– a quantum measurement with two possible outcomes – is described by an operator A∈ L(ℋ) such that A^† = A and -𝕀≼ A ≼𝕀. Furthermore, the measurement is projective iff A^2 = 𝕀, in which case the only possible eigenvalues of A are ± 1 and A is unitary, i.e. A^† A=A A^†=𝕀. Quantum correlations in the CHSH scenario are the correlation vectors P ∈ℝ^8 with components A_x =(ρ_AB A_x⊗ 𝕀 ), B_y =(ρ_AB 𝕀 ⊗ B_y), A_x B_y =(ρ_AB A_x⊗ B_y), where ρ_AB∈ L(ℋ_A⊗ℋ_B) is a quantum state (satisfying ρ_AB≽ 0, (ρ_AB)=1) and A_x∈ L(ℋ_A), B_y∈ L(ℋ_B) with x,y∈0,1 are binary measurement operators. Each such vector encodes the statistics corresponding to measuring the quantum state ρ_AB with the corresponding two pairs of binary measurements. It is convenient to arrange the components of P in a table with elements indexed by x,y∈{-1,0,1}. Namely, defining A_-1 = 𝕀, B_-1 = 𝕀, we can write { P}_xy = (ρ_AB(A_x⊗ B_y)), where the first element of this table, { P}_-1,-1=(ρ_AB)=1, is a constant. This definition is equivalent to the definition of quantum correlations in terms of full probabilities, which identifies distributions of the form p(|̱xy)=(ρ_AB(P_|x^A ⊗ P_|̱y^B)), where ρ_AB is the measured quantum state and P_|x^A, P_|̱y^B are POVM elements on subsystems A and B, corresponding to the measurement choices x and y and outcomes and $̱. Indeed, any binary outcome POVMP_|tverifyingP_|t≽ 0andP_0|t+P_1|t=𝕀is in one-to-one correspondence with a binary measurement operatorX_t = P_0|t-P_1|tverifying-𝕀≼ X_t ≼𝕀. Therefore, it is possible to express the statistics in terms of the components of the correlation vector Pas p(|̱xy)=1 + A_x + B_y + A_x B_y/4. Note that the measurement operatorX_tis unitary iff the corresponding POVM is a projector. [Naimark's dilation theorem for binary measurement operators] For any binary measurement operator A acting on a Hilbert space ℋ, there exists a Hilbert space ℋ, an isometry V:ℋ→ℋ and a unitary binary measurement operator A s.t. A = V^†A V. Consider the tensor product Hilbert space ℋ = ℋ⊗ℂ^2 and introduce the following isometry and unitary binary measurement operator: V = ∑_i=0^1 √(1+(-1)^i A/2)⊗|i⟩_ℂ^2, A = 𝕀_ℋ⊗σ_z. On can easily verify that the first term is well-defined as both 1 ± A/2 are positive operators, and V^† V = 𝕀_ℋ so V is an isometry, that A^2 = 𝕀_ℋ and that property <ref> is satisfied. It is well known that Naimark's dilation theorem also applies to measurements involving an arbitrary number of measurements outcomes k≥ 1, see e.g. <cit.>. [Naimark's dilation for two measurements] For any two binary measurement described by operators A_0, A_1 acting on a Hilbert space ℋ, there exists a Hilbert space ℋ, an isometry V:ℋ→ℋ and unitary binary measurement operators A_x s.t. 
∀ x∈{0,1}, A_x = V^†A_x V. We can apply Naimark's dilation for the operator A_0. Therefore, there exists a Hilbert space ℋ_1, an isometry V_1:ℋ→ℋ_1 and a unitary operator A_0' s.t. A_0 = V_1^† A_0' V_1. Then one can define a measurement operator A_1'= V_1 A_1 V_1^† on ℋ_1, verifying -𝕀≼ A_1' ≼𝕀. One can thus apply Naimark's dilation to A_1'. This guarantees the existence of a Hilbert space ℋ_2, an isometry V_2:ℋ_1→ℋ_2 and a unitary operator A_1 s.t. A_1' = V_2^†A_1 V_2. Let's define A_0 = V_2 A_0' V_2^† + V_2 V_2^† - 𝕀_ℋ_2. Since V_2^† V_2 = 𝕀_ℋ_1, we have A_0^2 = 𝕀_ℋ_2, so A_0 is a unitary operator. Now, we can define V:=V_2∘ V_1, verifying ∀ x∈{0,1}, A_x = V^†A_x V where A_x describe projective measurements. Naimark's dilation can be further extended to an arbitrary family of m≥ 1 measurements by repeating the above construction iteratively, see e.g. <cit.>. [Naimark's dilation for the CHSH scenario] Any quantum correlation in the CHSH scenario can be realized by projective measurements, i.e. for all P ∈𝒬, there exists a quantum state ρ_AB and unitary binary measurement operators (A_x, B_y) such that P = {(ρ_AB (A_x ⊗B_y))}_xy. Consider a vector P ∈𝒬. It admits a realization involving a shared state σ_AB and binary measurement operators (A_x, B_y). Using <ref>, one can define isometries V_A(B) : ℋ_A(B)→ℋ_A(B) such that for all x,y, A_x = V_A^†A_x V_A and B_y = V_B^†B_y V_B with A_x and B_y unitary binary measurement operators. Then the coefficients of P are given by { P}_xy = (σ_AB(A_x⊗ B_y)) = (σ_AB(V_A^†A_x V_A ⊗ V_B^†B_y V_B)) = ((V_A ⊗ V_B)σ_AB(V_A^†⊗ V_B^†)(A_x ⊗B_y)) = (ρ_AB (A_x ⊗B_y)), where we defined the quantum state ρ_AB:=(V_A ⊗ V_B)σ_AB(V_A ⊗ V_B)^† on the Hilbert space ℋ_A⊗ℋ_B. Due to the previous extension, <ref> can be expanded to an arbitrary Bell scenario, in which n parties each have a m_i measurement settings and each measurement has k_i,j_i possible outcomes, where i=1,…,n and j_i=1,…,m_i. At this stage, one could also apply a dilation to the state ρ_AB to show that any point P ∈𝒬 can be realized by measuring a pure state with projective measurements. [Jordan's lemma] For any pair of projective binary measurements A_0, A_1 acting on a Hilbert space ℋ, there exists an orthogonal basis of ℋ such that A_0 and A_1 are block-diagonal with blocks of size at most two. Furthermore, in this basis, A_0 and A_1 are real operators. Consider the hermitian operator R = A_0 + A_1. For any eigenvector |ψ⟩ of R of eigenvalue λ∈ℝ, one can consider the subspace V_ψ = Vect⟨|ψ⟩, A_0 |ψ⟩⟩. Then, one of the two following cases happens: Case 1 :V_ψ = 1. Then there exist μ∈ℂ s.t. A_0 |ψ⟩ = μ|ψ⟩. As A_0 is hermitian, μ has to be real. Then, A_1 |ψ⟩ = R|ψ⟩ - A_0 |ψ⟩=(λ-μ) |ψ⟩. So |ψ⟩ is an eigenvector of both A_0 and A_1 of real eigenvalue (block of size 1). Case 2 :V_ψ = 2. Then, since A_0^2=𝕀, V_ψ is stabilized by A_0 and in this subspace A_0|_V_ψ = σ_x. Furthermore, A_1 |ψ⟩ = λ|ψ⟩ - A_0 |ψ⟩ belongs to V_ψ and since A_1^2 = 𝕀, we have that A_1(A_0|ψ⟩) = λ A_1 |ψ⟩ - |ψ⟩ also belongs to V_ψ. Finally, V_ψ is stabilized by both A_0 and A_1 (block of size 2) and all have real coefficients in the sub-basis. The rest of the proof goes on finite recursion on the eigenvectors of R. At each step, one has to find an eigenvector |ψ'⟩ orthogonal to all previous subspaces, which is always possible since R is hermitian. 
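Before specializing to the CHSH scenario, the key step of the proof above — that V_ψ = span{|ψ⟩, A_0|ψ⟩} is invariant under both measurements and has dimension at most two — can also be checked numerically. The sketch below is our own illustration (assuming Python with numpy; the helper names are ours) and is not part of the argument.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6  # local dimension

def random_projective_observable(dim, rng):
    """Random real symmetric observable with eigenvalues +/-1 (A^2 = identity)."""
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q @ np.diag(rng.choice([-1.0, 1.0], size=dim)) @ q.T

A0 = random_projective_observable(d, rng)
A1 = random_projective_observable(d, rng)

# Take an eigenvector psi of R = A0 + A1 and build V_psi = span{psi, A0 psi}.
_, vecs = np.linalg.eigh(A0 + A1)
psi = vecs[:, 0]
w = A0 @ psi - (psi @ (A0 @ psi)) * psi          # Gram-Schmidt against psi
basis = [psi] if np.linalg.norm(w) < 1e-10 else [psi, w / np.linalg.norm(w)]
P = sum(np.outer(v, v) for v in basis)           # projector onto V_psi

# Invariance: (I - P) A_x P should vanish for both measurements.
for name, A in (("A0", A0), ("A1", A1)):
    print(name, "leakage out of V_psi:", np.linalg.norm((np.eye(d) - P) @ A @ P))
print("block dimension:", len(basis))
```

The printed leakage norms should sit at the level of machine precision, in line with the invariance argument used in the proof.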
[Jordan's lemma for the CHSH scenario] Every quantum correlations in the CHSH scenario can be obtained as a convex combination of realizations involving local Hilbert spaces of dimension 2 and real projective measurements. For P∈𝒬, we consider a projective realization thanks to <ref>. Furthermore, one can assume, up to trivial dilation, that the dimensions of ℋ_A(B) are even. We can then apply Jordan's lemma for each party and assume that all block are of size 2. Thus, there exists local projections Π_A^i, Π_B^j on ℋ_A(B) respectively such that A_x^(i) := Π_A^i A_x Π_A^i are (2,2)-real matrices and A_x = ⊕_i A_x^(i), likewise for Bob. As such { P}_xy = (ρ_AB(A_x ⊗ B_y)) = (ρ_AB(⊕_i A_x^(i)⊗⊕_j B_y^j)) = ∑_ij((Π_A^i ⊗Π_B^j) ρ_AB (Π_A^i ⊗Π_B^j)(A_x^(i)⊗ B_y^j)) = ∑_ij p^ij{ P_ij}_xy, where we defined p^ij:=((Π_A^i ⊗Π_B^j) ρ_AB (Π_A^i ⊗Π_B^j)), { P_ij}_xy=(ρ_AB^ij(A_x^(i)⊗ B_y^j)) and ρ_AB^ij:=(Π_A^i ⊗Π_B^j) ρ_AB (Π_A^i ⊗Π_B^j)/p^ij, which verify (ρ_AB^ij)=1, ρ_AB^ij≽ 0, p^ij≥ 0 and ∑_ij p^ij=1. <ref> generalizes straightforwardly to Bell scenarios involving n parties with two binary measurements per party. [Jordan's lemma for extremal correlations in the CHSH scenario] Every extremal quantum correlations in the CHSH scenario can be realized by measuring a single two-qubit state with real projective measurements. For an extremal point P ∈Ext(𝒬), we can apply the previous result to decompose P as a convex combination of correlations P_ij obtained by measuring two qubit states with real unitary operators. The extremality of P ensures that for all i,j, P_ij = P and as such any qubit realization of a given P_ij provides a satisfying realization for P. [Extremal points realization in the CHSH scenario] Every extremal quantum correlations in the CHSH scenario can be realized by measuring a real, pure, two-qubit state with real projective measurements. For P ∈Ext(𝒬), one can apply <ref> to obtain a realization of the form (ρ_AB, A_x, B_y) with A_x, B_y real unitary operators and local Hilbert spaces of dimension 2. If ρ_AB is not equal to its complex conjugate ρ_AB, then one can also realize P by measuring the real state ρ_AB = (ρ_AB+ρ_AB)/2, because all measurement operators are real. Therefore, we can assume that ρ_AB is real. A real density matrix ρ_AB can always be written as ρ_AB = ∑_i λ_i |ψ_i⟩⟨| with λ_i ∈ℝ_+, ∑_i λ_i = 1, and where |ψ_i⟩ are real two-qubit states. Therefore, one has P = ∑_i λ_i P_i, where P_i is obtained by measuring |ψ_i⟩⟨$| on(A_x, B_y). The extremality of Pensures that for alli, P_i = P, and therefore the realization for anyiis a valid realization for P. An alternative proof of <ref> was recently given in <cit.>. For the study of extremal point only, one does not necessarily need Naimark's dilation to impose that the measurement operators are unitary (see <cit.> for an alternative argument). Corollaries <ref>, <ref> and <ref> can be directly extended to Bell scenarios in which n parties each have two binary outcome measurements, proving that in those scenarios all extremal quantum correlations can be achieved by measuring real n-qubit states with real unitary measurements. [Extremality under relabelings] Consider a convex set 𝒦⊂ℝ^n stabilized by a linear involution S:ℝ^n →ℝ^n, i.e. such that S^2 = 1 and S(𝒦) = 𝒦. Then, any point P ∈𝒦 is extremal iff S( P) is extremal. Suppose that P is extremal. For any convex decomposition S( P) = λ P_1 + (1-λ) P_2 of S( P), we have P = λ S( P_1) + (1-λ) S( P_2), because S is linear and is an involution. 
By extremality of P, we have P = S( P_1) = S( P_2), and as such S( P) = P_1 = P_2 so S( P) is extremal. Conversely, one can use that P = S(S( P)). The quantum set is invariant under the following involutions: relabeling of parties: S_part : A_x ⟷ B_x, relabeling of inputs: S_in : A_0 ⟷ A_1, relabeling of outputs: S_out : A_0 ⟶ -A_0. As such, the study of whether a quantum correlation P is extremal can be reduced to the study of the extremality of any correlations of the form g· P where g is an arbitrary element of the Bell group G generated by S_part,S_in, S_out (a group of order 32). [Sufficient parametrization of extremal points in the CHSH scenario] The study of the extremal points of 𝒬 can be reduced to the study of the extremality of correlations admitting a realization of the form: state: |ϕ_θ⟩ = cos(θ)|00⟩ + sin(θ)|11⟩, measurements: A_x = cos(a_x) σ_z + sin(a_x) σ_x, B_y = cos(b_y) σ_z + sin(b_y) σ_x. where the real parameters θ, a_x, b_y can be reduced, upon relabeling symmetries, to the range θ, a_x, b_y ∈ [0,π), a_0 ≤ a_1, b_0 ≤ b_1, a_0 ≤ b_0. Thanks to <ref>, all extremal points of 𝒬 admit a realization of the form <ref> for arbitrary real parameters θ, a_x, b_y. The periodicity of the corresponding correlations allows reducing those to the interval θ∈ [0,π) and a_x, b_y ∈ [0,2π). Using <ref>, one can study the nature of such correlations by applying arbitrary symmetries of 𝒬. In particular, one can use relabeling of outcomes to ensure that a_x, b_y ∈ [0,π), relabeling of inputs to have a_0 ≤ a_1 and b_0 ≤ b_1 and relabeling of parties for a_0 ≤ b_0. § STEERING OF QUANTUM REALIZATIONS In this appendix, we describe in more details the steering map introduced in <ref> of the main text. In the first section, we consider its action on vectors ofℂ^2. In the second section, we discuss the action of the map on qubit measurements and relate the statistics before and after the transformation. In both cases, our analysis applies to arbitrary qubit states and measurement settings (i.e. not necessarily real ones). §.§ Steering of states The non-linear steering mapT_θ:ℂ^2→ℂ^2is defined by the action T_θ:|u⟩↦√(2) (⟨ϕ_θ|u⟩)|ϕ^+⟩/||⟨ϕ_θ|u⟩|| on state|u⟩∈ℋ_A=ℂ^2. In particular, in the canonical parametrization |u⟩=cos(a/2) |0⟩ + e^iψsin(a/2) |1⟩, the steering transformation returns T_θ(|u⟩) = cos(θ) cos(a/2) |0⟩ + e^iψsin(θ) sin(a/2) |1⟩/√(cos(θ)^2 cos(a/2)^2 + sin(θ)^2 sin(a/2)^2) = ϵ_θ(cos(a/2) |0⟩ + e^iψsin(a/2) |1⟩) whereϵ_θ := sgn(cos(θ))andαis the transformed state angle. Note that the denominator can only cancel ifθ∈{0, π/2}anda=2θ±π. Whenever this happens, we defineT_θ(|u⟩)to be the null vector (since this only happens when|ϕ_θ⟩is a product state, we won't be interested in this case). The angle of the image state in the canonical basis is given by α:= 2 atan(tan(a/2)tan(θ)) ∈ [-π, π]. Note that in all cases the image vectors satisfy⟨ϕ_θ | u ⟩ = √(2) ||⟨ϕ_θ|u⟩|| ⟨ϕ^+ | T_θ(|u⟩) ⟩, and as such ∀ B ∈ L(ℋ_B), ⟨ϕ_θ|(|u⟩⟨⊗|B )|ϕ_θ⟩ = 2 ||⟨ϕ_θ|u⟩||^2⟨ϕ^+|(|v⟩⟨⊗|B ) |ϕ^+⟩, where for clarity we denoted|v⟩=T_θ(|u⟩). Finally, one can notice that the inverse of the steering function is easy to compute. Indeed, we have: T_π/2 - θ (T_θ(|u⟩)) = |u⟩. §.§ Steering of correlations and realizations When considering binary non-degenerate projective measurements acting on a pure state with local Hilbert spaces of dimension 2, the measured state can always be written as|ϕ_θ⟩in the Schmidt basis. 
In this basis, the projective measurements can be written as A_x = cos(a_x) σ_z + sin(a_x) (cos(α_x) σ_x + sin(α_x) σ_y), B_y = cos(b_y) σ_z + sin(b_y) (cos(β_y) σ_x + sin(β_y) σ_y), and can be decomposed as the difference of two orthogonal projectors: A_x = |u_+,x⟩⟨-||u_-,x⟩⟨,| B_y = |w_+,y⟩⟨-||w_-,y⟩⟨,| where⟨u_,x|u_',x⟩=δ_',⟨w_,̱x|w_'̱,x⟩=δ_̱̱'and|u_,x⟩, |w_,̱y⟩∈ℂ^2are in general complex vectors of the form <ref> witha∈(-π,π]. The expected probabilitiesp(|̱xy) = 1/4(1+A_x+B_y+A_x B_y)corresponding to this realization are given by p(|̱xy) = ⟨ϕ_θ|(|u_,x⟩⟨⊗||w_,̱y⟩⟨|) |ϕ_θ⟩. Applying the steering map on the eigenstates|u_,x⟩of Alice's measurements and using <ref>, we obtain: p(|̱xy) = 2p(|x)⟨ϕ^+|(|v_,x⟩⟨⊗||w_,̱y⟩⟨|) |ϕ^+⟩, where|v_,x⟩=T_θ(|u_,x⟩)for all,x. Note that the vectors|v_,x⟩are units or null but in full generality|v_1,x⟩and|v_-1,x⟩are not orthogonal anymore. We thus define a new unitary operator on Alice's Hilbert space for each vector|v_,x⟩: A^_x = (2|v_,x⟩⟨v_,x| - 𝕀). These new operators verify that for all,x,y: ϕ_θ | A_x B_y | ϕ_θ + ϕ_θ | B_y | ϕ_θ/ 1+ϕ_θ | A_x | ϕ_θ = ⟨ϕ^+|A^_x B_y |ϕ^+⟩ =: [A^_x B_y], which is well-defined as long asA_x≠±1, i.e. at least wheneverθ∉{0,π/2}. Upon non-linear transformationT_θ, the original correlation vector P, obtained with a pure and projective two qubit realization, can thus be interpreted as correlations of a quantum point Qrealized by the state|ϕ^+⟩, four measurementsA^_xfor Alice and two measurementsB_yfor Bob: P = [ ⟨ B_0⟩ ⟨ B_1⟩; ⟨ A_0⟩ ⟨ A_0 B_0⟩ ⟨ A_0 B_1⟩; ⟨ A_1⟩ ⟨ A_1 B_0⟩ ⟨ A_1 B_1⟩ ] Q = [ 0 0; 0 [A^+_0 B_0] [A^+_0 B_1]; 0 [A^-_0 B_0] [A^-_0 B_1]; 0 [A^+_1 B_0] [A^+_1 B_1]; 0 [A^-_1 B_0] [A^-_1 B_1] ] Note that all marginal terms of this new quantum point are obtained using the property that marginals correlations are0when measuring|ϕ^+⟩:⟨ϕ^+|A^_x |ϕ^+⟩ = ⟨ϕ^+| B_y |ϕ^+⟩ = 0, therefore both behaviors have the same number of degrees of freedom. The point Qadmits a realization that can be parameterized by state: |ϕ^+⟩ = 1/√(2)(|00⟩+|11⟩), measurements: A^_x = cos(a^_x) σ_z + sin(a^_x) (cos(α_x) σ_x + sin(α_x) σ_y), a^_x ∈ [0,2π), B_y = cos(b_y) σ_z + sin(b_y) (cos(β_y) σ_x + sin(β_y) σ_y), b_y ∈ [0,2π), where if one assumes thata_x ∈ [0,π)the new measurement angles are given by a^_x = 2 atan(tan(a_x/2)tan(θ)^). Note that when the initial state is the singlet state (θ=π/4), then the modified measurements are the same, i.e.a^_x = a_xfor all, x. The limitθ→ 0is well-defined and gives modified angles a_x^-1=0whena_x=0and a_x^-1=πotherwise. Furthermore, whenθ∈ [0, π/4], we have 0 ≤a_x^+ ≤ a_x ≤a_x^- ≤π, a_x ⟶ a_x^+ is a increasing function of θ, a_x ⟶ a_x^- is an decreasing function of θ. and whenθ∈ (π/4, π/2): 0 ≤a_x^- ≤ a_x ≤a_x^+ ≤π, a_x ⟶ a_x^+ is a decreasing function of θ, a_x ⟶ a_x^- is an increasing function of θ. §.§ Steering on Alice vs Bob Everything in this appendix about the steering transformation could be done on Bob's side as well. As such, it is possible to associate to each measurementB_ytwo modified measurementsB̃_y^$̱ on Bob's Hilbert space such that one can map the initial realization on a partially entangled state to a realization on a maximally entangled state. Those modified measurements would also have modified angles with respect to σ_z, following the same transformation b^_y = 2 atan(tan(b_y/2)tan(θ)^), for all ,̱y. 
The realization associated to these measurements (A_x, B̃_y^)̱ with angles a_x and b̃_y^$̱ is in general different from the one involving(Ã_x^, B_y)where the steering is applied on Alice's side: the value of the correlators[A_x B̃_y^]̱take different values than the[Ã_x^ B_y]. However, the fact that the modified measurements angles are fully alternating is unchanged by whether the transformation is applied on Alice or Bob's side. Indeed, for a fixed entanglement parameterθ, the steering transformation of the measurements angles verify the following: * It is monotonous, i.e. for all ∈{±1 }, a ≤ b ⟹ [ã^]_π≤ [b̃^]_π. * Modifying an angle twice in a row with opposite outcome sign gives back the original angle, as 2atan(tan([2atan(tan(a/2)tan(θ))]/2)tan(θ)^-1)=a. With those two properties, one can verify that ∀ a,b ∈ [0,π), [a^]_π≤ b ⟺ a ≤ [b^-]_π by application of the steering transformation. As such, the fully alternating condition when the steering is applied on Alice's measurement angles is equivalent to the full alternating condition after application of the steering condition on Bob's side: ( ∀,, 0 ≤ [a_0^]_π≤ b_0 ≤ [a_1^]_π≤ b_1 < π) ⟺( ∀,, 0 ≤ a_0 ≤ [b_0^]_π≤ a_1 ≤ [b_1^]_π < π). §.§ Proof of Proposition 1 A subset of𝒬that is very well characterized is the subspace of zero marginals distributions. In this case, the following statement holds: [Masanes Theorem, <cit.>] In the CHSH scenario, the set of quantum points with zero marginals is exactly the set of points verifying the following eight inequalities: [ -π≤-asin⟨ A_0B_0 ⟩+asin⟨ A_1B_0⟩+asin⟨ A_0B_1⟩+asin⟨ A_1B_1 ⟩≤π; -π≤asin⟨ A_0B_0⟩-asin⟨ A_1B_0⟩+asin⟨ A_0B_1⟩+asin⟨ A_1B_1 ⟩≤π; -π≤asin⟨ A_0B_0⟩+asin⟨ A_1B_0⟩-asin⟨ A_0B_1⟩+asin⟨ A_1B_1⟩≤π; -π≤asin⟨ A_0B_0⟩+asin⟨ A_1B_0⟩+asin⟨ A_0B_1⟩-asin⟨ A_1B_1⟩≤π. ] Since any point obtained by measuring a maximally entangled two qubits state|ϕ^+⟩= (|00⟩+|11⟩)/√(2)has zero marginals, it must verify the above inequalities. For any point P ∈𝒬 which admits a pure projective two qubit realization, the steering transformation allows one to consider a non-linear transformation of P granting a vector of correlation Q = {[A_x^ B_y]}_,x,y with zero marginals, in a scenario with four binary measurements for Alice and two binary measurement for Bob. Thus, for each pair (A_0^, A_1^) of Alice's new measurements, <ref> holds and grants: [ -π≤-asin [A_0^ B_0] +asin [A_1^ B_0] +asin [A_0^ B_1]+asin[A_1^ B_1]≤π; -π≤asin [A_0^ B_0] -asin [A_1^ B_0] +asin [A_0^ B_1]+asin[A_1^ B_1] ≤π; -π≤asin [A_0^ B_0] +asin [A_1^ B_0] -asin [A_0^ B_1]+asin[A_1^ B_1]≤π; -π≤asin [A_0^ B_0] +asin [A_1^ B_0] +asin [A_0^ B_1]-asin[A_1^ B_1]≤π ] for all ,∈{-1,1}, where: [A_x^ B_y] = ⟨ A_x B_y ⟩+⟨ B_y ⟩/ 1+⟨ A_x ⟩ § PROOF OF <REF> This section is organized in the following way: in the first two subsections we prove some preliminary results. In particular, we prove in <ref> that the conditions of <ref> along with the assumption that the underlying realization is pure, projective and and nondegenerate in𝒬_2is sufficient to fully identify a unique realization. The third subsection is dedicated to the proof of <ref>. To prove extremality, we consider convex decompositions into sub-correlations. We make use of the steering transformation, introduced in <ref>, to identify four underlying realizations on four different states. We then use geometrical considerations to understand how the conditions <ref> relate these different realizations to each other. 
Finally, we transpose these conclusions onto the original quantum realization and infer its properties. §.§ Preliminary Geometrical Considerations Let us recall some elements of geometry. Here, we consider normalized states|ψ_1⟩,|ψ_2⟩,|ψ_3⟩in a real Hilbert spaceℋ=ℝ^d. The angular distance between |ψ_1⟩ and |ψ_2⟩ is defined as the angle θ∈[0,π] s.t. cosθ=⟨ψ_1|ψ_2⟩. The angular distanceθis 0 iff|ψ_1⟩=|ψ_2⟩,πiff|ψ_1⟩=-|ψ_2⟩, andπ/2iff|ϕ_1⟩and|ψ_2⟩are orthogonal. The pairwise angular distance between three vectors |ψ_1⟩, |ψ_2⟩, |ψ_3⟩ satisfies the triangle inequality |θ_12-θ_23| ≤θ_13≤θ_12+θ_23, with equality only if the vectors are coplanar. Clearly, two vectors define a plane, and therefore if either pair of states are colinear (θ=0,π), then all vectors are coplanar. Furthermore, whenever θ_ij=0 for some i≠ j, then θ_ik=θ_jk (k≠ i,j) and all versions of the inequalities <ref> are verified. Similarly, whenever θ_ij=π for some i≠ j, then θ_ik=π-θ_jk (k≠ i,j) and all versions of the inequalities <ref> are verified. We can thus assume that the angular distance between every pair of states is strictly contained in the interval 0<θ_ij<π. In this case, |ψ_1⟩ and |ψ_2⟩ define a plane. In particular, there exist two orthonormal vectors |ϕ_1⟩, |ϕ_2⟩ such that |ψ_2⟩ =|ϕ_1⟩ |ψ_1⟩ =cosθ_12|ϕ_1⟩ + sinθ_12|ϕ_2⟩. Without loss of generality, the third vector can then be written |ψ_3⟩ = cosθ_23|ϕ_1⟩ + sinθ_23cosα|ϕ_2⟩ + sinθ_23sinα|ϕ_3⟩ for |ϕ_3⟩ orthogonal to |ϕ_1⟩ and |ϕ_2⟩ and an angle α∈[0,π]. The angular distance θ_13 between |ψ_1⟩ and |ψ_3⟩ then satisfies cosθ_13 = cosθ_12cosθ_23+sinθ_12sinθ_23cosα. In order to find the extremal values of θ_13, notice that setting the derivative of the rhs with respect to α to zero yields the values cos(θ_12-θ_23)=cos(|θ_12-θ_23|) for α=0 and cos(θ_12+θ_23) for α=π. Using |θ_12-θ_23|<θ_12+θ_23 and the fact that the cosine function is strictly decreasing on [0,π], one obtains the lower and upper bounds of <ref>. Moreover, since these extremal values are only reached for α=0,π, saturation of <ref> is only possible when all vectors belong to the plane spanned by |ϕ_1⟩ and |ϕ_2⟩. Consider normalized vectors m⃗_𝖺,x, n⃗^_x,y∈ℝ^12 for all 𝖺∈{-1,1}, x,y ∈{0,1}. Suppose that: * ∀ (𝗌, 𝗍)∈{-1,1}^2, asinm⃗_𝗌,0|n⃗^_0,0 +asinm⃗_𝗍,1|n⃗^_1,0 -asinm⃗_𝗌,0|n⃗^_0,1+asinm⃗_𝗍,1|n⃗^_1,1 =π * ∃λ_x ∈ (0,1), l ∈ℝ s.t. λ_x n⃗^+_x,0|n⃗^+_x,1 + (1-λ_x) n⃗^-_x,0|n⃗^-_x,1 = l (doesn't depend on x). Then all triples of vectors m⃗_𝖺,x, n⃗^_x,0, n⃗^_x,1 lie in the same plane and n⃗^_x,0|n⃗^_x,1 doesn't depend on 𝖺,x. Let's consider the angular distances γ_x,y^, θ_𝖺,x∈[0,π] defined as: cos(γ_x,y^) = m⃗_𝖺,x|n⃗^_x,y, cos(θ_𝖺,x) = n⃗^_x,0|n⃗^_x,1. Using asin = π/2 - acos, the first assumption of the lemma then becomes: ∀ (𝗌, 𝗍)∈{-1,1}^2, γ^_0,0 + γ^_1,0 - γ^_0,1 + γ^_1,1 = 0 Moreover, <ref> applied to n⃗^_x,0, n⃗^_x,1 and m⃗_,x grants: |γ_x,0^ - γ_x,1^ | ≤θ_,x≤γ_x,0^ + γ_x,1^. Let us now consider the orderings of θ_,x which are compatible with the second requirement of the lemma. Note that this implies that there exist convex combinations of cos(θ_,x) for =± that doesn't depend on x. In particular, this implies that it is impossible to have θ_,1 < θ_,0 for all ,. Without loss of generality, let's consider that θ_+,0≤θ_+,1 (the other cases are treated similarly). We obtain from <ref>: 0 ≤γ_1,0^+ + γ_1,1^+ = γ_0,1^+ - γ_0,0^+ ≤θ_+,0≤θ_+,1≤γ_1,0^+ + γ_1,1^+, which implies that θ_+,0=θ_+,1. The second hypothesis of the lemma then implies that either θ_-,0, θ_-,1≤θ_+,0=θ_+,1 or θ_-,0, θ_-,1≥θ_+,0=θ_+,1. 
Let's consider the first case, where we have θ_-,0≤θ_+,1 (again, the other one is similar). We obtain from <ref>: 0 ≤γ_1,0^+ + γ_1,1^+ = γ_0,1^- - γ_0,0^- ≤θ_-,0≤θ_+,1≤γ_1,0^+ + γ_1,1^+, and thus θ_-,0=θ_+,1. Since three of the thetas match, hypothesis 2 implies that all four are equal, and that 0 ≤γ_1,0^- + γ_1,1^- = γ_0,1^- - γ_0,0^- ≤θ_-,0= θ_-,1≤γ_1,0^- + γ_1,1^-, The saturation of the triangle inequalities in <ref> implies that m⃗_𝖺,x, n⃗_0,𝖺,x and n⃗_1,𝖺,x must be co-planar for all ,x. Furthermore, we obtained θ_,x=θ_a',x' for all ,',x,x', i.e. that n⃗_0,𝖺,x|n⃗_1,𝖺,x doesn't depend on ,x. §.§ Preliminary Quantum Considerations <cit.> In the CHSH scenario, suppose that a correlation vector verifies the following conditions: * It is extremal in 𝒬, * Up to local unitaries, there exists a unique pure, projective and non-degenerate realisation (|ϕ_θ⟩, A_x, B_y) of local dimension two giving those correlations. Then this correlation vector self-tests the realisation (|ϕ_θ⟩, A_x, B_y). The fact that the correlation vector is supposed extremal is key to the previous Lemma, which of course does not hold if it's not the case. <cit.> Any correlation verifying A_x = B_y = 0 for all x,y, asinA_0 B_0 +asinA_1 B_0-asinA_0 B_1+asinA_1 B_1 =π and such that at most one A_xB_y is equal to ± 1, self-tests the maximally entangled two qubit state |ϕ^+⟩ along with alternating measurements on the (σ_z, σ_x)-plane. If one obtains correlations verifying the conditions of the lemma with a realization of local dimension 2, self-testing ensures the existence of unitaries U_A, U_B ∈ L(ℂ^2) s.t: (U_A ⊗ U_B) |ψ⟩ = |ϕ^+⟩, U_A A_x U_A^† = cos(a_x) σ_z + sin(a_x) σ_x, U_B B_1 U_B^† = cos(b)σ_z+ sin(b) σ_x, U_B B_0 U_B^† = σ_z. where 0≤ a_0 ≤ b ≤ a_1 ≤π are fully determined by A_x B_y. Consider a nonlocal correlation of the set 𝒬, then: * A_x≠± 1 and as such the correlators [A^_x B_y] = A_xB_y+B_y/1+A_x are well-defined * There is always at least two equations in the four equations of <ref> in which the number of correlators [A^_x B_y] equal to ± 1 is at most 1. Moreover, the number of correlators equal to ± 1 in the other two equations is at most 2. The proof mainly relies on a result from Ref. <cit.> where the authors prove that if a quantum correlation has two zero of probabilities for a fixed input choice x of Alice, then it is local. In terms of correlations, it means that if two equations of the form 1+A_x+B_y+A_x B_y=0 are satisfied for the same (,x) or the same (,̱y), then the correlation is local. They also prove that a non-local point can have at most three zeros of positivities. Therefore, one can verify that if A_x=± 1, non-signaling condition imply that (̱B_y∓A_x B_y) ≥ 0 for all $̱ and thus thatB_y∓A_x B_y=0. Then both equations1∓A_x+B_y∓A_x B_y=0for=̱±1are satisfied, and the point is local. Conversely, if the point is non-local, thenA_x≠± 1. For the second point, we have that[A^_x B_y]=± 1iff1 +A_x∓B_y∓A_x B_y=0, and as such verifying[A^_x B_y]=± 1is equivalent to having a zero of probability for tuples(,x)and(=̱∓, y). Let's consider a non-local point. It can have at most three zeros of probabilities. We suppose that this is the case (other cases with fewer zeros can be treated similarly). Since a non-local point cannot have two zeros of probability for a fix tuple(,x), two zeros occur for a givenxand both values ofand one for the other input valuex'and a single output value'. Without loss of generality, let's consider that two zeros happen forx=0and both∈{± 1}and the last one forx'=1and'=+1. 
Now, we obtain that[A^+_0 B_y_+,0],[A^-_0 B_y_-,0],[A^+_1 B_y_+,1]are equal to± 1for some given inputsy_+,0,y_-,0,y_+,1. Considering the four equations of <ref>, both equations for tuple(,+)have two terms s.t.[A^_x B_y]=± 1([A^_0 B_y_,0]and[A^+_1 B_y_+,1]), and both equations for tuple(,-)only have one ([A^_0 B_y_,0]). Since no other zeros of probabilities can happen, no other terms[A^_x B_y]can be equal to±1. For any pure, projective and non-degenerate realization of local dimension two (|ψ⟩, A_x, B_y) verifying the assumptions of <ref>, there exists local unitaries U_A, U_B ∈ L(ℂ^2) such that: (U_A ⊗ U_B) |ψ⟩ = |ϕ_θ⟩, U_A A_x U_A^† = cos(a_x) σ_z + sin(a_x) σ_x, U_B B_y U_B^† = cos(b_y) σ_z + sin(b_y) σ_x, and the parameters θ, a_x, b_y are fully determined by the correlators A_x, B_y and A_x B_y. Up to local unitaries U_A, U_B ∈ L(ℂ^2), one can map any realization (|ψ⟩, A_x, B_y) to: state: |ϕ_θ⟩, θ∈ [0,π/4], measurements: A_x = cos(a_x) σ_z + sin(a_x) (cos(α_x) σ_x + sin(α_x) σ_y), B_0 = cos(b_0) σ_z + sin(b_0) (cos(β) σ_x + sin(β) σ_y), B_1 = cos(b_1) σ_z + sin(b_1) σ_x. The case where the marginals A_x = B_y = 0 vanish is due to the self-testing result <cit.> (see <ref>) so we can exclude the zero marginals case. Since the correlations are supposed non-local, we have θ≠ 0,π/4. One can apply the steering transformation introduced in <ref> to ensure that all [A^_x B_y] are correlators measured on a singlet state |ϕ^+⟩, where: A^_x = cos(a^_x) σ_z + sin(a^_x) (cos(α_x) σ_x + sin(α_x) σ_y), B_0 = cos(b_0) σ_z + sin(b_0) (cos(β) σ_x + sin(β) σ_y), B_1 = cos(b_1) σ_z + sin(b_1) σ_x. Note that since the correlations are non-local, we have a_0 a_1 [π] and the fact that the state is not maximally entangled ensures that either a^+_0≠a^-_0 or a^+_1≠a^-_1. The fact that correlations are taken non-local ensures by <ref> that one can apply the self-testing result <ref> for at least two pairs of (,), certifying the angle of measurements B_0, B_1 with respect to (A_,0,A_,0) in those cases. Note that if <ref> doesn't apply for the other pairs, it means that some correlators are equal to ± 1, i.e. [A^_xB_y] = ± 1. Since the vector A^_x|ϕ^+⟩ and B_y |ϕ^+⟩ are unit and |ϕ^+⟩ is full rank, this would mean that A^_x=B_y, fully identifying the missing measurements. Finally, since the initial state is already |ϕ^+⟩, there exists U∈ L(ℂ^2) s.t: UA^_x U^† = cos(w_,x) σ_z + sin(w_,x) σ_x, U B_0 U^† = cos(b) σ_z + sin(b) σ_x, U B_1 U^† = σ_z where 0≤ w_,0≤ b ≤ w_,1≤π for all , are determined by the correlators [A^_xB_y]. In particular, we obtain that all vectors A^_x|ϕ^+⟩, B_y|ϕ^+⟩ lie in the same plane. If one consider x such that A^+_x≠A^-_x, then A^+_x|ϕ^+⟩ and A^-_x|ϕ^+⟩ form a plane of angle α_x, with respect to σ_x - σ_y. This implies that it is always possible to perform unitaries V_A = diag(1, e^-iα_x) and V_B = V_A^† such that measurements A_x, B_y are real and the state is unchanged ((V_A⊗ V_B) |ϕ_θ⟩= |ϕ_θ⟩). We are now left with only real matrices and thus we can parametrize U as a real unitary matrix of the form: U = [ cos(γ) - ϵ_u sin(γ); sin(γ) ϵ_u cos(γ); ] where ϵ_u ∈{-1,1} and γ∈ [0,2π). Since a global sign flip doesn't change the measurement, one can assume γ∈ [-π/2,π/2]. Likewise, the sign ϵ_u can be absorbed by applying a σ_z on both parties (which preserves the state |ϕ_θ⟩). Therefore, for the rest of the proof, we assume that U is a rotation of angle γ. Let us write the initial measurements A_x = |u_+,x⟩⟨-||u_-,x⟩⟨$| and denote|v_,x⟩:= T_θ(|u_,x⟩). 
Since the vectors|v_,x⟩correspond to the eigenvectors ofA^_xof eigenvalue, they are fixed by <ref>: |v_+,x⟩ = U^† (cos(w_+,x/2) |0⟩ + sin(w_+,x/2) |1⟩):=U^†|W_+,x⟩, |v_-,x⟩ = U^† (sin(w_-,x/2) |0⟩ - cos(w_-,x/2) |1⟩):= U^†|W_-,x⟩. Note that in <ref>, we showed that|u_,x⟩= T_π/2-θ(|v_,x⟩). As such the measurementsA_xare fully determined by the knowledge ofθandγ. More precisely: |u_,x⟩∝ (sin(θ)0|U^†|W_,x|0⟩ + cos(θ)1|U^†|W_,x|1⟩). where the normalization constants are given by⟨u_,x|=⟩1. Since the original measurementsA_xare unitary, we haveu_+,x|u_-,x=0which implies that for allx: sin^2(θ)0|U^†|W_+,x0|U^†|W_-,x + cos^2(θ)1|U^†|W_+,x1|U^†|W_-,x = 0. The lhs can be simplified by introducingγ_x = γ - (w_+,x+w_-,x)/4, leading to 1/2[cos(2θ)cos(2γ_x)-sin(w_+,x+w_-,x/2)]=0. By linear combination of those equations, one obtains tan(γ) = sin(w_+,0-w_-,0/2)cos(w_+,1+w_-,1/2)-sin(w_+,1-w_-,1/2)cos(w_+,0+w_-,0/2)/sin(w_+,0-w_-,0/2)sin(w_+,1+w_-,1/2)-sin(w_+,1-w_-,1/2)sin(w_+,0+w_-,0/2), where we allow both sides to be infinite. This equation fixes a uniqueγ∈[-π/2,π/2]and <ref> fixesθto be eitherθ^⋆orπ-θ^⋆, and only one of those two solutions can belong to(0,π/4). §.§ Proof of <ref> Let us consider non-local correlations in the CHSH scenario satisfying <ref>. <ref> reduces the search for extremal points to the ones admitting a pure real realization of local dimension 2 and since every correlation is a convex combination of extremal correlations, one can write: ∀ x,y ∈{-1,0,1}, A_x B_y = ∑_i=1^N p_i A_x^(i) B_y^(i) wherep_i ≥ 0,∑_i p_i =1,A_-1=B_-1=𝕀and on each blockithe correlation vector{A_x^(i) B_y^(i)}_x,yis in𝒬_2. To prove that the initial correlations are extremal in𝒬, we need to prove that the sub-correlations{A_x^(i) B_y^(i)}_x,ydon't depend on the registeri. Such a decomposition allows considering an overall underlying realization with local Hilbert spaces encoding a qubit space and a classical register:ℋ_A(B)= ℛ^2 ⊗𝐍(where𝐍is the set of natural numbers from1toN). The overall state and measurements are: state: |ψ⟩ = ∑_i=1^N p_i |ψ^i⟩⊗|ii⟩, measurements: A_x = ∑_i=1^N A_x^(i)⊗|i⟩⟨,| B_y = ∑_i=1^N B_y^(i)⊗|i⟩⟨.| For eachione can find unitariesU_A^i,U_B^isuch that the underlying state is|ϕ_θ^i⟩, withθ^i ∈ [0,π). Considering the local unitariesU_A(B)=∑_i U_A(B)^i ⊗|i⟩⟨$|, the overall state can then be written (U_A⊗ U_B) |ψ⟩ = ∑_i p_i |ϕ_θ^i⟩⊗|ii⟩. Let us now consider the steering transformation T_θ^i introduced in <ref> on each block. We know that it allows to consider new operators A^ (i)_x ∈ L(ℝ^2) (see <ref>) such that ϕ_θ^i | A_x^(i) B_y^(i) | ϕ_θ^i + ϕ_θ^i | B_y^(i) | ϕ_θ^i = ( 1+ϕ_θ^i | A_x^(i) | ϕ_θ^i)⟨ϕ^+|A^ (i)_x B_y^(i)|ϕ^+⟩, Let us now consider that 1+A_x = 1 + ∑_i=1^N p_i ϕ_θ^i | A_x^(i) | ϕ_θ^i = ∑_i=1^N p_i ( 1+ϕ_θ^i | A_x^(i) | ϕ_θ^i) . Since the overall correlation is non-local, <ref> ensures that 1+A_x≠ 0. One can therefore define α_,x^i = p_i ( 1+ϕ_θ^i | A_x^(i) | ϕ_θ^i) /1+A_x which verifies α_,x^i ≥ 0 and ∑_i α_,x^i = 1. Now using <ref> we have: A_x B_y + B_y/1+A_x = ∑_i=1^N α_,x^i ⟨ϕ^+|A^ (i)_x B_y^(i)|ϕ^+⟩. We now define a four new overall states and measurements operators as |Φ^+_,x⟩ =∑_i√(α_,x^i)|ϕ^+⟩⊗|ii⟩, A^_x = ∑_iA^_x^i⊗|i⟩⟨,| B_y = ∑_i B_y^(i)⊗|i⟩⟨.| Denoting expectations values on |Φ^+_,x⟩ as ⟨·⟩_,x, we obtain: ⟨A^_x ⟩_,x = 0, ⟨ B_y ⟩_,x = 0, A_x B_y + B_y/1+A_x = A^_x B_y_,x. Note that the marginals are always 0 because for every classical index i the measured state |ϕ^+⟩ is maximally entangled. 
Since the measurement operators in <ref> are real and unitary, and the state |Φ_,x^+⟩ is real and normalized, all vectors A^_x |Φ_,x^+⟩, B_y |Φ_,x^+⟩ are real and normalized. Notice that we have: ⟨ B_0 B_1 ⟩_,x=∑_iα_,x^i ⟨ϕ^+|B_0^i B_1^i|ϕ^+⟩, (1+A_x)α_+,x^i + (1-A_x)α_-,x^i = 2p_i Therefore, with λ_x = (1+A_x)/2 ∈ (0,1) and utilizing the assumption of <ref>, we obtain: * ∀ (,) ∈{-1,1}^2, asin A_0^ B_0_,0 +asin A_1^ B_0_,1 -asin A_0^ B_1_,0+asin A_1^ B_1_,1 =π * λ_x ⟨ B_0B_1 ⟩_+,x + (1-λ_x)⟨ B_0B_1 ⟩_-,x = ∑_i p_i ⟨ϕ^+|B_0^i B_1^i|ϕ^+⟩ doesn't depend on x. We can thus use <ref> with m⃗_,x= A_x^|Φ_,x^+⟩ and n⃗^_x,y= B_y|Φ_,x^+⟩ to obtain that the following triples of vectors must be coplanar for all ,x: B_0|Φ_,x^+⟩, B_1|Φ_,x^+⟩, A_x^|Φ_,x^+⟩. and that ⟨ B_0B_1 ⟩_,x:=C doesn't depend on ,x. This allows us to write for all ,x: A^_x|Φ^+_,x⟩ = r_,x^0 B_0|Φ^+_,x⟩ + r_,x^1 B_1|Φ^+_,x⟩ where r_,x^y ∈ℝ. By projecting on each classical register |i⟩⟨$|, we get (A^_x)^i|ϕ^+⟩= r_,x^0 B_0^i|ϕ^+⟩ + r_,x^1 B_1^i|ϕ^+⟩ Now we can sum over all classical registers but with weights√(α_',x'^i)to obtain A^_x|Φ^+_',x'⟩= r_,x^0 B_0|Φ^+_',x'⟩ + r_,x^1 B_1|Φ^+_',x'⟩ And finally we can compute ⟨A^_x B_y ⟩_',x' = r_,x^y ⟨ B_y^2 ⟩_',x' + r_,x^0 r_,x^1 ⟨ B_0 B_1⟩_',x' = r_,x^y + r_,x^0 r_,x^1 C Since the right-hand term doesn't depend on',x'neither can the left-hand one. Thus, we have proven that⟨ A_x ^ B_y ⟩_',x'doesn't depend on the indexes',x'. Notably, we can rewrite the assumption of <ref> for a single arbitrary state|Φ^+⟩:= |Φ^+_+,0⟩: ∀ (,)∈{-1,1}^2, asin A_0^ B_0_Φ^+ +asin A_1^ B_0_Φ^+ -asin A_0^ B_1_Φ^++asin A_1^ B_1_Φ^+ =π For all(,), we are now dealing with a realization on the state|Φ^+⟩, with two measurements A_0^, A_1^andB_0,B_1for each party. The corresponding quantum correlations can be decomposed as a convex mixture ofNsub-correlations due to the classical register encoded in|Φ^+⟩. Since it has zero marginals and verifies <ref>, the work of <cit.> ensures that the overall correlation is extremal for all,and thus all sub-correlations are equals. Therefore: {⟨ϕ^+|A^ (i)_x B_y^(i)|ϕ^+⟩}_,x,y doesn't depend on the register i. Moreover, it means that every initial sub-correlations, obtained by measuring state|ϕ_θ^i⟩with real measurements(A_x^(i), B_y^(i)), satisfy the condition of <ref>. Since <ref> ensures that such a realization is determined by the values of⟨ϕ^+|A^ (i)_x B_y^(i)|ϕ^+⟩, we obtain thatθ^iand(A_x^(i), B_y^(i))don't depend oni. This implies that the sub-correlations{A_x^(i) B_y^(i)}_x,ydon't depend oniand thus the extremality of the initial correlations. The self-testing part is then obtained combining extremality and <ref> together with <ref>. § PROOF OF <REF> The correlation distribution corresponding to a realization <ref> in𝒬_2is: P⃗_θ,a_x,b_y = [ cos(2θ)cos(b_0) cos(2θ)cos(b_1); cos(2θ)cos(a_0) 2c2*cos(a_x)cos(b_y)+sin(2θ)sin(a_x)sin(b_y); 1-1cos(2θ)cos(a_1) 2c ], θ,a_x,b_y ∈ℝ While <ref> guarantees that every extremal point in𝒬is of this form, it is not granted that every such point is extremal in𝒬. In the following, we express a condition on the parametersθ, a_x, b_yfor such a point to be non-exposed, i.e. not to be the unique maximizer of any Bell expression. In all generality, a Bell expression in the CHSH scenario can be denoted by a real vectorβ∈ℝ^8. 
The value of the Bell expression on Pis given by the scalar productβ· Pand thus Pis non-exposed in𝒬iff ∀β∈ℝ^8, ∃P'∈𝒬, s.t β·P'≥β·P, P'≠P If we denote by𝒞_Pthe set of all Bell expressions for which Preaches their maximal quantum value, i.e.𝒞_P = {β∈ℝ^8 | β·P = max_P'∈𝒬β·P'}. Then the condition for Pto be non-exposed reduces to ∀β∈𝒞_P, ∃P'∈𝒬, s.t β·P' = β·P, P'≠P. The proof of Theorem 3 is divided in two main parts, which are developped in the sections below. In the first one, we find necessary conditions on Bell expressions to be in the subspace𝒞_P. In the second one, we identify a point verifying <ref> for every Bell expression in this subset. In what follows, we fix a choice of parametersθ, a_x, b_yin the region <ref> and denote by Pthe distribution P_θ, a_x, b_y. Furthermore, since all realizations withθ=0only give local correlations, we assumeθ >0. §.§ Identification of Bell expressions maximized by P⃗ For a given choice of measurements we introduce the measurement vector M = { A_0, A_1, B_0, B_1, A_0 B_0, A_1 B_0, A_0 B_1, A_1 B_1}∈ L(ℋ_A⊗ℋ_B)^8. For every Bell expressionβwith{β}_-1,-1=0, one can construct the Bell operator associated with this measurement choice asS = β· M, which is an hermitian operator. For any state|ψ⟩, the correlation distribution is given byP⃗'⃗ = ⟨ψ| M |ψ⟩and the value of the Bell expression is β·P⃗'⃗ = ⟨ψ| S |ψ⟩. As such, if the pointP⃗is to give the maximal quantum value ofβ, then the state|ϕ_θ⟩must be an eigenstate ofS(of maximal eigenvalue). This implies that for all vector|ψ^⟩orthogonal to|ϕ_θ⟩,|ψ^⟩is also orthogonal toS |ϕ_θ⟩, and as such 0 = ⟨ψ^| S |ϕ_θ⟩ = β·⟨ψ^| M |ϕ_θ⟩ = β· T_|ψ^⟩ where we denoted T_|ψ^⟩ = ⟨ψ^| M |ϕ_θ⟩. Note that this condition was recently used for solving optimization problems and prove that specific quantum points are non-exposed, see <cit.>. Another condition to get a maximal violation is that for any small variation of the parametersθ, a_x, b_y, the value of the Bell expression should be non-increasing. At first order, this gives 0 = β⃗·∂P⃗/∂θ= β⃗·∂P⃗/∂ a_x = β⃗·∂P⃗/∂ b_y We now consider a (possible) subset of the necessary conditions <ref> and <ref> by considering only the three orthogonal states|ψ_θ⟩=sin(θ) |00⟩ - cos(θ)|11⟩,|01⟩,|10⟩, and variations along measurement anglesa_0andb_0. This allows us to say that if Pmaximizesβ, then β∈ V^, where V = Vect⟨ T_|ψ_θ⟩, T_|01⟩, T_|10⟩, ∂P⃗/∂ a_0, ∂P⃗/∂ b_0⟩ This allows us to conclude that𝒞_P ⊂ V^and to reduce the range of Bell expressions to this linear subspace for the rest of the argument. §.§ Non-exposed sufficient condition Let's suppose there exists a vectorv⃗∈ Vand a local vectorL⃗∈𝒬such thatL⃗ = P⃗ + v⃗, then ∀β∈ V^, β⃗·L⃗ = β⃗·P⃗ Therefore if such a decomposition exists, andL⃗≠P⃗, condition <ref> is satisfied and the pointP⃗is non-exposed in𝒬. A sufficient condition to our problem is therefore to find a decomposition of the form L = P + v, L ∈𝒬, v ∈ V-{0}. We further restrict the choice of Lto be included in one of the following four subspaces: ∀, ∈{-1,+1}, ℒ_ = {[ α_0 α_1; α_0 α_1; α_0 α_1; ], α_0, α_1 ∈ℝ} These spaces have nice properties. First, one can verify that the positivity constraints imply thatα_0, α_1 ∈ [-1,1], and that under these conditions, the value of all variants of the CHSH expression is upper bounded by 2. Therefore, the local, quantum and non-signaling sets coincide in these subspaces, and we can express the condition for a point L ∈ℒ_to be local (or quantum) simply asα_0, α_1 ∈ [-1,1], or equivalently as1-α_x^2 ≥ 0for allx. 
Second, due to the fact thatθ >0, we have|⟨ A_x⟩|<1for allxand thus P ∉ℒ_for all, . This means that finding a decomposition of the form <ref> forL⃗∈ℒ_always ensures thatv⃗≠ 0. As any point inVcan be written as the linear combination v_x,y,z,a,b = x T_|ψ_θ⟩ + y T_|01⟩ + z T_|10⟩ + a ∂P⃗/∂ a_0 + b ∂P⃗/∂ b_0 wherex,y,z,a,bare five real parameters, finding a decomposition L = P + v⃗whereL⃗∈ℒ_can be translated to the following linear system { { P+ v_x,y,z,a,b}_0,-1 = { P+ v_x,y,z,a,b}_1,-1 = { P+ v_x,y,z,a,b}_-1,0 = { P+ v_x,y,z,a,b}_0,0 = { P+ v_x,y,z,a,b}_1,0 { P+ v_x,y,z,a,b}_-1,1 = { P+ v_x,y,z,a,b)_0,1 = { P+ v_x,y,z,a,b}_1,1 ., where we recall that P_x,y=⟨ A_x B_y ⟩forx,y∈{-1,0,1}with the convention thatA_-1=𝕀_ℋ_A,B_-1=𝕀_ℋ_B. We first focus on the case whereθ≤π/4. In this case, the linear system admits a solution for any parameters verifying <ref> and(,)≠ (-1,1), given by: x = -cos(a_0+1-/2π+(a_1+1-/2π)/2) sin(2θ) /D_, y = sin(a_0+1-/2π/2) cos(a_1+1-/2π/2) sin(θ) /D_, z = cos(a_0+1-/2π/2) sin(a_1+1-/2π/2) cos(θ) /D_, a = -sin(a_0+1-/2π-(a_1+1-/2π)/2)/D_, b = 0, where: D_ = cos(a_0+1-/2π-(a_1+1-/2π)/2) + cos(a_0+1-/2π+(a_1+1-/2π)/2) cos(2θ). This solution gives a decomposition of the form L = P + v⃗whereL∈ℒ_is given by the two parameters: α_y^ = ((cos(a_0) + cos(2θ)) sin(b_y) - cos(b_y) sin(a_0) sin(2θ)) ((cos(a_1) + cos(2θ)) sin(b_y) - cos(b_y) sin(a_1) sin(2θ))/D_. Note that for the parameters we chose here (0 < θ≤π/4,0≤ a_0≤ a_1<π) the denominatorD_is never0. To verify <ref>, we now need to look at when the point Lbelongs to the quantum set. As we said, this is equivalent to ask that their exist some(,)such that-1≤α_y^≤ 1fory∈{0,1}, or equivalently that ∀ y ∈{0,1}, 1-(α_y^)^2 ≥ 0. The validity of this inequality is unchanged by multiplication with a positive scalar and as such one can look at the sign of Δ_ = D_^2/(1+⟨ A_0 ⟩)(1+⟨ A_1⟩)(1-(α_y^)^2). This quantity can be written simply as Δ_ = sin(a^_0-b_y)sin(a^_1-b_y). by considering the following change of variables: a_0 ⟶a^_0, a_1 ⟶a^_1, wherea^_xare defined in <ref>. Thus, the positivity conditions for Lto be local now become equivalent to ∀ y∈{0,1}, Δ_ = sin(a^_0-b_y)sin(a^_1-b_y)≥ 0. We now think by contradiction and look for conditions that ensure that there is no choice of(,)such that the above is verified: * The choice of parameters that we made allows us to state that b_y ≤ a_0 ≤a^+_0 for all y. Therefore, sin(a^+_0 - b_y) ≤ 0 and the above condition for (,)=(1,1) is verified whenever a^+_1≤ b_0. We thus impose a^+_1≥ b_0. * Now, the condition for (,)=(1,-1) is verified whenever sin(a^-_1 - b_y) ≥ 0 for both y. As θ≤π/4, a^-_1≥a^+_1≥ 0 and thus it is always true for y=0. Therefore, we need to have a^-_1≤ b_1. * Last, the condition for (,)=(-1,-1) and y=1 is now verified as a^-_0≤a^-_1≤ b_1. Then the inequality for y=0 does not hold only when a^-_0≤ b_0. Finally, we can conclude that none of the conditions <ref> is verified only when the parameters verify: 0 ≤a^_0≤ b_0 ≤a^_1≤ b_1 < π for all,∈{-1,1}, i.e. when then modified angles on Alice's sidea^_xand the angles on Bob's sideb_yalternate for all choices of(,). Conversely, one can conclude that when this full alternating property is not verified, there exists a solution to the problem <ref> and as such the point Pis non-exposed in𝒬. The casesθ∈ [π/4,π/2),θ∈ (π/2,3π/4]andθ∈ [3π/4,π)were left aside, but the proof goes exactly as the previous case, but considering the solutions of all three linear systems for(,) ≠ (1,-1),(,) ≠ (1,-1)and(,) ≠ (-1,1)respectively.
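As a practical complement to the characterization of Ext(𝒬) discussed in the main text, one can compute the steered angles a_x^𝗌 = 2 atan(tan(a_x/2) tan(θ)^𝗌) for a candidate realization and test the full-alternation condition directly. The following sketch is our own illustration (assuming Python with numpy; the function names and the example parameters are ours), not part of the proof.

```python
import numpy as np

def steered_angle(a, theta, eps):
    """Modified angle 2*atan(tan(a/2) * tan(theta)**eps), reduced modulo pi."""
    return (2.0 * np.arctan(np.tan(a / 2.0) * np.tan(theta) ** eps)) % np.pi

def fully_alternating(theta, a, b, tol=1e-12):
    """Check 0 <= [a_0^s]_pi <= b_0 <= [a_1^t]_pi <= b_1 < pi for all s, t."""
    if not (0.0 <= b[0] <= b[1] < np.pi):
        return False
    for s in (+1, -1):
        for t in (+1, -1):
            a0s = steered_angle(a[0], theta, s)
            a1t = steered_angle(a[1], theta, t)
            if not (a0s <= b[0] + tol and b[0] <= a1t + tol and a1t <= b[1] + tol):
                return False
    return True

# Alternating settings: expected to give an extremal nonlocal point.
print(fully_alternating(theta=np.pi / 6, a=[0.2, 1.7], b=[0.9, 2.6]))   # True
# Non-alternating settings: expected to give a non-extremal point.
print(fully_alternating(theta=np.pi / 6, a=[0.2, 0.5], b=[0.9, 2.6]))   # False
```

For parameters in the range considered above, a True outcome corresponds to the extremal (self-testing) case and a False outcome to a non-extremal point, up to the relabelings discussed in the theorem.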
http://arxiv.org/abs/2406.08115v1
20240612115144
Resource Allocation and Workload Scheduling for Large-Scale Distributed Deep Learning: A Survey
[ "Feng Liang", "Zhen Zhang", "Haifeng Lu", "Chengming Li", "Victor C. M. Leung", "Yanyi Guo", "Xiping Hu" ]
cs.DC
[ "cs.DC", "cs.AI" ]
fliang@smbu.edu.cn 0000-0002-8542-9871 Artificial Intelligence Research Institute, Shenzhen MSU-BIT University China 518107 Guangdong-Hong Kong-Macao Joint Laboratory for Emotional Intelligence and Pervasive Computing, Shenzhen MSU-BIT University China 518107 zhangzhen19@lzu.edu.cn 0009-0007-9955-0916 luhf18@lzu.edu.cn 0000-0003-0155-8447 Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University China 730000 Guangdong-Hong Kong-Macao Joint Laboratory for Emotional Intelligence and Pervasive Computing, Shenzhen MSU-BIT University China 518107 licm@smbu.edu.cn 0000-0002-8542-9871 Artificial Intelligence Research Institute, Shenzhen MSU-BIT University China 518107 Guangdong-Hong Kong-Macao Joint Laboratory for Emotional Intelligence and Pervasive Computing, Shenzhen MSU-BIT University China 518107 vleung@ieee.org 0000-0003-3529-2640 Artificial Intelligence Research Institute, Shenzhen MSU-BIT University China 518107 Department of Electrical and Computer Engineering, The University of British Columbia Canada V6T 1Z4 Corresponding authors. guoyy@smbu.edu.cn 0009-0000-7682-6667 Frontier Cross Disciplinary Research Institute, Shenzhen MSU-BIT University China School of Mechanical and Electrical Engineering, Beijing Institute of Technology China 10081 [1] huxp@smbu.edu.cn 0000-0002-4952-699X Artificial Intelligence Research Institute, Shenzhen MSU-BIT University China 518107 Guangdong-Hong Kong-Macao Joint Laboratory for Emotional Intelligence and Pervasive Computing, Shenzhen MSU-BIT University China 518107 School of Medical Technology, Beijing Institute of Technology China 10081 § ABSTRACT With rapidly increasing distributed deep learning workloads in large-scale data centers, efficient distributed deep learning framework strategies for resource allocation and workload scheduling have become the key to high-performance deep learning. The large-scale environment with large volumes of datasets, models, and computational and communication resources raises various unique challenges for resource allocation and workload scheduling in distributed deep learning, such as scheduling complexity, resource and workload heterogeneity, and fault tolerance. To uncover these challenges and corresponding solutions, this survey reviews the literature, mainly from 2019 to 2024, on efficient resource allocation and workload scheduling strategies for large-scale distributed DL. We explore these strategies by focusing on various resource types, scheduling granularity levels, and performance goals during distributed training and inference processes. We highlight critical challenges for each topic and discuss key insights of existing technologies. To illustrate practical large-scale resource allocation and workload scheduling in real distributed deep learning scenarios, we use a case study of training large language models. This survey aims to encourage computer science, artificial intelligence, and communications researchers to understand recent advances and explore future research directions for efficient framework strategies for large-scale distributed deep learning. 
CCS Concepts: Computer systems organization → Distributed architectures; Computing methodologies → Machine learning; Computing methodologies → Artificial intelligence; Networks → Cloud computing. Resource Allocation and Workload Scheduling for Large-Scale Distributed Deep Learning: A Survey Xiping Hu June 17, 2024 =============================================================================================== § INTRODUCTION With the rapid increase in the sizes of datasets and deep learning (DL) models, distributed DL <cit.> has become the state-of-the-art practice for various artificial intelligence technologies, such as federated learning <cit.> and the smart Internet of Things <cit.>. In contrast to traditional single-node DL that works on a single computing node or even a single GPU, distributed DL can leverage multiple GPUs and computing nodes to handle massive training and inference workloads and improve learning throughput. Notably, in the era of extremely large models with tens of billions of parameters, distributed DL enables efficient large-model training <cit.> across hundreds of computing nodes with thousands of GPUs in the data center. However, distributed DL faces numerous critical challenges related to efficient framework strategies for resource allocation and workload scheduling in large-scale environments. Firstly, with a large number of computational and communication devices in the data center for distributed DL, managing and allocating resources efficiently so that distributed DL workloads can fully utilize them becomes challenging. This challenge is amplified in heterogeneous resource environments, where GPUs have various computational capacities and networks have various communication capacities and topologies. Secondly, distributed DL workloads exhibit more complicated characteristics than those of single-node DL. On the one hand, various parallelism modes of distributed DL workloads give rise to new communication patterns that involve significant communication overhead for data transfer and model synchronization. On the other hand, the combination of many computational and communication tasks in distributed DL workloads complicates the execution dependency structure, which leaves significant room for optimization. Thirdly, the exponential increase in large-model sizes raises concerns about the cost of computational and communication resources and the efficiency of distributed training at a large scale. Tackling these challenges is urgent and requires researchers in the fields of computer science, artificial intelligence, and communications to understand critical problems in this domain systematically. 
Several existing surveys <cit.> have touched on some topics of efficient resource allocation and workload scheduling strategies for distributed DL. For example, Ye et al. have reviewed job-level scheduling of distributed training and inference workloads on GPUs. However, these surveys lack a systematic exploration of distributed DL framework strategies for scheduling computational and communication resources and workloads at various granularity levels in large-scale environments. Researchers in the fields of computer science, artificial intelligence, and communications need a comprehensive understanding of representative and critical challenges for framework strategies in large-scale distributed DL environments. To fill the gap in existing surveys on distributed DL framework strategies, this survey systematically investigates critical challenges and efficient distributed DL strategies for resource allocation and workload scheduling. We review the literature mainly published between 2019 and 2024. The discussion covers various resource types, scheduling granularity levels, and performance goals. For resource allocation strategies, we discuss GPU-sharing technologies applying different approaches and network bandwidth-sharing technologies working at different granularity levels. For workload scheduling strategies, we categorize the technologies based on various performance goals and scheduling granularities. Both sets of strategies are organized primarily by application stage: distributed training and inference. Focusing on efficient strategies for large-scale distributed DL, we highlight key challenges for each topic and provide insights into crucial research outputs. To illustrate how to apply these efficient framework strategies practically in real life, we conduct a case study on large-model distributed training, a rapidly growing and likely long-lasting research topic and one of the application scenarios that most requires efficient distributed DL framework strategies. We also provide outlooks on future research directions. The major contribution of this survey is summarized as follows. * We thoroughly and comprehensively survey up-to-date resource allocation and workload scheduling framework strategies for large-scale distributed DL. * We highlight critical challenges for each topic of these framework strategies. * We use a case study on large-model training to illustrate how to apply efficient framework strategies in practice. §.§ Related Surveys Existing surveys on distributed DL lack systematic coverage of strategies for resource allocation of various resource types and workload scheduling at various granularity levels. Table <ref> compares our survey with other related surveys on the covered topics in the domain of distributed DL framework strategies, including resource allocation based on the GPU and network bandwidth and workload scheduling based on the job, pipeline, and network flow. Some surveys focus on scheduling distributed jobs in GPU data centers. Both Mayer and Jacobsen <cit.> and Ye et al. <cit.> have conducted surveys on job-level GPU allocation and workload scheduling for distributed training and inference in the data center environment. However, these surveys consider only job-level strategies that target the overall performance of the entire data center, not finer-grained strategies that target individual job performance. 
They also only investigate technologies related to the single resource type, the GPU, but not the network bandwidth, which is a significant performance factor for distributed DL with communication as the bottleneck. In contrast to the GPU-centric surveys, some works focus on communications technologies and network bandwidth-allocation strategies for distributed DL. Both Shi et al. <cit.> and Cao et al. <cit.> have reviewed the literature on bandwidth allocation strategies for federating learning over wireless networks. Liang et al. <cit.> have not only investigated bandwidth-allocation strategies on general networks but also studied network-flow-scheduling strategies on different network layers. However, covering communications-only technologies does not reveal the whole picture of efficient scheduling in distributed DL, which requires the joint optimization of computation and communication. For finer-grained workload scheduling in distributed DL, some works <cit.> explore pipeline-level scheduling strategies for overlapping computation and communication workloads to improve throughput. However, they do not highlight primary challenges related to this topic and lack the investigation into job-level resource allocation and workload scheduling strategies, which can orchestrate with these pipeline-level strategies as a synthesis framework solution for distributed DL in data centers. Our survey fills the gap in existing surveys. We instigate resource allocation strategies for both computational and communication resources, primarily the GPU and network bandwidth, to match the resource requirements of distributed DL workloads. We also explore workload scheduling strategies at the job, pipeline, and network flow levels to improve both overall data center throughput and individual job efficiency. The systematic literature study involving various resource types and scheduling granularity levels makes this article a comprehensive survey of up-to-date technologies in the distributed DL framework domain. §.§ Survey Organization Fig. <ref> outlines the organization of the remaining sections in this survey. Section <ref> provides fundamental knowledge about distributed DL and the resource-management and workload scheduling framework. Sections <ref> and <ref> introduce various framework strategies for resource allocation and workload scheduling, respectively. These framework strategies are categorized primarily based on their application scenarios, including distributed training and inference, and secondarily based on resource types, approaches, or performance goals. We discuss the insights at the end of each section. We use a case study of distributed large-model training to show how to apply these framework strategies practically in real-life data centers in Section <ref>. In Section <ref>, we conclude this survey and present outlooks of future research directions. § FUNDAMENTALS OF DL AND DISTRIBUTED DL In this section, we introduce the fundamental knowledge of DL, distributed DL, and the resource allocation and workload scheduling framework for distributed DL. Table <ref> lists frequently used abbreviations used in this survey. §.§ Deep Learning DL is a subfield of machine learning that utilizes deep artificial neural networks, also known as deep neural networks (DNN), to extract complex patterns from training data in a hierarchical manner. The trained DNN is capable to recognize/predict patterns in unseen data. 
DL has been used in various fields, including NLP <cit.>, computer vision <cit.>, and biomedical engineering <cit.>.

§.§.§ DL models

A DNN consists of multiple hidden layers. Each layer is composed of neurons, which are typically activated by non-linear functions. Based on the connections between neurons within and between layers, there can be various types of DNN models. In this survey, when referring to models or DL models, we mean DNNs unless the context otherwise specifies. Fig. <ref> illustrates three basic DNN models: the fully connected DNN, convolutional neural network (CNN), and recurrent neural network (RNN).

∙ Fully connected DNN: The fully connected DNN, also known as the feedforward neural network, constitutes a dense network with an input layer, a number of hidden layers, and an output layer, as depicted in Fig. <ref>. Neurons in a preceding layer connect to all neurons in the subsequent layer, and each connection has a learnable weight parameter indicating the strength of the connection. This architecture enables the fully connected DNN to capture complex relationships within data, finding extensive application in tasks such as classification <cit.>, regression <cit.>, and feature representation embedding <cit.>.

∙ CNN: The CNN is a prevalent model designed for feature extraction and classification, primarily tailored for image and video data. As depicted in Fig. <ref>, a CNN comprises a stack of convolutional layers and pooling layers for context feature extraction. Unlike the fully connected layer, which assigns a weight parameter to each neuron connection, the convolutional layer substantially reduces the number of weight parameters by utilizing a number of kernels, each containing shared weights for feature extraction. The convolutional layer's feature extraction is empowered by the convolution operation, wherein kernels traverse the receptive fields of an image, extracting new features through weighted summations followed by a non-linear activation function. The pooling layer downsamples the data in the convolutional layer to reduce feature dimensions and alleviate overfitting issues. CNN has found widespread applications in various computer vision tasks, including image classification <cit.>, semantic segmentation <cit.>, and object detection <cit.>.

∙ RNN: The RNN, a DL model that deals with sequential data such as time-series data, natural language, and speech audio, is illustrated in Fig. <ref>. The general architecture of the RNN includes hidden units that capture and propagate temporal context from the input sequence to subsequent hidden units. The temporal context is updated continuously from the current input and the previous context and is utilized to make predictions. To address the challenge of capturing long-range temporal dependencies, two common variants of RNNs, known as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), have been developed, providing an effective trade-off between modeling such dependencies and reducing computation complexity. Common applications of RNN include tasks such as time series forecasting <cit.>, NLP <cit.>, and automated planning <cit.>.

§.§.§ Training and inference

The training of a DL model is the process of optimizing its parameters to minimize the prediction error on a training dataset, as determined by a specified loss function, or objective function. Loss functions can be either convex or non-convex, leading to convex or non-convex optimization problems. 
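To make this optimization view concrete, the learning problem and the mini-batch SGD update used by the optimizers discussed below can be written in the following standard form (the symbols are generic and not tied to any particular system):

\[
\min_{\theta}\; \mathcal{L}(\theta) \;=\; \frac{1}{N}\sum_{i=1}^{N} \ell\bigl(f(x_i;\theta),\, y_i\bigr),
\qquad
\theta_{t+1} \;=\; \theta_t \;-\; \eta\,\frac{1}{|B_t|}\sum_{i\in B_t} \nabla_{\theta}\,\ell\bigl(f(x_i;\theta_t),\, y_i\bigr),
\]

where $f(x;\theta)$ denotes the DNN with parameters $\theta$, $\ell$ the loss function, $N$ the number of training samples, $\eta$ the learning rate, and $B_t$ the mini-batch sampled at iteration $t$.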
Training can be decomposed into two key processes: feedforward and backpropagation. In the feedforward process, training data are passed into the model's input layer, and the output prediction is computed by forwarding the data through the network using the current model parameters. In the backpropagation process, the prediction error and gradients are calculated with respect to the loss function, and trainable parameters are updated iteratively in a backward manner, optimizing the model for the minimum loss. Common optimizers for backpropagation updates include mini-batch Stochastic Gradient Descent (SGD) <cit.>, SGD with momentum <cit.>, Adagrad <cit.>, and Adam <cit.>. The training process usually operates on batches of training data iteratively over multiple epochs until the model converges. A model is said to have converged when the training error settles to within a predefined error range and additional training will not further decrease the error. After completing the training process, the weight parameters in the DL model are learned and fixed. The training process is typically followed by a validation process that evaluates the performance of the trained model, providing information for fine-tuning hyperparameters and retraining the model for better performance. The inference process passes unseen data forward through the trained DL model to make predictions. Depending on the specific requirements of an application, the resulting prediction can be extracted either from the output layer or from the predicted latent representation in an intermediate hidden layer. For example, in the context of network traffic analysis, an end-to-end DNN model may be trained to classify traffic types directly <cit.>. Alternatively, an encoder-decoder model trained on traffic data can utilize the latent representation generated by the encoder for subsequent tasks such as attack detection <cit.>. Computing tasks related to a specific portion of the DL model for specific epochs during the training or inference process are generally referred to as DL tasks in this survey when it is not necessary to distinguish training tasks from inference tasks in the context.

§.§ Distributed DL Parallelism Modes

Distributed DL partitions data and models across multiple processing units (typically GPUs) for parallel execution to leverage the computational capacity of many computing nodes in a cluster. As illustrated in Fig. <ref>, distributed DL has three basic parallelism modes: data parallelism, model parallelism, and pipeline parallelism.

§.§.§ Data parallelism

As illustrated in Fig. <ref>, data-parallel training partitions the entire training dataset into several splits and distributes them across many GPUs for parallel training <cit.>. Each GPU holds a replica of the whole model with an identical structure and trains it on a specific dataset partition. Throughout the distributed training process, these local models share their knowledge to update a global model using a specific model synchronization mechanism, usually via a parameter server (PS).

§.§.§ Model parallelism

As illustrated in Fig. <ref>, model-parallel training divides an entire DL model into submodels and distributes them onto many GPUs within a cluster when the model exceeds the capacity of a single GPU or computing node <cit.>. This parallelism mode concerns the model division strategy, which focuses on workload balance, and the submodel placement strategy, which focuses on the communication overhead between submodels. 
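As a minimal illustration of the data-parallel synchronization pattern described above, the following sketch simulates a worker group and a parameter server in plain Python. The linear model, squared loss, and simple gradient averaging are illustrative placeholders and do not reproduce the mechanism of any cited system.

```python
import numpy as np

# Minimal sketch of data-parallel training with parameter-server (PS) style
# synchronization: each worker holds a model replica and a data shard; the PS
# averages the workers' gradients and updates the global parameters.

def local_gradient(params, x_shard, y_shard):
    """Gradient of a squared loss for a linear model on one worker's shard."""
    errors = x_shard @ params - y_shard
    return x_shard.T @ errors / len(x_shard)

def parameter_server_step(params, worker_grads, lr=0.1):
    """PS aggregates (averages) the workers' gradients and updates the model."""
    return params - lr * np.mean(worker_grads, axis=0)

rng = np.random.default_rng(0)
num_workers, dim, num_samples = 4, 8, 1024
params = np.zeros(dim)                                   # global model on the PS
x = rng.normal(size=(num_samples, dim))                  # full training set
y = x @ rng.normal(size=dim)                             # synthetic targets
shards = np.array_split(np.arange(num_samples), num_workers)   # data partitioning

for step in range(100):
    # Each worker computes a gradient on its own shard (in parallel in practice).
    grads = [local_gradient(params, x[idx], y[idx]) for idx in shards]
    # Model synchronization via the PS.
    params = parameter_server_step(params, grads)
```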
§.§.§ Pipeline parallelism As illustrated in Fig. <ref>, pipeline-parallel training enhances DL parallelism by ordering different stages of distributed training in a pipeline and preventing computational and communication resources from idling <cit.>. Pipeline-parallel training can be considered a special case of model-parallel training by decomposing the training of submodels layer by layer into subtasks and overlapping their computation of different stages across different GPUs. Computational and communication tasks can also overlap in the pipeline. This mode is typically applicable in the domains of LLM training <cit.>, edge computing <cit.>, and the Internet of Things <cit.>, where devices have heterogeneous computational and communication capabilities to handle various distributed DL subtasks. In practice, pipeline parallelism can work with other parallelism modes to tackle complex distributed DL workloads with large model structures <cit.>. §.§ Resource Allocation and Workload Scheduling for Distributed DL Resource allocation and workload scheduling strategies for high-performance distributed DL are typically integrated in cluster-level and distributed-DL-level frameworks. Fig. <ref> illustrates the procedure of resource allocation and workload scheduling mechanisms for large-scale distributed DL within a cluster. This procedure comprises six major steps. (1) For a queue of distributed DL jobs, the task scheduler conducts job profiling based on various workload characters, such as resource-utilization status and working progress. (2) The resource manager allocates GPU and network resources distributed within the cluster for jobs based on their characteristic profiling. The resources can be represented in a physical or virtual manner, and virtual resources can be encapsulated in virtual machines or containers. (3) The job-level scheduler determines job-execution priorities based on resource constraints and job performance estimation. (4) The pipeline-level scheduler divides the job into subtasks and locates them onto available resources for the pipeline execution of the subtasks, aiming to increase task parallelism and overlap computational and communication workloads. (5) The network-flow-level scheduler optimizes the network flows or coflows of numerous subtasks by considering the relation and dependency of network flows. (6) Scheduled jobs, pipelines, and network flows run efficiently on the allocated GPU and network resources within the cluster. § RESOURCE ALLOCATION In this section, we introduce resource allocation strategies for both distributed training and inference, which have different workload characteristics and performance requirements. Table <ref> summarizes these strategies with related challenges Cx, highlighted in box texts in the coming sections. Resource allocation strategies for distributed training are classified into two categories: GPU sharing and network bandwidth sharing. GPU sharing strategies are further classified into five approaches based on techniques they applied, including workload profiling, context switching, performance estimating, elastic scaling, and special considerations for hyperparameter tuning workloads. Network bandwidth sharing strategies are further classified based on the targeting granularity of resource, including the job, gradient block task, and coflow. Resource allocation strategies for distributed inference are classified into three categories based on sharing patterns of GPUs, including spatial, temporal, and hybrid sharing. 
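Before turning to concrete strategies, the six-step procedure outlined in the preceding section can be summarized with a schematic skeleton. All classes and policies below (e.g., shortest-duration-first ordering and fixed four-stage pipelines) are simplified placeholders for illustration and do not represent the design of any cited framework.

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: int
    gpu_demand: int
    est_duration: float                      # estimated duration from profiling
    profile: dict = field(default_factory=dict)

def profile_jobs(jobs):                      # (1) workload profiling
    for job in jobs:
        job.profile = {"gpu_util": 0.5, "comm_ratio": 0.3}   # placeholder stats
    return jobs

def allocate_resources(jobs, free_gpus):     # (2) GPU / bandwidth allocation
    plan = {}
    for job in sorted(jobs, key=lambda j: j.gpu_demand):
        if job.gpu_demand <= free_gpus:
            plan[job.job_id] = job.gpu_demand
            free_gpus -= job.gpu_demand
    return plan

def schedule_jobs(jobs, plan):               # (3) job-level priority ordering
    placed = [j for j in jobs if j.job_id in plan]
    return sorted(placed, key=lambda j: j.est_duration)

def schedule_pipeline(job):                  # (4) split a job into pipeline subtasks
    return [f"job{job.job_id}-stage{s}" for s in range(4)]

def schedule_flows(subtasks):                # (5) order the subtasks' network flows
    return sorted(subtasks)

jobs = profile_jobs([Job(1, 2, 3.0), Job(2, 4, 1.5), Job(3, 8, 6.0)])
plan = allocate_resources(jobs, free_gpus=8)
for job in schedule_jobs(jobs, plan):        # (6) execute on allocated resources
    print(job.job_id, plan[job.job_id], schedule_flows(schedule_pipeline(job)))
```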
§.§ Resource Allocation for Distributed Training The training process of distributed DL requires an intensive consumption of computational power and memory of GPUs and network communication bandwidth across GPUs. Therefore, GPU and network bandwidth sharing is the focus of the discussion on resource allocation for distributed training. §.§.§ GPU sharing: Although GPUs have found extensive applications in distributed DL, a prevalent issue of underutilization is observed in production clusters. The recorded GPU utilization typically ranges from 25% to below 50% <cit.>. This concern is particularly noteworthy in large-scale distributed computing environments. To address this issue, various distributed technologies have been developed to enable DL tasks to run efficiently on numerous devices <cit.>. GPU sharing strategies typically leverage partial resource allocation through virtualization to mitigate the challenge of low GPU utilization in large-scale distributed DL. NVIDIA, acknowledged as the leading GPU provider, introduces Multiple Process Sharing (MPS) <cit.> that offers an operating-system-level virtualization solution. Nevertheless, its implementation requires application-specific expertise to define resource limits for ensuring performance isolation. Moreover, MPS lacks compatibility with various DL frameworks. To address the performance isolation issue with MPS, another NVIDIA technology, Multi-Instance GPU (MIG) <cit.>, enables the partitioning of a GPU into multiple discrete instances, each with dedicated resources. However, as MIG cannot dynamically adjust the partitions for GPU sharing to fit the GPU requirement of online workloads, it must initially allocate peak GPU resources for online workloads initially and retain them during the entire execution life cycle, leading to a significant waste of GPU resources. To address the problems with efficient performance isolation for online workloads in MPS and MIG, Muxflow <cit.> proposes a two-level protection mechanism to guarantee GPU isolation for online workloads. The workload level protection resides between the CUDA <cit.> drive layer and CUDA runtime layer and controls the offline workloads to protect online workloads. The GPU level protection monitors the GPU device status to enable dynamic adjustment of the GPU memory quota for offline workloads. ByteDance has successfully deployed Muxflow in clusters with more than 20,000 GPUs. Challenge [C1]: Utilizing representative distributed DL workloads and profiling general characteristics so that the profiling result accurately reflects the workload characteristics of the working environment for the GPU-allocation strategy. ∙ Workload profiling: Some solutions leverage the profiling of complex distributed DL workloads of production large-scale clusters or clouds to instruct the GPU allocation strategies, tackling Challenge C1. Gandiva <cit.> leverages the profiles of distributed DL tasks and addresses the issue of GPU underutilization in three key ways. Initially, Gandiva allows incoming jobs to time-share GPUs with existing jobs when overloaded. Then, it permits time-sliced jobs to migrate to other GPUs. Lastly, it supports elastic GPU capacity, increasing the number of GPUs during idle times and reducing the number of GPUs as the load grows dynamically, thereby utilizing idle GPUs effectively. The performance of Gandiva is demonstrated on production clusters at Microsoft. AntMan <cit.> is a production solution for distributed DL clusters at Alibaba. 
It analyzes the cause of GPU underutilization in distributed DL clusters for production use in three aspects: hardware, cluster scheduling, and job behavior. Exploiting the profiles of fluctuating resource demands from distributed training jobs, AntMan co-designs the cluster scheduler and distributed DL framework with dynamic scaling mechanisms for GPU resources during job execution. This approach ensures jobs' service-level objectives (SLOs) in large-scale clusters while enhancing cluster utilization through opportunistic scheduling. Leveraging the analysis of the production trace at Alibaba, Fragmentation Gradient Descent (FGD) <cit.> addresses severe GPU fragmentation in large clusters. FGD minimizes GPU fragmentation growth through task packing to achieve maximum GPU allocation rates. TGS <cit.> provides transparent GPU sharing at OS layer for distributed DL tasks in production clusters of containers. TGS addresses challenges of the lack of application profiling knowledge and the potential oversubscription of GPU memory during the sharing of GPU resources. It tackles the first challenge by monitoring and controlling the rate of sending GPU kernels to the GPU for each container adaptively, aiming to maximize the rate of opportunistic jobs while not affecting that of production jobs. It tackles the second challenge by unifying GPU memory and host memory in a single address space via CUDA unified-memory allocation <cit.> that enables both performance isolation and transparency of GPU memory allocation. Oversubscribed memory of opportunistic jobs is evicted to the host memory automatically, ensuring the performance of production jobs. Strati et al. <cit.> suggest that DNN workloads have numerous data-dependent operators with distinct computation and memory requirements. These individual operators can saturate the GPU computation units or memory bandwidth but often leave other resources underutilized. To address the issue of imbalanced utilization of GPU and other resources, they propose a fine-grained and interference-aware GPU allocator, named Orion, to co-schedule GPU kernels based on the computation and memory profiles of DNN workloads to optimize overall resource utilization. Challenge [C2]: Reducing the latency of GPU context switching for distributed DL workloads, which include offloading and loading of models and data ∙ Context switching: Some work utilizes fast context switching to reduce GPU latency, tackling Challenge C2. GPU context switching refers to a GPU switching between processes when it is executing multiple training jobs or tasks in parallel or sequence. Salus <cit.> achieves low switching latency via a fine-grained GPU sharing policy that exposes two GPU sharing primitives: fast job switching and memory sharing. The former enables rapid preemption and efficient time sharing for the currently active DL job on a GPU, whereas the latter packs smaller distributed DL tasks on the same device to ensure high memory utilization and prevent memory fragmentation. In contrast, PipeSwitch <cit.> supports fast-context switching for pipelines of distributed DL jobs. PipeSwitch optimizes the context switching overhead through model-aware grouping for pipelines and proactive allocating of GPU memory. The model-aware grouping of layers aims to minimize the overhead of transferring the model between CPUs and GPUs during context switching. The proactive allocation of GPU memory for standby workers before it should be active expedites the speed of context switching. 
To prevent job interference, PipeSwitch enforces process-level isolation, by initialing a new separate process for each active-worker task. To minimize the overhead of loading an application from the memory pool to a GPU, DistMind <cit.> exposes the abstractions of the GPU pool and memory pool and incorporates three-stage pipelining, cache-aware load balancing, and DNN-aware sharding in the GPU scheduler to achieve low application loading overhead and high GPU efficiency. G-Safe <cit.> focuses on the safety problem of GPU sharing in multi-tenant environments. It constrains the GPU kernels of each application to stay within the memory partition allocated to them during context switching. Challenge [C3]: Ensuring that the intermediately defined performance goal used for guiding the resource allocation strategy leads to a straightforward improvement in the actual performance goal when estimating the performance of distributed DL jobs in the GPU-allocation strategy. ∙ Performance estimating: Some works employ the performance-estimate-guided approach to enhance GPU-resource allocation, tackling Challenge C3. Both the performance goal and performance-estimation method can vary in these approaches. To illustrate Challenge C3, when a strategy aims to reduce the average job completion time but uses GPU utilization as an intermediate performance estimate, it must ensure that a higher GPU utilization leads to a reduced average job completion time. Optimus <cit.> introduces a dynamic allocation algorithm based on marginal gains, estimating the remaining execution time of a distributed DL task. In this greedy policy, a job with a larger marginal gain will be allocated a higher quota of GPU resources. Harmony <cit.> uses a deep reinforcement learning (DRL) algorithm to place distributed DL jobs on GPU resources that lead to the minimum training or inference time. The learning rewards for unseen placements are guided by historical allocation samples. Horus <cit.> builds a model to predict GPU utilization of heterogeneous distributed DL tasks from computation graph features. It identifies GPU utilization as a general proxy metric for making optimal placement decisions. GPARS <cit.> leverages spatiotemporal correlations among jobs and adopts graph attention networks for precise job duration prediction. It designs a dynamic objective function to allocate suitable GPU types for newly submitted jobs. Challenge [C4]: Determining the timing and quota for expanding resources in elastic distributed training, which requires monitoring runtime performance statistics. ∙ Elastic training: Elastic training, which involves expanding and shrinking resource capacity dynamically, is an important strategy to improve resource utilization and save costs for distributed DL in the cloud environment. Many studies tackle Challenge C4 and focus on elastic GPU memory allocation. For example, Pollux <cit.> adjusts GPU resources available to distributed DL jobs dynamically, aiming to maximize the overall training goodput within the cluster. To improve the efficiency of GPU memory sharing, Zico <cit.> monitors the memory-usage patterns of individual distributed DL jobs by tracking computational progress during training. Based on the monitoring statistics, Zico allocates and deallocates memory among concurrent jobs automatically, ensuring no exceeding of the memory budget. AFS <cit.> points out that handling future jobs requires proactive preparation of resources based on current share calculations. 
When the GPU scheduler estimates that the GPU contention will be heavy in the future, it allocates more resources to long-lasting jobs; otherwise it allocates more resources to short jobs. EasyScale <cit.> utilizes a thread abstraction called EasyScaleThread to preserve consistent training accuracy when the number of workers changes in data-parallel training and proposes intra-job and inter-job GPU schedulers to scale in or out GPUs for workers dynamically. The intra-job scheduler proposes online GPU allocation proposals to the inter-job scheduler to maximize distributed training throughput of a specific distributed DL job, and the inter-job scheduler approves or declines proposals based on marginal speedup and workload balancing considerations. EasyScale can improve GPU utilization in heterogeneous GPU clusters with such two-level elastic GPU scheduling. In contrast, some studies focus on elastic container resources. For instance, FlowCon <cit.> introduces a container placement strategy based on growth efficiency and dynamic resource configuration for elastic allocation and withdrawal of resources during runtime. Challenge [C5]: Improving GPU utilization for batches of multiple hyperparameter tuning jobs, which have mostly homogeneous workloads among different jobs to improve overall training throughput. ∙ Hyperparameter tuning: Hyperparameter tuning workloads represent a batch of distributed jobs with highly similar workload characteristics, which can thus be leveraged for resource allocation. Several studies explore strategies for improving GPU utilization during hyperparameter tuning in distributed DL clusters, tackling Challenge C5. Fluid <cit.> is a distributed DL hyperparameter tuning execution engine that abstracts the hyperparameter tuning process as a sequence of trial groups. It employs a water-filling approach to expedite the hyperparameter tuning process to enhance GPU utilization. Titan <cit.> adopts a heuristic approach by consolidating multiple fine-tuning workloads into one, which is particularly advantageous considering that multiple fine-tuning workloads often share the same model parameters. DISC <cit.> leverages adaptive scaling to adjust the size of GPU time slices occupied by hyperparameter-tuning jobs at runtime. This dynamic allocation of GPU time slices for each hyperparameter tuning job is based on its potential to create a steep increase in the model accuracy. Hydro <cit.> addresses cluster-wise resource utilization and tuning efficiency by incorporating a heterogeneity-aware allocation strategy. This method extends the resources of hyperparameter-tuning workloads by interleaving them with pipeline-enabled large-model training tasks. By effectively utilizing idle time intervals on each node caused by the gaps between the forward and backward processing of micro-batches, Hydro enhances overall resource utilization and tuning efficiency in large-scale distributed DL clusters. Challenge [C6]: Network bandwidth allocation for distributed DL must be based on sufficient knowledge of the workload and coordination of bandwidth resource across the application and network layers. §.§.§ Network bandwidth sharing: In large-scale distributed environments, where communication is often the performance bottleneck, network bandwidth is another significant factor determining the efficiency of distributed training. 
To tackle Challenge C6, network layers, e.g., the transport layer, usually work collaboratively with the application layer, and the network bandwidth allocator can be implemented on either layer, depending on the level of the tasks for which network bandwidth is allocated.

∙ Job: Some work focuses on network bandwidth sharing for multiple distributed training jobs. For instance, Liquid <cit.> proposes a computational and communication-resource-estimation algorithm and a network-efficient job-placement strategy for distributed training jobs. The resource-estimation algorithm models the resource requirements of distributed training jobs, including GPU computing power, GPU memory, and network bandwidth requirements. The job-placement strategy assigns distributed training jobs to a cluster of computing nodes and containers, finding a best-fit job placement solution that satisfies the estimated computational and communication-resource requirements and exhibits less GPU fragmentation and network communication cost across containers.

∙ Gradient block task: A distributed training job can be broken down into multiple training tasks of gradient blocks. Some work focuses on network bandwidth sharing at the granularity of the gradient block task. For instance, Prophet <cit.> groups gradients into blocks based on the profiled time interval and models the distributed training time in terms of the network bandwidth and the order of network transfers of gradient blocks. Based on this model, Prophet searches for an optimal order of network transfers of gradient blocks, aiming to minimize the distributed training time. This optimal order of gradient block transfers optimizes both the network bandwidth sharing among gradient blocks and the overlapping between network transfers and GPU computation.

∙ Coflow: A coflow is an abstraction of several network flows related to a specific communication task, e.g., several gradient transfers or a fraction of them, and is usually scheduled on the transport layer. Some work focuses on network bandwidth sharing for coflows. For instance, Parrot <cit.> perceives the communication pattern of a distributed training job as a series of dependent coflows and estimates the remaining processing time of distributed training jobs based on the amount of information carried per coflow. Parrot allocates network bandwidth to the active coflows of concurrent jobs within the cluster, so that the coflows of jobs with shorter remaining processing times are prioritized when minimizing the effective coflow completion time.

Challenge [C7]: Satisfying SLOs, such as latency and throughput, for distributed inference jobs of various complexities within specific resource constraints.

§.§ Resource Allocation for Distributed Inference

In contrast to distributed training, which caters to long-term offline workloads, distributed inference typically demands real-time execution with more stringent requirements on latency and accuracy. This difference in demands requires resource allocation solutions to address the distinct characteristics of inference workloads effectively. In the distributed inference process, GPU sharing is the focus of research, which primarily faces Challenge C7. To tackle this challenge, various resource allocation methods can be divided into three major categories: spatial, temporal, and hybrid sharing. 
In the context of multiple distributed DL jobs, the spatial sharing of GPUs involves the sharing of GPU space partitions while the temporal sharing involves the sharing of computation time slices of an entire GPU. Hybrid approaches combine techniques from these two categories. ∙ Spatial Sharing: Many existing works exploit spatial sharing of GPUs to optimize the performance of distributed inference tasks. GSLICE <cit.> introduces an inference system that achieves safe and efficient GPU sharing through spatial GPU multiplexing systematically. It utilizes MPS <cit.>, a GPU spatial-multiplexing framework with virtualization, to handle various inference requests. iGniter <cit.> employs an inference performance model to calculate an appropriate batch size and the lower bound of allocated GPU resources. Subsequently, it allocates GPU resources for each inference workload by employing a greedy approach to identify the placement GPU devices that can achieve minimal performance interference. The SLO-aware ML Inference Framework <cit.> designs a resource auto-scaling strategy in the cloud by leveraging rich and precise workload-specific metrics, with a special consideration of the heterogeneity in the GPU computational capability. This effective and elastic management of resources ensures meeting the SLO for diverse inference workloads in the cloud. Tackling the problem that large models may not be deployed on a single GPU, AlpaServe <cit.> utilizes queuing theory to mathematically verify the benefits of model parallelism and searches for a partitioning strategy that minimizes the stage imbalance for inter-operator model parallelism. ∙ Temporal Sharing: Recent temporal-sharing approaches designed for specific distributed inference systems have shown improvements in GPU utilization, especially in cloud environments shared by numerous tenants. Nexus <cit.> employs a heuristic approach to select requests for co-location on the same GPU. Initially, it determines the most suitable batch size to meet throughput and SLO requirements for the existing inference workloads. Subsequently, Nexus identifies all possible combinations within a GPU's duty cycle on a single GPU in a best-fit manner, maximizing utilization without violating latency requirements. Focusing on inference services in the cloud, INFaaS <cit.> addresses the problem of co-location interference arising from shared hardware resources. It allocates available resources to interfered instances through workload migration or virtual-machine-level scaling, aiming to reduce monetary costs through GPU sharing while meeting latency requirements via virtual-machine-level scaling. Cocktail <cit.> scales the virtual machine resources for various inference models in the cloud automatically and proactively based on the predicted workload and popularity of these models. This approach enhances the efficiency of resource allocation in distributed DL inference systems with a specific set of supported inference models. ∙ Hybrid Sharing: Several works study the hybrid GPU sharing approaches, considering both spatial and temporal sharing. Gpulet <cit.> supports spatial sharing of GPUs via the abstraction of virtual GPUs that are split partitions derived from physical GPUs. Given allocated virtual GPU resources, Gpulet supports temporal sharing by scheduling the batch sizes of inference jobs of multiple tenants, with a goal to guarantee the SLO. This hybrid design enables cost-effective cloud-resource allocation for the inference of numerous heterogeneous DL models. 
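To illustrate the kind of SLO-aware batching and co-location logic discussed above, the following sketch greedily selects batch sizes under a simple profiled latency model and packs inference services into one GPU duty cycle. The latency model, its coefficients, and the duty-cycle abstraction are illustrative assumptions rather than the policies of the cited systems.

```python
def estimated_latency_ms(batch_size, alpha=2.0, beta=1.5):
    """Profiled latency model: fixed overhead plus per-request cost (illustrative)."""
    return alpha + beta * batch_size

def max_batch_under_slo(slo_ms, max_batch=64):
    """Largest batch size whose estimated latency still meets the SLO (0 if none)."""
    best = 0
    for b in range(1, max_batch + 1):
        if estimated_latency_ms(b) <= slo_ms:
            best = b
    return best

def pack_models_on_gpu(models, duty_cycle_ms):
    """Greedily co-locate services whose per-batch latencies fit into one duty cycle."""
    placed, used = [], 0.0
    for name, slo in sorted(models, key=lambda m: m[1]):       # tightest SLO first
        batch = max_batch_under_slo(slo)
        cost = estimated_latency_ms(batch)
        if batch > 0 and used + cost <= duty_cycle_ms:
            placed.append((name, batch))
            used += cost
    return placed

# Example: two inference services sharing one GPU duty cycle of 60 ms.
print(pack_models_on_gpu([("detector", 40.0), ("ranker", 25.0)], duty_cycle_ms=60.0))
```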
FaST-GShare <cit.> utilizes spatial and temporal sharing of GPUs to maximize inference function throughput in the Function-as-a-Service serverless architecture for distributed DL. It supports auto-scaling of inference resources in the cloud based on the profiling of function throughput and resource allocation, maximizing GPU utilization while ensuring the SLO.

§.§ Discussion

∙ Fine-grained and elastic GPU allocation strategies are critical for improving GPU utilization. Coarse-grained or even exclusive-access GPU allocation for individual distributed DL jobs is common in small clusters but can lead to extremely low GPU utilization in the data center environment. Fine-grained GPU allocation strategies for diverse distributed training and inference workloads, which share GPU computational resources among multiple jobs and subtasks, are crucial for improving GPU utilization, reducing memory fragmentation, and ensuring performance isolation. However, as resource requirements can fluctuate during the long-term distributed training process, elastic GPU allocation strategies are also important for fully utilizing GPU resources while maintaining SLOs from the cloud providers' perspective. Both strategy approaches require the support of virtualization technologies and a deep understanding of the workload characteristics of distributed DL. Moreover, the latter approach also requires knowledge of runtime workload performance, which can be obtained through performance monitoring and dynamic adaptation techniques.

∙ High-performance large-scale distributed DL requires the orchestration of efficient allocation of GPU and network resources. The allocation of network resources is frequently overlooked, even though the network is often a bottleneck for efficient resource utilization in distributed DL. Many resource allocation strategies for distributed DL focus on addressing computation issues, such as low utilization, load imbalance, and long queuing delays. However, as the cluster scale increases, the complexity of GPU network connections grows exponentially, and a lack of consideration for efficient network-resource allocation can result in significantly low job-execution performance in large-scale distributed DL. Efficient network bandwidth allocation strategies can alleviate communication contention. Fully utilizing both GPU and network bandwidth resources leads to enhanced overall performance of distributed training and inference on a large scale.

∙ Heterogeneity in resources and workloads is a significant consideration for effective resource allocation strategies for distributed DL. Heterogeneous resources and workloads are pervasive in the data center environment, which has large-scale resources and numerous tenants. On the one hand, computing nodes and networks in various specifications and configurations introduce resource heterogeneity, and the heterogeneity that affects the performance of distributed DL most lies in the GPU computational capacity and in the network protocol, bandwidth, and topology. A lack of consideration for resource heterogeneity can either underestimate the capacity of some resources and cause underutilization or overestimate the capacity of other resources and cause resource contention. On the other hand, the workload characteristics of distributed DL jobs can also be heterogeneous across tenants, processing stages, and time periods. 
Lack of consideration for heterogeneous workloads can cause inaccurate estimation of workload performance, which results in inferior resource allocation decisions. § WORKLOAD SCHEDULING In large-scale GPU clusters with complex network connections, scheduling distributed DL workloads effectively is critical for ensuring the high performance of task execution, optimal hardware utilization, and achievement of various scheduling objectives. Training and inference stages of distributed DL are widely recognized as particularly computation and communication-intensive <cit.>. The following section studies workload scheduling strategies on training and inference workloads and focuses on providing efficient communication or overlapping computational and communication tasks for overall efficiency in large-scale distributed DL. §.§ Distributed Training Scheduling Efficient workload scheduling strategies are crucial for distributed training workloads, especially in large-scale settings with large data, models, and device scales. Large-scale distributed training involves iteratively executing massive computational tasks for feedforward and backpropagation calculation and communication tasks for data flowing and model synchronization. It is a long-term and computation-intensive process that requires efficient scheduling strategies to improve execution parallelism and completion time and meet various performance goals. In this subsection, we survey workload scheduling strategies of large-scale distributed training with various performance goals and scheduling granularity levels. Table <ref> summarizes these strategies, which are categorized by various performance goals, including throughput, cost efficiency, and deadline guarantee goals, while the strategies focusing on distributed training throughput are further classified into three categories based on the scheduling granularity: the job, pipeline, and network flow. §.§.§ Throughput The throughput of distributed DL refers to the speed at which jobs or tasks are completed or the amount of work accomplished per unit of time. It is one of the most critical performance goals of distributed training scheduling <cit.> and is determined synthetically by various factors, including resource utilization, parallelism level, and communication overhead. Workload scheduling strategies usually work on the job, pipeline, and network flow levels to achieve high throughput. Challenge [C8]: (1) Online scheduling of distributed DL jobs whose arrival and completion times are unpredictable to achieve high throughput; (2) Resource-aware and workload-aware scheduling of distributed DL jobs in complicated, heterogeneous, or opaque resource structures; (3) Efficiently solving complex distributed DL workload scheduling problems with various resource constraints and workloads. ∙ Job-level scheduling. Scheduling distributed training at the job level, which involves the reordering job execution priorities and the placement of jobs on GPUs, is one of the most common and effective scheduling approaches <cit.>. Job-level scheduling for distributed training workloads faces several challenges as stated in Challenge C8, which include several aspects: online scheduling, resource-aware scheduling, and complexity. For Challenge C8(1) about online scheduling, on the one hand, the unpredictable job arrival time requires a prompt online scheduling decision for each job upon its arrival, which may trigger significant preemption overhead if the system allows preemptive scheduling. 
On the other hand, the complex workload characteristics and resource topology make it hard to predict the job completion time accurately, and an inaccurate estimate can impede the scheduling algorithm from achieving high throughput. For Challenge C8(2) about resource-aware scheduling, the distributed DL workload scheduler should match workloads with large-scale resources, especially when the network topology of GPUs and nodes is complicated, heterogeneous, and sometimes even opaque and unobservable, e.g., in a multi-available-zone cloud environment. For Challenge C8(3) about scheduling complexity, the complexity of the job-placement problem can increase exponentially with the scale of the cluster with various resource constraints and workloads, requiring efficient and practical algorithms to find the optimal scheduling solution. Some studies refine the job priority algorithm to tackle the preemption problem of online job scheduling. For example, Tiresias <cit.> draws inspiration from the classic Multi-Level Feedback Queue (MLFQ) algorithm <cit.> and develops a priority discretization approach to mitigate issues related to frequent preemption. In addition, Tiresias uses a Least-Attained-Service (LAS) algorithm to prioritize jobs based on their service levels, which are quantified by the product of requested GPU resources and execution time, to avoid scheduling starvation. Some studies utilize resource topology-aware and workload-aware scheduling algorithms to improve performance estimation. For resource topology-aware solutions, OSDL <cit.> designs algorithms for job placing and scheduling of distributed training jobs in hybrid networks with optical circuit switching (OCS) and electrical packet switching (EPS). The job placing algorithm utilizes the hybrid network topology information to use lightpaths reasonably, and the job scheduling algorithm jointly optimizes bandwidth requests of distributed training jobs in the OCS and EPS domains. Heet <cit.> proposes a 3D collaborative filtering method to accurately measure the scaling efficiency of all elastic configurations on heterogeneous nodes, substantially reducing profiling overhead. Meanwhile, Heet utilizes a price function to effectively balance scaling efficiency and scheduling efficiency. For workload-aware solutions, FfDL <cit.>, an open-source scheduling platform developed by IBM, incorporates operational insights from industry practices to strike a balance between dependability and scalability, while maintaining elasticity, flexibility, and efficiency. In a related study, Philly <cit.> performs a comprehensive analysis by correlating logs of the scheduler with logs of individual jobs, examining the impact of gang scheduling and locality constraints on queuing delay and job completion time. Drawing on insights from this analysis, Philly advocates relaxing locality constraints to enhance job time efficiency. Unlike the above methods, which rely on job completion time estimates or prior knowledge, E-LAS <cit.> utilizes real-time epoch progress rates specific to distributed training jobs, combined with service metrics derived from temporal and spatial domains, to inform scheduling decisions. E-LAS surpasses Tiresias in training throughput by reducing the average completion time for distributed training jobs. CASSINI <cit.> is a network-workload-aware distributed training job scheduler that uses a geometric circle abstraction with angular rotations to represent time shifts for communication workload patterns. 
It schedules different time shifts to distribute communication workloads on network links, interleaves communication workloads on the same network link, and reduces job completion time. Liu et al. <cit.> leverage proportional workload assignment on a heterogeneous GPU cluster to maximize distributed training throughput and minimize job completion time. To reduce the scheduling computational complexity, they propose constructing the sparsification of feasible solutions through sampling, which can significantly decrease the decision-making latency. In addition to common workload schedulers, which schedule distributed training workloads, some studies explore various configurations of existing workload schedulers to find the best configuration for specific workloads. For example, AutoSched <cit.> develops a workload generation engine to produce training workloads that can reveal future trace patterns, which facilitates accurate and efficient configuration tuning of distributed training workload schedulers. With the generated workload trace, AutoSched searches for the best configuration via a learnable causal model. AutoSched is supposed to be a general configuration-turning framework for various off-the-shelf distributed training schedulers, including Tiresias. Several methods tackle the scheduling complexity by modeling the scheduling problem as an optimization problem and applying dynamic programming or DRL algorithms to solve the problem efficiently. SMD <cit.> presents a resource-scheduling analytical model that accommodates multiple jobs competing for communication bandwidth. This model treats the scheduling problem as a non-convex integer non-linear program with bin-packing constraints. SMD introduces an ϵ-approximation algorithm for its resolution, termed the sum-of-ratios multi dimensional knapsack decomposition. Sched^2 <cit.> utilizes DRL to schedule distributed training jobs with a locality-aware approach. This method tries to understand both the locality sensitivity of jobs and the fragmentation condition of clusters comprehensively within the entire learning stack. Through this heightened awareness, the DRL model adjusts its scheduling decisions dynamically and adaptively, responding effectively to the varying locality sensitivities of individual jobs and the evolving state of cluster fragmentation. MLFS <cit.> employs data from heuristic scheduling methods to train a DRL model and subsequently uses this model to make informed decisions about job scheduling autonomously. Yang et al. <cit.> propose a meta-learning-based DRL method to improve the job completion time by adaptively scheduling communication workloads in data-parallel training. To address the issue of massive samples in DRL and improve DRL efficiency, the proposed method trains a performance model to predict the training time and guide the DRL exploration strategy into an effective search space. Challenge [C9]: Partitioning workloads for load balancing across different workers and optimizing the execution order to reduce pipeline stall and memory overhead in pipeline-level scheduling for distributed training workloads. ∙ Pipeline-level scheduling. In the pipeline parallelism mode of distributed training, pipeline-level scheduling divides training mini-batches into micro-batches and manages the sequential processing of micro-batch tasks within a pipeline architecture. This level of scheduling is widely adopted by large-model distributed training jobs. 
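A minimal sketch of such a micro-batch pipeline schedule is given below. It assumes idealized unit-time forward and backward tasks and a GPipe-style ordering purely for illustration; the idle cells visualize the pipeline stall that more advanced schedules aim to reduce.

```python
def micro_batch_schedule(num_stages, num_microbatches):
    """Build a GPipe-like schedule grid: grid[stage][slot] = task or '.' (idle)."""
    total_slots = 2 * (num_stages + num_microbatches - 1)
    grid = [["." for _ in range(total_slots)] for _ in range(num_stages)]
    # Forward pass of micro-batch m reaches stage s at time slot s + m.
    for m in range(num_microbatches):
        for s in range(num_stages):
            grid[s][s + m] = f"F{m}"
    # Backward passes start after the forwards drain, in reverse stage order.
    fwd_end = num_stages + num_microbatches - 1
    for m in range(num_microbatches):
        for s in range(num_stages):
            grid[s][fwd_end + m + (num_stages - 1 - s)] = f"B{m}"
    return grid

for stage_id, row in enumerate(micro_batch_schedule(num_stages=3, num_microbatches=4)):
    print(f"stage {stage_id}: " + " ".join(f"{cell:>3}" for cell in row))
```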
This scheduling approach orchestrates computational and communication tasks across various stages in a pre-defined execution order and aims to improve their execution parallelism. As a result, the execution of computational and communication tasks of the same or different stages overlap, which increases the overall pipeline throughput. This approach primarily faces Challenge C9. The pipeline stall refers to the phenomenon of a faster stage halting to wait for dependent slower stages to catch up, which can lead to low pipeline utilization and high memory overhead. The memory overhead is the space required to retain the results of the feedforward phase in the memory for the later calculation of the backpropagation phase in each micro-batch. Pipeline parallelism is the state-of-the-art approach for large-model training. GPipe <cit.>, a pioneer in utilizing pipeline parallelism to train large models, distributes layer-wise model partitions across multiple GPUs and splits mini-batches into micro-batches for pipelining execution. It reduces the pipeline memory overhead by recomputing the activations of the feedforward phase again in the backpropagation phase. This library can achieve nearly linear convergence speedups and offer the flexibility to scale to various DNN models of immense sizes efficiently. However, GPipe assumes a partitioned model for pipelining is readily available or specified manually by users and does not design a model partitioning scheme for load balance. To design an efficient model partitioning scheme for pipeline-level scheduling, some studies focus on balancing workloads across workers with hardware constraints. PipeDream <cit.> builds a heuristic model to determine the workload to be partitioned on each worker to balance workloads and minimize communication overheads. The model considers various constraints, including the model scale, training iteration, device memory capacity, hardware topology and bandwidth, and number of workers, and the decision result relies on inputs from a short profiling run. To evenly distribute workload among worker, PipeDream also integrates data parallelism with pipeline parallelism at certain stages. AutoPipe <cit.> introduces an adaptive method to achieve balanced partitioning. It first generates a relatively balanced model partition scheme through dynamic programming. It then refines the scheme using the heuristic that pipeline completion time can be reduced by moving certain stages in the pipeline's critical execution path forward or backward in the timeline. However, both PipeDream and AutoPipe focus only on the homogeneous GPU setting. To address pipeline load balancing in heterogeneous GPU clusters, HetPipe <cit.> partitions large DNN models to minimize the maximum completion time of the partitions within heterogeneous GPU memory bounds of multiple virtual workers in the pipeline. To reduce the communication overhead of fully synchronous pipeline parallelism, HetPipe introduces a wave-synchronous-parallel approach to allow bounded model staleness within a wave of micro-batches but guarantees convergence. Some studies focus on fine-grained workload pipelining schemes to maximize pipeline throughput. For instance, Piper <cit.> focuses on fine-grained model partitioning, while MG_WFBP <cit.>, DeAR <cit.> and ScheMoE <cit.> focus on fine-grained overlapping of computational and communication tasks. 
For fine-grained model partitioning, Piper <cit.> supports tensor-wise model parallelism, which is not supported in prior work, in addition to data parallelism and layer-wise model parallelism in its model partitioning scheme for pipeline scheduling. It applies a two-level dynamic programming algorithm to search for the optimal partitioning of a DNN model to maximize pipeline throughput within memory constraints. With the increased search space, Piper can find high-quality parallelism configurations with high pipeline throughput. For fine-grained computation and communication overlapping, MG_WFBP <cit.> divides the calculation of a backpropagation task into numerous subtasks separated by merged-gradient layers, which stand as trigger points for model synchronization in data-parallel training. As a result, the communication of model synchronization in a subtask can overlap with the computation of backpropagation in a subsequent subtask. In contrast, DeAR <cit.> decouples the all-reduce primitive into two continuous operations, reduce-scatter and all-gather. This decoupling enables overlapping the communication tasks of the previous stage with the feedforward tasks of the next stage in the pipeline execution, reducing the communication overhead of model synchronization in data-parallel training. Focusing on the distributed training of mixture-of-experts models, whose communication bottleneck is caused by all-to-all collective communication, ScheMoE <cit.> pipelines communications with expert computations by virtually partitioning input tokens into multiple smaller tensors to increase the chance of task overlapping. Some studies focus on optimizing the pipeline execution order to reduce pipeline stall. For example, Chimera <cit.> applies bidirectional pipelines that are composed of two pipelines executing stages in reverse directions in a one-forward-one-backward (1F1B) manner. Computational tasks of different micro-batches are mostly overlapped on different workers, and the resultant bidirectional pipelines execute in a more compact manner than in PipeDream. Chimera also builds a model for determining the optimal number of pipeline stages and the number of replicated pipelines, whose values rely on empirical results as inputs. Out-Of-Order (OOO) BackProp <cit.> leverages gradient computation dependencies to reorder stage executions in the pipeline to maximize GPU-resource utilization. In data-parallel training, OOO reorders the sequence of gradient computations to maximize the overlap between computation and parameter communication. In pipeline-parallel training, it prioritizes critical gradient computations to minimize pipeline stall. Some studies focus on reducing GPU memory consumption and recomputation cost for pipeline execution. On the one hand, though the bidirectional pipeline approach of Chimera can achieve low pipeline stall, it keeps multiple model replicas in the two pipelines, which requires large GPU memory consumption. Hanayo <cit.> mitigates the issue of excessive memory consumption by running multiple waves of forward and backward stages in a pipeline to reduce pipeline stall while not increasing GPU memory consumption. MixPipe <cit.>, another bidirectional pipeline approach for synchronous data-parallel training, regulates a flexible number of micro-batches injected into the bidirectional pipelines to balance pipeline and device utilization. MixPipe also features a mixed scheduling of 1F1B and 2F1B to balance memory usage and pipeline stall. 
On the other hand, though the recomputation strategy for the backward stage in the pipeline can relieve memory consumption, its cost can be non-negligible. To balance memory saving and computation cost in recomputation, AdaPipe <cit.> models the memory and time cost of different recomputation strategies and introduces an adaptive recomputation mechanism to allow different recomputation strategies, e.g., partial and full recomputation, for different stages in a pipeline. AdaPipe achieves maximum saved recomputation cost within memory limits. Challenge [C10]: Scheduling network flows at different granularity level to increase bandwidth utilization, network latency, and network congestion for distributed DL workloads. ∙ Network-flow-level scheduling. Efficient network flow scheduling determines the transmission priority of data packets, network flows, and coflows related to distributed DL jobs, aiming to significantly increase network bandwidth utilization, reduce network latency, and avoid network congestion, as stated in Challenge C10. Network flow scheduling can work at various granularity levels, including the job, coflow, and data packet levels. Some studies focus on the job level. JPAS <cit.> implements a straightforward greedy mechanism to organize all distributed training jobs periodically. This approach enables each host machine to prioritize its network flows according to the established job order, delegating the task of flow scheduling and rate allocation to the underlying priority-enabled networks. Tereis <cit.> explores the utilization of idle GPU computational resources during data transmission periods. It predicts the completion time for a distributed DL job and its corresponding data transmission time, allowing for the simultaneous packaging of two jobs on the same GPU. This ensures that one job is completed before the other concludes its data transfer. Some studies focus on the coflow level. Geryon <cit.> employs multiple flows with varying priorities to transfer parameters of different urgency levels. This approach coordinates multiple PSs effectively and gives precedence to urgent parameter transfers across the entire network fabric. Beamer <cit.> focuses on reducing the stage-completion time (SCT) by considering stage information in its scheduling approach. It proposes a stage-aware coflow-scheduling method to minimize the average SCT. Some other studies focus on the data packet level. To address in-network delays, such as queuing delays, TensorExpress <cit.> shifts priority scheduling to the transport layer, focusing on the packet granularity. It enables each switch to transmit tensor packets according to their priorities using multiple queues. This method ensures that high-priority data packets are handled efficiently to minimize delays. Similarly, Mercury <cit.> transmits packets with the highest priority in the Mercury buffer first. Additionally, Mercury incorporates immediate aggregation at the transport layer, enabling full overlapping of gradient push-and-pull operations. This approach not only streamlines data flow but also maximizes the efficiency of network resource utilization. Challenge [C11]: Jointly optimizing energy consumption, monetary cost, and throughput for distributed DL workloads with awareness of cloud resources and policies from the perspectives of cloud service providers or service users. 
§.§.§ Cost efficiency
The cost-efficiency objective of distributed training scheduling aims to minimize operational costs while ensuring optimal performance for distributed training workloads, especially in the cloud environment. It primarily faces Challenge C11 and focuses on a balance between resource utilization, energy consumption, and monetary expenditures in the scheduling decisions. Cynthia <cit.> offers predictable distributed training performance while reducing the training budget. This scheduler identifies the optimal resource type and maintains training throughput effectively, thereby minimizing monetary costs. Similar to Cynthia, FC^2 <cit.> is a scheduler that recommends cost-effective cloud resource allocations for parameter servers in distributed training tasks. It prioritizes instances with the largest network bandwidth within the budget to circumvent communication bottlenecks. Furthermore, it introduces a heuristic named Scale-Opt for determining worker instances, ensuring job throughput, and maximizing cost savings. Jahani <cit.> considers computing nodes with varying numbers of GPUs as distinct virtual machines. The scheduling process is modeled as a mixed-integer linear programming (MILP) problem, aiming to reduce leasing costs globally while maintaining job latency. GPOEO <cit.> achieves significant power savings for training workloads. It can be integrated into GPU data centers easily, utilizing a customized scheduler to manage job orchestration. STS <cit.> optimizes the scheduling of distributed training jobs from the perspective of cloud service providers operating data centers. It leverages the probability distribution of early job termination to adapt resource assignments during job execution, with the aim of minimizing the expected energy cost.

Challenge [C12]: Accurately estimating job completion or remaining times based on workload monitoring statistics in distributed DL workload scheduling to guarantee deadlines in the cloud environment.

§.§.§ Deadline Guarantee
Deadline-guaranteed scheduling focuses on ensuring the completion of distributed DL jobs before a specified deadline for jobs whose timing is a crucial consideration. This performance goal is more common in the cloud environment, where cloud providers can elastically scale resources for distributed training workloads to guarantee the SLO for cloud users. Achieving this performance goal primarily faces Challenge C12. GENIE <cit.>, a trailblazing deadline-aware scheduler for distributed training workloads, explores the key factors that impact the performance of distributed DL tasks. It introduces a predictive model based on lightweight profiling, enabling an accurate estimation of the processing rate and response latency for a variety of distributed DL workloads. However, a significant limitation of GENIE is that it is unable to handle mixed workloads that include both deadline-sensitive tasks and best-effort tasks simultaneously <cit.>. Chronus <cit.>, an end-to-end scheduling system, meets SLOs by guaranteeing deadlines for SLO-aware jobs while also enhancing the performance of best-effort jobs. This dual-focused strategy enables Chronus to manage a wide range of workload requirements. By extending these studies, Hydra <cit.> emerges as a dynamic and multifaceted scheduler to tackle various scheduling challenges, including adhering to deadlines and reducing job completion times. Hydra introduces a sampling approach leveraging the iterative periodicity inherent in distributed DL jobs, as sketched below.
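To make this concrete, a periodicity-based estimate of the remaining run time can be computed from a few profiled iterations; the function name, the median-based rule, and the numbers below are illustrative assumptions rather than Hydra's actual implementation.

import statistics

def estimate_remaining_time(sampled_iteration_times, completed_iters, total_iters):
    # Iterative DL jobs are highly periodic, so the median of a few sampled
    # iteration durations extrapolates well to the remaining iterations.
    per_iter = statistics.median(sampled_iteration_times)
    return per_iter * (total_iters - completed_iters)

# Example: three profiled iterations (seconds) on one GPU type,
# with 1,000 of 10,000 iterations already completed.
print(estimate_remaining_time([1.98, 2.05, 2.01], 1_000, 10_000))  # ~18,090 s

Keeping separate samples per GPU type allows the same job to be estimated on heterogeneous devices.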
This technique enables precise estimation of job completion times in heterogeneous GPU environments, thereby improving the efficiency and effectiveness of scheduling for various distributed DL workloads. In contrast to other work that usually optimizes a specific scheduling stage to guarantee deadlines for distributed training jobs, UniSched <cit.> adopts a mixed integer linear programming framework to jointly optimize job profiling, job scheduling, and resource allocation to satisfy various scheduling objectives, including the deadline SLO and latency. Two key components support the optimization of UniSched: an estimator for estimating job completion time and a selector for selecting jobs and allocating resources.

§.§ Distributed Inference Scheduling
The scheduling of distributed inference workloads on available GPUs to meet various performance requirements is critical for the application of distributed DL models, especially as online services. Distinct from distributed training workloads, which are typically iterative, long-term, and resource-intensive, distributed inference workloads exhibit another set of characteristics: one-round, short-term, and lightweight <cit.>. In line with these workload differences, the scheduling of distributed inference workloads also focuses on latency in addition to cost efficiency and throughput. Table <ref> summarizes these distributed inference-scheduling strategies, focusing on various performance goals, including latency, cost efficiency, and throughput.

Challenge [C13]: Profiling workload characteristics of distributed inference jobs and scheduling them in low-latency and cost-efficient manners with an awareness of resource budgets.

§.§.§ Latency and cost efficiency
Scheduling distributed inference jobs faces Challenge C13. The inference latency refers to the time it takes to make a prediction given an inference query. To maintain satisfactory latency, distributed inference schedulers are designed to scale resources proactively in response to request density and to reorder execution sequences strategically at the job level. For example, Sniper <cit.> stands out as a self-updating cloud-edge collaborative inference scheduling system with a focus on time awareness. It abstracts heterogeneous hardware resources and employs a non-invasive performance characterization model to predict the inference time of DNNs accurately based on neural network similarity. This system achieves a stable increase in throughput even in dynamic cloud-edge environments, demonstrating its effectiveness and robustness in optimizing distributed inference scheduling. Ace-Sniper <cit.> extends Sniper by including software platform information in the resource abstraction, such as the CUDA and PyTorch libraries, to tackle heterogeneous hardware and platforms for distributed inference. Distributed inference latency is more of a concern in wireless networks, where communications are usually unstable and devices are heterogeneous. AP^2 <cit.> aims to minimize distributed inference latency in 6G mobile communication systems under communication, device heterogeneity, and task-dependency constraints. It estimates task completion time on different devices based on profiling results and adopts a genetic algorithm <cit.> to optimize the task arrangement for minimized inference latency while maintaining system reliability. In practice, cost efficiency is another critical factor for distributed inference, especially when used in cloud services.
AutoDeep <cit.> automates cloud deployment for real-time online DNN inference, focusing on minimizing costs while satisfying latency constraints. To achieve this, AutoDeep utilizes Bayesian optimization combined with DRL, which enables the adaptive discovery of the optimal cloud configuration and device placement and reduces the required searching time significantly. Through this method, AutoDeep efficiently achieves a trade-off between operational costs and latency in DNN inference workloads. HexGen <cit.> improves distributed inference cost efficiency for large generative models over heterogeneous GPU devices. It applies asymmetric tensor-wise and layer-wise model partitioning for pipeline-parallel inference and aims to minimize communication and computation costs under heterogeneous GPU memory constraints. Latency and cost efficiency are recognized as interdependent objectives in inference system design. Improving one objective may inadvertently compromise the other if the solution is not designed meticulously, which motivates researchers to develop scheduling systems that optimize both objectives simultaneously.

Challenge [C14]: Scheduling many distributed inference jobs with diverse workload characteristics to improve the inference throughput in the cloud.

§.§.§ Throughput
Scheduling batches of distributed inference jobs in the cloud also faces Challenge C14. To tackle this challenge, researchers typically refine the scheduling system for distributed inference workloads to enhance throughput through batch execution and configuration adjustments.

∙ Batch execution: Batching inference has been identified as an efficient method to enhance resource utilization and reduce scheduling overhead <cit.>. Various schedulers incorporate heuristic methods to fine-tune the batch size for optimal performance. For instance, Rafiki <cit.> employs a practical Additive-Increase Multiplicative-Decrease (AIMD) algorithm to adjust the inference batch size dynamically. This approach allows for responsive adaptation to varying workload conditions. Nanily <cit.> establishes an upper limit on the batch size by calculating the maximum remaining time for a request, which is determined by subtracting the minimum queuing time of available resources from the remaining time. It then computes an appropriate batch size such that the inference completion time equals or approximates this maximum remaining time.

∙ Configuration adjustment: In addition to the batch-execution approach, certain schedulers employ end-to-end configuration tuning to enhance distributed inference throughput. RRL <cit.> emphasizes the optimization of parallel configurations at various levels, including inter-request-level and intra-request-level parallelisms. This optimization significantly reduces the overall system latency and improves the throughput. In cloud environments, distributed inference throughput is significantly affected by client queries per second (QPS) and the number of parallel workers in the inference system. IRIS <cit.> adaptively adjusts the parallelism level based on the online inference QPS predicted by a model pre-trained with monitoring profiles in an offline phase. IRIS integrates the parallelism-level scheduling algorithm into the container orchestration platform, increasing overall computational resource utilization and throughput for distributed inference within the cluster.
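As a rough illustration of how such batch-execution and configuration-adjustment policies operate, the following sketch combines an AIMD-style batch-size rule (in the spirit of Rafiki) with QPS-driven worker scaling (in the spirit of IRIS); the thresholds, function names, and monitoring values are assumptions made for illustration, not the actual parameters of these systems.

import math

def aimd_batch_size(batch, latency, slo, add=4, mult=0.5, max_batch=128):
    # Additive increase while the latency SLO is met; multiplicative decrease otherwise.
    if latency <= slo:
        return min(batch + add, max_batch)
    return max(int(batch * mult), 1)

def workers_for_qps(predicted_qps, per_worker_qps, min_workers=1):
    # Choose a parallelism level that covers the predicted query rate.
    return max(min_workers, math.ceil(predicted_qps / per_worker_qps))

# One step of the control loop with made-up monitoring values.
print(aimd_batch_size(batch=32, latency=0.085, slo=0.100))     # -> 36
print(workers_for_qps(predicted_qps=900, per_worker_qps=250))  # -> 4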
Morphling <cit.>, on the other hand, presents a rapid and near-optimal auto-configuration framework designed specifically for cloud-native model serving. This framework adapts to new inference services by sampling a limited set of configurations and then employs a meta-model to identify the optimal configuration. This strategy allows Morphling to adjust quickly and efficiently to various service requirements while maintaining high system performance.

§.§ Discussion
∙ Fine-grained workload ordering and overlapping is key to scheduling large-scale distributed DL workloads in various parallelism modes. Job-level scheduling is important for online instant workloads and offline batch workloads when the distributed DL workloads are relatively lightweight in the multi-tenant data center environment. However, as the volumes of models and resources increase rapidly, pipeline-level scheduling for large models in large-scale clusters, a scheduling approach orthogonal to job-level scheduling, is essential for contemporary distributed training. Though they follow different training procedures, data-parallel and model-parallel training can both leverage the pipeline to optimize the execution order and maximize the overlapping of different processing stages, including computation-computation, computation-communication, and communication-communication overlapping. In practice, practitioners pipeline the workloads of hybrid training-parallelism modes for greater training throughput.

∙ Solving complex distributed DL workload scheduling problems typically requires DRL. Distributed DL workload scheduling can be a complex problem, especially in large-scale data centers with various resource and performance-goal constraints. Firstly, DRL can quickly adapt to constantly changing distributed DL environments and workloads by optimizing the policy through trial and error. Secondly, DRL can internally train DNNs for policy and value decisions to efficiently explore the vast search space of large-scale distributed DL workload scheduling. Thirdly, DRL can make real-time decisions for distributed DL workload scheduling.

∙ Though throughput is essential for distributed DL workload scheduling, cost efficiency is an increasing concern. As the energy and monetary cost of distributed training and inference increases exponentially with large datasets and models, cost efficiency has become a decisive factor when deploying a training or inference process in the cloud from both providers' and users' perspectives. On the one hand, cloud providers must measure the cost of scheduling dynamic distributed DL workloads on diverse resources and design a competitive cost model for distributed training and inference services. On the other hand, cloud users need to estimate the cost of distributed training or inference based on the cost model and strike a balance between cost and other performance goals, such as throughput.

§ DISTRIBUTED TRAINING OF LLMS: A CASE STUDY
Recently, with the tremendous success of the application of LLMs <cit.> in various domains, such as NLP <cit.>, programming <cit.>, finance <cit.>, and medicine <cit.>, efficient distributed training and fine-tuning of LLMs have become a prominent and important topic for researchers in the fields of computer science, artificial intelligence, and communications. As contemporary LLMs reach ultra-large sizes of up to hundreds of billions of parameters, training them typically requires hundreds of billions of tokens in the training dataset, hundreds of GPUs, and tens of days <cit.>.
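To give a sense of this scale, a common back-of-the-envelope rule estimates training compute as roughly 6 × (number of parameters) × (number of tokens) floating-point operations; the model size, token count, per-GPU throughput, and utilization in the sketch below are illustrative assumptions only.

def training_days(params, tokens, gpus, flops_per_gpu=312e12, utilization=0.4):
    # Total training compute is roughly 6 * N * D floating-point operations
    # (forward + backward); divide by the sustained cluster throughput.
    total_flops = 6 * params * tokens
    cluster_flops = gpus * flops_per_gpu * utilization
    return total_flops / cluster_flops / 86_400  # seconds -> days

# Hypothetical example: a 40B-parameter model trained on 500B tokens with 768 GPUs.
print(round(training_days(params=40e9, tokens=500e9, gpus=768), 1))  # ~14.5 days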
Efficient distributed DL resource-allocation and workload-scheduling strategies that can scale well to large data centers and workloads are critical for LLM training. This section examines several real cases of training LLMs in the existing literature <cit.> to uncover insights and practical considerations for applying these distributed DL framework strategies in a large-scale setting.

∙ What are the important considerations for allocating resources across multiple data centers for LLM training? As LLM training requires a large volume of computational resources, the resources available in a single data center may not be able to support an LLM training job. Collaborative training of LLMs across multiple geographically separated data centers, which form a computational power network that shares information and resources, has become a common practice <cit.> and faces several new challenges. Firstly, compared to distributed training within a single data center, resource allocation across data centers faces significant challenges due to the heterogeneity in various computational and communication resources. Secondly, with many tenants across multiple data centers, performance isolation enabled by various virtualization technologies must be ensured to prevent performance interference between different tenants and workloads. Thirdly, fault tolerance and data security are major concerns when considering the network transfer of the model and data to other data centers. The former concern will be discussed soon, while the latter is solved mainly at the distributed DL algorithm level via federated learning <cit.>, which is not a focus of this survey.

∙ How to efficiently allocate resources for LLM training on a heterogeneous computational power network? Tackling heterogeneous resources in a computational power network, the resource allocator needs global knowledge about the resource capacity, pricing, and other specifications of all data centers <cit.>. The computational power network should also monitor real-time resource usage status and workload profiles within each data center. While leveraging both global and local resource information of the computational power network, the resource allocator considers user-specific and workload-specific requirements, such as the geographical preference, price and completion-time constraints, the cost of transferring the training model and dataset, and training performance estimates. Based on these factors, the resource allocator can distribute black-boxed resources efficiently for LLM training within the computational power network with heterogeneous resources.

∙ How crucial is pipeline parallelism for LLM training? Pipeline parallelism is essential for LLM training. As indicated in <cit.>, by diminishing the impact of the communication volume and worker idle time during pipeline flushes, heuristic pipeline parallelism proves effective in practice with trillion-scale LLMs on more than 3,000 GPUs. In contrast to layer-slicing parallelism, multiple-layer-slicing pipeline parallelism only communicates end-of-layer activations and gradients, which can be 300 times smaller in communication volume in a 2.2-billion-parameter example <cit.>. It is a common practice to use what is known as 3D parallelism <cit.> for LLM training, which combines data, pipeline, and layer-slicing parallelisms, to maximize the training throughput.

∙ How crucial is fault-tolerant scheduling for LLM training?
Given the involvement of a large number of workers in prolonged training sessions for LLMs, ensuring fault tolerance is of utmost importance for resilient scheduling. Frequent failures in devices or networks can potentially block the training process, degrade the convergence performance, and necessitate redundant restarting of failed tasks and pipelines. SWARM parallelism <cit.> takes the dynamic membership of unstable workers into account for fault-tolerant pipeline scheduling. This dynamic fault-tolerant pipeline scheduling allows rerouting a task from a disconnected worker to other workers and ensures continuous task execution in case of worker failure in the pipeline. According to Oobleck <cit.>, a failed pipeline can be recovered swiftly by using pipeline replicas and templates. We can instantiate some logically equivalent pipeline replicas, which possess replicated model states. Additionally, we define pipeline templates, which include information about the number of workers and stages in the pipeline, as well as the mapping of stages to GPUs. Once a pipeline failure occurs, a new pipeline can be restored based on the pipeline template and replicas instantly.

§ CONCLUSION AND OUTLOOK
§.§ Conclusion
With the explosive increase in the volume of data, models, and resources, efficient framework strategies, including resource allocation and workload scheduling, are crucial for distributed DL. This survey systematically investigates up-to-date efficient resource allocation and workload scheduling framework strategies for large-scale distributed DL. The discussion covers topics focusing on various resource types, scheduling granularity levels, and performance goals during the training and inference processes of distributed DL. We highlight the critical challenges for each topic and introduce the corresponding solutions. To illustrate the practical application of these framework strategies in real scenarios, we use a case study on distributed LLM training, typically with tens of billions of parameters on hundreds of GPUs.

§.§ Outlook
With the emergence of LLMs trained on ultra-large datasets, an increasing number of ultra-large GPU data centers are in use or under construction. Distributed DL framework strategies targeting large-scale settings with ultra-large data, models, and clusters are deemed a future research trend in this domain. Compared to traditional distributed DL, large-scale distributed DL has new characteristics and poses new challenges for resource allocation and workload scheduling. We discuss these characteristics and challenges pertaining to the large scale as a hint of future research directions.

§.§.§ Multi-data center collaborative learning
Many large technology corporations and research organizations are constructing computational power networks consisting of multiple geographically distributed GPU data centers. Promoting new computing paradigms for contemporary large-scale distributed DL, the computational power network enables the sharing and coordination of ultra-large computational resources across multiple data centers for large workloads. However, compared to distributed DL within a single data center, the computational power network case presents various resource allocation and workload scheduling challenges, including higher resource heterogeneity, higher communication overhead, and tighter requirements for fault tolerance and data security.
Overcoming these challenges requires scalable algorithms that work efficiently in large-scale environments with various constraints.

§.§.§ Resource and workload heterogeneity
With the frequent upgrades of hardware devices for distributed DL, data centers commonly have heterogeneous computational and communication resources with various capacities in storage, computation speed, network bandwidth and latency, and energy and pricing costs. Distributed DL workloads also exhibit heterogeneity in various aspects, such as dataset distribution, model complexity, distributed training parallelism modes, model synchronization mechanisms, and training dynamics. A promising research direction is dynamic resource allocation to adjust resources based on resource availability across heterogeneous environments. Another is adaptive workload scheduling, which matches the dynamic nature of changing workloads during prolonged training and inference processes to continually optimize performance. Solving these scheduling problems with resource and workload heterogeneity also requires efficient optimization methods, such as DRL.

§.§.§ Pipeline execution for large-model training
Pipelining has been applied to overlap various computational and communication workloads in large-model training. However, the optimization of pipeline execution for various training parallelism modes of large models has yet to be sufficiently explored. An example is related to adaptive pipeline scheduling. During the long-term execution of large-model training with dynamic workloads, a pipeline execution plan that initially balanced workloads can lead to significant workload skew among the workers. Fine-grained adaptive pipeline scheduling that dynamically adjusts the granularity of pipeline stages and rebalances workloads can reduce pipeline stall throughout the training process. Another example is related to hierarchical pipeline scheduling. To isolate the negative influence of pipeline stall, the global and multi-level local workload schedulers can use multiple pipelines hierarchically within a computing node, within a rack, within a data center, and across data centers.

§.§.§ Resilient distributed DL
Failures in devices and tasks always accompany distributed frameworks, and their influence is non-negligible in large-scale environments. Resilient distributed DL framework strategies that can tolerate various failures become important for large-scale resource allocation and workload scheduling. When optimizing distributed DL framework strategies, the solution should consider GPU failures, network disruptions, and storage issues that can interrupt the distributed DL process and degrade performance. Resource allocation strategies could proactively allocate redundant resources based on device-failure considerations to tolerate failures. Workload scheduling strategies could replicate datasets and tasks to tolerate task failures or conduct checkpoints to reduce recovery overhead.

§.§.§ Orchestration with distributed DL algorithms
By being aware of the mechanisms of distributed DL algorithms, distributed DL framework strategies can be optimized for and orchestrated with them. For example, the workload scheduler can accurately estimate the communication overhead and effectively optimize the scheduling solution by orchestrating lossless or lossy compression technologies for gradient compression and model synchronization mechanisms for data-parallel training and federated learning.
The resource allocator can promptly and dynamically adjust corresponding resources for distributed DL workloads by being aware of the adaptive policies of distributed DL algorithms. Distributed DL algorithms can also be jointly optimized with distributed DL framework strategies for their best configurations and adaptive policies.

§.§.§ Orchestration with distributed DL infrastructures
Modern distributed DL infrastructures usually apply virtualization technologies and programmable network devices to extend resource capacities. On the one hand, virtualization technologies change resource capacities and pricing costs of computational and communication devices. On the other hand, programmable network devices, such as programmable switches, extend the capability of network devices by integrating limited computational power. They often work to optimize distributed DL-aware network traffic, such as in-network aggregation for distributed gradients. These infrastructure technologies allow framework strategies to be optimized for various performance goals other than throughput, such as cost efficiency, performance isolation, and network congestion avoidance. Resource allocation strategies can leverage virtualization technologies to extend the allocation capability elastically, and workload scheduling strategies should consider virtualization performance and in-network aggregation when determining an optimal scheduling solution.
http://arxiv.org/abs/2406.08572v1
20240612181937
LLM-assisted Concept Discovery: Automatically Identifying and Explaining Neuron Functions
[ "Nhat Hoang-Xuan", "Minh Vu", "My T. Thai" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT
Providing textual concept-based explanations for neurons in deep neural networks (DNNs) is of importance in understanding how a DNN model works. Prior works have associated concepts with neurons based on examples of concepts or a pre-defined set of concepts, thus limiting possible explanations to what the user expects, especially in discovering new concepts. Furthermore, defining the set of concepts requires manual work from the user, either by directly specifying them or collecting examples. To overcome these limitations, we propose to leverage multimodal large language models for automatic and open-ended concept discovery. We show that, without a restricted set of pre-defined concepts, our method gives rise to novel interpretable concepts that are more faithful to the model's behavior. To quantify this, we validate each concept by generating examples and counterexamples and evaluating the neuron's response on this new set of images. Collectively, our method can discover concepts and simultaneously validate them, providing a credible automated tool to explain deep neural networks.

§ INTRODUCTION
As large deep neural networks become more prevalent in everyday life, so does the need to understand their decision-making process to ensure their trustworthiness. Careful analysis of deep neural networks (DNNs), by looking at individual neurons and their combinations, has yielded valuable insights into how DNNs function <cit.>. For example, in the GPT-2 model, a specific set of neurons has been discovered to perform the natural language task of indirect object identification <cit.>. In the vision domain, some neurons have been found to detect curves <cit.>. However, these analyses require extensive human labor in examining the neurons and coming up with possible explanations. Concept-based explanations seek to characterize the model's global behavior via concepts, which are related to how humans reason <cit.>. These concepts serve to provide an intuition on the common characteristics of the inputs that the network recognizes. They can be more intuitive and require less time to inspect than individual inputs. In providing concept-based explanations for neurons in DNNs, we seek to characterize each neuron by a textual concept that represents a common trait of the inputs activating this neuron. Moreover, we aim to do this automatically, reducing the need for human intervention in defining the concepts and validating the generated output. Recently, Large Language Models (LLMs) have been leveraged to automatically generate explanations for other language models <cit.>. LLMs are further utilized to score the resulting explanations, creating a complete framework. It is natural to pose the question of whether a textual concept-based explanation method with the same degree of automation can be realized in the vision domain. In fact, this work demonstrates that, by utilizing LLMs, we can overcome the limitations of existing works <cit.> in terms of the explainable concepts' space and the explaining language. Those limitations come from the fact that the existing explaining algorithms strongly depend on human annotations or curation of a large image-text dataset. In particular, we propose a method to automatically discover concepts for neural representations.
Our method probes the neuron with an image-only dataset, identifies an interpretable concept among the highly activating images, and explains it using a Multimodal Large Language Model (MLLM). Our method is training-free, does not require examples of a concept, and can recognize concepts in an open-ended manner. Beyond concept discovery, we propose a procedure to evaluate a proposed concept independently of the probing dataset. Our score compares the activation of a neuron on examples possessing the concept and non-examples containing the concept's co-hyponyms. This serves to quantify the ability of the neuron to tell the concept apart from other semantically similar concepts. Intuitively, given a concept produced by our concept discovery method, these co-hyponyms represent the concepts one would test the neuron on to ensure the specificity of the proposed concept; therefore, our method helps with automating this process. To verify the effectiveness of our method, we apply our approach in examining various pre-trained models and demonstrate how improved explanations can help with scaling up the analysis of neural networks. The rest of this paper is organized as follows. Section <ref> discusses the related work and some preliminaries. Section <ref> describes our concept discovery method. Section <ref> shows how we compute our validation score. We conduct experiments to show the effectiveness of our method in Section <ref>, and finally Section <ref> presents our conclusion.

§ PRELIMINARY AND RELATED WORK
Visual neuron-concept association Network Dissection <cit.> proposes to label neurons with textual concepts by computing the similarity between the output feature map and the output of a segmentation model. Since then, many methods <cit.> have been developed to associate concepts with neurons using different criteria. Since they only focus on matching the concepts, they usually require a pre-defined set of concepts as input or default to simple alternatives (e.g., the 20,000 most common English words <cit.>). As such, these methods rely on the user anticipating the possible concepts, or their vocabulary will be severely limited. Because the association methods only work when comparing different concepts, in our work, we propose an example-based causality criterion to evaluate a single concept that can also be used for neuron-concept association.
Automatic vision concept discovery Unlike concept association, where concepts are provided, concept discovery methods need to come up with a concept from the vocabulary. FALCON <cit.> addresses the challenge of a limited concept set by extracting terms from a large image-text dataset like LAION-400M <cit.>. While they benefit from the variety of concepts, they can only output words and phrases present in the dataset, greatly limiting the flexibility of their output. On the other hand, MILAN <cit.> is the first generative method that can generalize to different domains and tasks. However, the method requires a specific dataset of images with mask-annotated concepts, posing challenges in obtaining such annotated data. Our method exploits the inherent capabilities of MLLMs and uses them for concept discovery, eliminating the need for annotated data. As MLLMs improve across domains, our method will directly inherit those improvements, while previous works will require additional collection of data.
Multimodal large language models MLLMs are similar to LLMs in the sense that they both accept and yield text; however, MLLMs can additionally accept an image as context. MLLMs can connect information present in images and text, allowing them to answer questions such as “What does this image describe?" Available MLLMs generally belong to one of two categories: open-source MLLMs and proprietary MLLMs. While open-source MLLMs allow the fine-tuning of MLLMs for a specific task, proprietary MLLMs are often much larger, more capable, and easier to integrate, only requiring an external API call. We elect to use a proprietary MLLM to ensure the highest quality for the task of concept discovery. We compare with open-source MLLMs in ablation tests in the supplementary material.

§ LLM-ASSISTED CONCEPT DISCOVERY
In this section, we first describe our Concept Discovery problem in Section <ref>. Since it is infeasible to directly solve that problem, we introduce the LLM-assisted Concept Discovery algorithm that, given a neuron, automatically generates a concept c that the neuron captures. The algorithm consists of 3 main steps, which are illustrated in Fig. <ref>. Sect. <ref> describes the first step, selecting a set of inputs that highly activate the neuron, called the exemplar representation E_f(μ). To ensure the algorithm returns a single interpretable concept, the second step (Sect. <ref>) eliminates inputs with low cosine similarity from E_f(μ) and returns a more filtered set. Given that, the final step of the algorithm utilizes an MLLM to generate a concept explaining the examined neuron (Sect. <ref>).

§.§ Problem formulation
We consider a neural network Φ:𝒳→𝒴, where 𝒳 and 𝒴 denote the input and output domains, respectively. Given an arbitrary neuron in Φ, we refer to it by its activation function f: 𝒳→ℝ. Given a dataset D⊂𝒳^* and a concept c, we define D_c and D_c̅ as the sets of D's elements that contain and do not contain c, respectively: D_c := {x∈ D | 𝒪_c(x) = 1 }, D_c̅ := {x∈ D | 𝒪_c(x) = 0 }, where 𝒪_c(x) is an oracle determining whether or not the concept c occurs in x, i.e., 𝒪_c(x) = 1 iff c appears in x. Given a neuron or, equivalently, its activation f, the goal of this work is to find a descriptive concept c that the neuron captures: we say a neuron f captures a concept c if f can differentiate the elements of D_c from the elements of D_c̅. More formally, we aim to generate the concept that maximizes: c^* = arg max_c ℙ[f(X_1) > f(X_2) | X_1 ∈ D_c, X_2 ∈ D_c̅]. In other words, we aim to find the concept c such that the activation of a random input X_1 sampled from D_c has a high probability of being greater than that of an X_2 sampled from D_c̅. The intuition for this objective is that, for a solution c of (<ref>), we can expect f(x) for an x ∈ D_c to be generally greater than f(x) for x ∈ D_c̅. This implies that we can directly use f(x) to differentiate D_c from D_c̅, i.e., f indeed captures the concept c. There are several challenges to solving (<ref>) directly. First, there is no trivial way to search, iterate, and/or optimize over the solution's space, i.e., the space of concepts, especially when it might not even be a metric space. Secondly, even if we can construct a metric space on the set of concepts, computing the actual value of (<ref>) is not trivial. The reasons are the absence of the oracle 𝒪 in practice and the complex behavior of f, which prevent efficient computation of the probability.
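For intuition, if the two sets were available, the probability in (<ref>) could be estimated empirically by comparing activations across all pairs, as in the minimal sketch below; the activation values are made up for illustration.

def win_probability(acts_with_concept, acts_without_concept):
    # Fraction of pairs (x1, x2), with x1 containing the concept and x2 not,
    # for which the neuron activates more strongly on x1 (an AUC-style estimate).
    wins = sum(a > b for a in acts_with_concept for b in acts_without_concept)
    return wins / (len(acts_with_concept) * len(acts_without_concept))

# Made-up activations of one neuron on images with / without a candidate concept.
print(win_probability([4.1, 3.7, 5.0, 2.9], [1.2, 0.4, 2.8, 1.9]))  # 0.9375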
§.§ Exemplar representation
The first step of the algorithm is to generate a set of exemplar representations <cit.>, which is given as: E_f(μ) := {x∈𝒳 | f(x) ≥μ}, where μ is a threshold parameter. Intuitively, E_f(μ) is the set of inputs that highly activate the neuron, i.e., the inputs whose activation at the neuron f is not lower than μ. Although the exemplar representations E_f(μ) may not fully characterize the neuron, previous studies have demonstrated that meaningful information can still be extracted from this set alone. For example, <cit.> utilize it to identify the neurons that trigger certain class predictions or generate specific objects. <cit.> employ these sets to forecast output sequences for newly observed inputs. Additionally, <cit.> and <cit.> use them to pinpoint adversarial vulnerabilities. Following previous research, the first step of our proposed concept discovery algorithm also represents the examined neuron as the set of exemplar representations. In particular, we use a single dataset D_probe consisting only of images. For all neurons we wish to explain in a model, we record the neuron activations on this dataset and extract the exemplar representation from the top activating images of this dataset. In the next sections, we explain how we utilize an MLLM to extract the desired concept from the exemplar representation.

§.§ Filtering the exemplar representation for interpretable concept identification
Given an exemplar representation, we wish to extract a concept that is common to this set of images. However, it is natural to pose the question of whether such a concept always exists. Neurons are known to be polysemantic, whereby a single neuron can respond to multiple unrelated concepts <cit.>. This polysemantic character of a neuron is expected to appear and influence the exemplar representations, making it challenging to identify a single concept explaining the neuron. Secondly, when the neuron exhibits polysemy, the MLLM might fail to generate a meaningful output, whose reason is discussed later in Section <ref>. To address those issues, prior to extraction, the second step of our algorithm involves the selection of a subset of E_f(μ) that promotes the discovery of a single interpretable concept. Specifically, we adopt the observation of <cit.> that members of E_f(μ) with high average cosine similarity are more likely to correspond to a single interpretable semantic concept. As such, we consider the problem of determining this subset as a problem of finding a subgraph with minimal diameter. Let h denote the CLIP vision encoder. We construct a weighted complete graph G = (V, E) where V ≡ E_f(μ), with a distance function d: E →ℝ such that d(u, v) = ‖h(u) - h(v)‖_2. The M images to be kept are those in the following solution: S^*(G) = arg min_{S ⊂ V, |S|=M} max{d(u, v) | u,v ∈ S}. This problem is reducible to the clique decision problem, hence it is NP-complete. However, since M and |V| are small, the solution can be found in a short time with binary search and an efficient clique-finding routine. This subset is then used to prompt the MLLM, as further described in Section <ref>.

§.§ MLLM as concept proposer
Directly extracting the concept c from the filtered exemplar representation is still non-trivial due to the modality gap. This problem can be viewed as a summarization task, as we wish to concisely describe the common feature of a set of images with only a short phrase.
However, this multi-image summarization task is less explored compared to its natural-language counterpart, multi-document summarization, due to the scarcity of related datasets and models <cit.>. MILAN <cit.> tackled the initial challenge by designing a specific dataset for neuron explanations and training the explanation model. However, adapting the method to another domain requires the collection of new data with specialized labels, which can be costly and/or even infeasible in many applications. In contrast, our work leverages the recent advancement of MLLMs and utilizes one as a concept proposer. We utilize an API-based MLLM (GPT-4V), as it is more capable than open-source alternatives. Note that, since we have no access to the intermediate representations of the inputs nor their gradients, the task needs to be fully specified by the inputs alone. To this end, we pose the problem as a Visual Question Answering (VQA) task where we give the MLLM images and ask for a common visual concept. The MLLM that we use can take multiple images; however, its ability to comprehend dozens of images is not well-studied. Furthermore, doing so would incur tremendous API costs, reducing the utility of our method. To make our method adaptable to the majority of MLLMs, which can take only one image, and to reduce API costs, our algorithm transforms E_f(μ) into a single image while preserving the needed information. This is achieved by downsampling the images and arranging them in a grid-like pattern to form a single image, as shown in Figure <ref>. As shown by previous work <cit.>, MLLMs are capable of understanding this kind of “condensed" image. Despite the MLLM's great capabilities, we observed that it can produce unwanted outputs. Particularly, it can produce concepts that are too abstract or unhelpful for the task (for example, “this image features a variety of objects in different settings"). Since we cannot “train" the model, we try to guide the model towards desired outputs by putting additional information in the prompt. For LLMs, the common method is to perform in-context learning, where examples of inputs and answers are presented in the prompt so the LLM can “learn" from them. In our case, doing so necessitates multiple image inputs, which would dramatically raise the API costs. We opt for a simpler alternative: we collect unwanted outputs during initial trials and use them as examples of bad answers in the final prompt. This strategy reduces those unwanted concepts to some extent, while keeping the input length and costs reasonable. Another challenge in utilizing MLLMs is the presence of built-in safety mechanisms. While general-purpose MLLMs possess the necessary general knowledge for automatic novel concept discovery, they are also designed to reject certain types of prompts <cit.>. In the case of the MLLM deployed in our method (GPT-4V), the prompt characteristics leading to model refusal have not been explicitly provided. However, through our trials, we observed that GPT-4V tends to refuse instances in which the exemplar representation does not appear to contain a single identifiable common concept. Consequently, we hypothesize that the model rejects responses when it lacks confidence, possibly as a measure to mitigate misinformation. This motivates us to have an additional processing step before prompting the MLLM, which is discussed in the section below.

§ CONCEPT VALIDATION
Beyond concept discovery, in this paper, we propose a score to validate the discovered concept.
Our evaluation takes in a neuron and its proposed concept c, and returns a score which can be used to quantify whether the neuron's activation is consistently higher for inputs containing c. The overview of our evaluation is depicted in Fig. <ref>. The following sections are organized as follows. In Section <ref>, we formulate our scoring objective and relate it to our previous goal of concept discovery in (<ref>). Section <ref> describes our proposed idea of finding harder non-examples to better evaluate the proposed concept. Section <ref> and Section <ref> discuss how our evaluation can be realized.

§.§ Objective formulation
Since the objective of concept discovery is (<ref>), it is intuitive to use the following score to evaluate our solution: s(c) = ℙ[f(X_1) > f(X_2) | X_1 ∈ D_c, X_2 ∈ D_c̅]. As mentioned previously in Section <ref>, it is non-trivial to compute this score due to the absence of the oracle 𝒪. Previous approaches <cit.> overcome this issue by relying on the availability of concept labels along with the dataset, which may not be the case in many scenarios. Even if we could approximate the oracle 𝒪, finding a comprehensive dataset D such that D_c ≠∅ for every concept c learned by a neural network is non-trivial. To address this issue, we propose a generative approach to obtain sets of examples and non-examples, E_c and E_c̅, as substitutes for D_c and D_c̅ in the evaluation (<ref>). Given E_c and E_c̅, we compute the following score: s(c) = ( ∑_{x_1 ∈ E_c} ∑_{x_2 ∈ E_c̅} 1[f(x_1) > f(x_2)] ) / ( |E_c| · |E_c̅| ). The intuition is the same as discussed in Section <ref>: when we say a neuron activates on concept c, we expect the value of f(x) for x ∈ E_c to be higher than f(x) for x ∈ E_c̅. This score is closely related to the Area Under the Receiver Operating Characteristic curve (AUC), a metric used to evaluate binary classifiers; however, the two differ due to our definition of E_c and E_c̅. Our validation process consists of four steps. Given a concept c, the first step is to generate the set of co-hyponyms of c, whose definition we will give in the next sub-section. Next, we generate captions of images that contain c and its co-hyponyms. Then, the captions are used to generate the sets of examples and non-examples E_c and E_c̅. Finally, those sets are used to compute the score in Equation (<ref>).

§.§.§ Definitions
Before we further describe how we generate E_c and E_c̅, we state some definitions that we will make use of extensively in the later sections.
Hypernym A hypernym of a concept is another concept that has a broader meaning. For example, 𝚊𝚗𝚒𝚖𝚊𝚕 is a hypernym of 𝚍𝚘𝚐 since a dog is a type of animal.
Hyponym A hyponym of a concept is another concept that has a more specific meaning. For example, 𝚍𝚘𝚐 is a hyponym of 𝚊𝚗𝚒𝚖𝚊𝚕.
Co-hyponym Co-hyponyms of a concept are concepts that share the same hypernym with that concept. For example, 𝚙𝚎𝚖𝚋𝚛𝚘𝚔𝚎 is a co-hyponym of 𝚐𝚘𝚕𝚍𝚎𝚗 𝚛𝚎𝚝𝚛𝚒𝚎𝚟𝚎𝚛 as they both are hyponyms of the concept 𝚍𝚘𝚐.

§.§ Sampling hard non-examples
Simply sampling random images to obtain the negative examples E_c̅ will not give us a good evaluation score s(c), as random images might not contain enough concepts to distinguish the evaluated concept c from other semantically similar concepts. For example, let's consider the scenario where the true concept c=𝚐𝚘𝚕𝚍𝚎𝚗 𝚛𝚎𝚝𝚛𝚒𝚎𝚟𝚎𝚛, the candidate concept c' = 𝚙𝚎𝚖𝚋𝚛𝚘𝚔𝚎, and the set E_c̅ contains images of other non-dog animals. Then, both s(c) and s(c') will likely be high since both concepts will result in a low activation for all samples in E_c̅.
The reason is that the members of E_c̅ are both non-𝚐𝚘𝚕𝚍𝚎𝚗 𝚛𝚎𝚝𝚛𝚒𝚎𝚟𝚎𝚛 and non-𝚙𝚎𝚖𝚋𝚛𝚘𝚔𝚎 as they are all non-dog. As a result, a more careful sampling strategy to obtain E_c̅ is needed to make a higher score more meaningful. In particular, we wish to generate E_c̅ in a way that our score is high only when the neuron activates on that specific concept. To this end, our evaluation uses a concept's co-hyponyms to generate E_c̅. The intuition is illustrated in the previous scenario: the neuron not activating on 𝚙𝚎𝚖𝚋𝚛𝚘𝚔𝚎 is stronger evidence for c = 𝚐𝚘𝚕𝚍𝚎𝚗 𝚛𝚎𝚝𝚛𝚒𝚎𝚟𝚎𝚛 than the neuron not activating on random images, given the similarity between 𝚙𝚎𝚖𝚋𝚛𝚘𝚔𝚎 and 𝚐𝚘𝚕𝚍𝚎𝚗 𝚛𝚎𝚝𝚛𝚒𝚎𝚟𝚎𝚛 (they are both breeds of dogs).

§.§ Finding co-hyponyms with an LLM
For the first step of our validation process, given a concept c, we wish to find a set of its co-hyponyms to serve as strong non-examples for validation. While the task of finding co-hyponyms suggests the usage of a lexical database with hyponym relations built in, such as WordNet <cit.>, such operations are only possible for single concepts. For instance, humans can intuitively think of non-examples of “red sedans" as “blue sedans," yet such concepts are absent in WordNet. Instead, we leverage the flexibility of an LLM to generate co-hyponyms for all concepts. This reduces the need for ad-hoc methods and post-processing and enhances automation, which suits the goal of our method. To use an LLM for the task of finding co-hyponyms, we use a two-step process, inspired by Chain-of-Thought prompting <cit.>. First, given a concept c, we ask the LLM to find a hypernym of it. Then, in the same prompt, we ask it to generate concepts that share the same (previously generated) hypernym with c. In this setting, Chain-of-Thought prompting can improve performance by breaking down the complex problem into two smaller tasks. It also enables us to partially understand how the LLM comes up with the answer. There are two common failures the LLM can make. First, it can give a hypernym that is broader than what we want. For example, when asked for a hypernym of “red sedans", the LLM tends to answer “cars", instead of “sedans of a particular color". Second, it might come up with a co-hyponym that is not entirely exclusive of the original concept. For instance, while it can correctly come up with the hypernym “shapes" for the concept “lines", it might use “rectangles" as a co-hyponym. These behaviors can be attributed to the inherent limitations of LLMs having a limited understanding of concepts, compared to humans <cit.>. Nevertheless, we alleviate these issues by using examples to trigger in-context learning <cit.>.

§.§ From concepts to images
Now that we have the concept to be evaluated and its co-hyponyms, we wish to generate a set of examples and non-examples. We utilize a text-to-image diffusion model for this task. The diffusion model takes in a description of an image, which we will call a caption, and generates an image matching that caption. As this model expects a descriptive caption rather than a single word, we need to first convert the concepts to captions. For each concept, we want its captions to describe images containing that concept, but not its co-hyponyms. Likewise, we do not want the captions corresponding to the co-hyponyms to unintentionally contain the original concept. To this end, we generate captions by considering pairs of concept and co-hyponym. For each pair, we explicitly ask the LLM to generate a pair of captions that contain one concept but not the other, as sketched below.
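A minimal sketch of this pairing step follows; the prompt wording and the ask_llm helper are hypothetical placeholders rather than the exact prompt, which is provided in the supplementary material.

def caption_pairs(concept, co_hyponyms, ask_llm, pairs_per_co_hyponym=2):
    # For each (concept, co-hyponym) pair, request caption pairs in which each
    # caption contains exactly one of the two concepts.
    captions_with, captions_without = [], []
    for other in co_hyponyms:
        for _ in range(pairs_per_co_hyponym):
            prompt = (
                f"Write two image captions. Caption A must depict '{concept}' "
                f"and must not mention '{other}'. Caption B must depict '{other}' "
                f"and must not mention '{concept}'. Return one caption per line."
            )
            caption_a, caption_b = ask_llm(prompt)  # hypothetical LLM wrapper
            captions_with.append(caption_a)     # examples of the concept
            captions_without.append(caption_b)  # hard non-examples (co-hyponyms)
    return captions_with, captions_without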
Finally, the captions are passed to the diffusion model for image generation.

§ EXPERIMENTS
We conduct experiments to show the effectiveness of our method, comparing it to other baselines in terms of explanation quality. Section <ref> provides details on our method's implementation and the experiment setting. Section <ref> describes our experiment on ImageNet pre-trained models. In Section <ref>, we apply our method to the popular CLIP model and show the effectiveness of both concept discovery and concept validation. Due to the page constraint, we put more experiments, including ablation tests, more qualitative evaluations, a user study, and experiments on different layers, in the supplementary materials. The exact prompts used for the MLLM and the LLM will also be provided in the supplementary material.

§.§ Implementation
In this section, we provide the implementation details of our method, including the specific model choice and the parameters used. In the subsequent experiments, the configurations are as follows unless otherwise noted. The source code will be made available. For concept discovery, we use GPT-4V <cit.> as the MLLM. To extract the exemplar representations, we let μ be equal to the 50-th highest activation for each neuron, and sample M=36 images using (<ref>). The vision encoder h is CLIP ViT-B/32 <cit.>. For concept validation, we utilize LLaMA 2 13B <cit.> as our LLM, both in co-hyponym generation and caption generation. To turn captions into images, we use Stable Diffusion XL Turbo <cit.> with =4 for enhanced prompt alignment. For each concept, we generate 5 co-hyponyms; then, we generate 2 pairs of captions for each pair of concept and co-hyponym. Finally, for each caption, we generate 5 images, resulting in |E_c̅| = |E_c| = 5 × 2 × 5 = 50.

§.§ Accuracy of explanations
It is especially challenging to evaluate the accuracy of the generated explanations as we do not have the groundtruth concept in most cases. A common approach is evaluating on neurons whose expected behavior is given by the task, such as the final layer of classification models <cit.>. In this experiment, we compute explanations for the first 100 neurons of the final layer of ResNet50. We use the ImageNet-1K validation set as D_probe. We compare our results with FALCON <cit.>, which we produce using their official code. Note that we do not compare with MILAN for this experiment because MILAN relies on thresholding the feature maps to obtain the exemplar representation, which the fully-connected layer we are explaining does not possess. Tables <ref>, <ref>, and <ref> demonstrate our results. Note that we were able to interpret all but one out of the first 100 neurons, while FALCON was only able to recognize 11. From Table <ref>, it can be seen that our method produces the exact ImageNet class, a superclass of it, or a related concept. Table 3 compares our descriptions with FALCON's top-3 words, in cases where they produced an explanation. While they were able to generate the exact concept in some cases (unit 70), in general, our explanations are more consistently correct.

§.§ Analyzing CLIP vision encoder
To show that our method can discover more insightful concepts on practical DNNs, we experiment with explaining the final layer of the backbone of a CLIP-ResNet50 <cit.>. To be precise, we generate explanations for the first 512 of the total 2048 neurons of the last layer (prior to the attention pooling layer) of the backbone. We use a 1M subset of LAION-400M <cit.> as D_probe.
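The exemplar-extraction step used throughout these experiments (recording activations over D_probe and keeping the images at or above the k-th highest activation) can be sketched as follows; the array shapes and values are illustrative assumptions, not the actual probing pipeline.

import numpy as np

def exemplar_indices(activations, k=50):
    # Keep the probing images whose activation is at least the k-th highest,
    # i.e., E_f(mu) with mu set to the k-th highest recorded activation.
    mu = np.sort(activations)[-k]
    return np.flatnonzero(activations >= mu)

# Made-up activations of one neuron over a small probing set.
acts = np.random.default_rng(0).normal(size=10_000)
print(len(exemplar_indices(acts, k=50)))  # 50 (barring ties)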
In this experiment, we set μ to the 100-th highest activation to account for the larger probing dataset. We compare our results with MILAN <cit.>. For FALCON, their method generally only explains a small portion of neurons that they deem interpretable. Furthermore, this ratio depends on the choice of the threshold parameter. In contrast, our method and MILAN both aim to generate an explanation for an arbitrary neuron. As such, FALCON requires a different evaluation, which we provide in the supplementary material.

§.§.§ Evaluating explanation variety
We analyze the generated explanations on a corpus level to show that our method generates more detailed concepts. In particular, we consider all explanations of each method as a corpus. Figure <ref> illustrates the word-level analysis for each corpus. It counts the number of occurrences of each word in a corpus and plots the top 50 of them, as well as some other corpus-level statistics. Overall, we can see that the number of unique words (abbr. vocab) and the number of words that appear only once (abbr. hapax) are significantly higher than those of MILAN. This suggests that our method produces more specific concepts that are tailored to each neuron. Looking at the body of the distribution, MILAN labels neurons with generic concepts more frequently (“text": 175+ usages versus our 50, “people": 50+ vs. fewer than 30). Observing that MILAN labels a significant 168 out of 512 neurons as “text", we seek to compare those with our own explanations. In contrast with MILAN, we were able to extract a variety of different concepts. Fig. <ref> shows some examples of the results. In many cases, we discover the specific word on which the neuron activates. In other cases, we find the neuron activates on text coming from a particular subject, as in the “video game" neuron. The results demonstrate that our method can automatically produce more specific concepts that provide more insights on the functionality of neurons.

§.§.§ Explanation scores
We demonstrate how the evaluation methodology proposed in Section <ref> works in practice. First, we show an example in Figure <ref> with the concept to be evaluated, the co-hyponyms, the captions, and the set of examples and non-examples. Note that this is only a portion of the sets; we generate 100 images in total for both examples and non-examples combined, as described in Section <ref>. We can observe that the exclusivity of the concept and its co-hyponyms was successfully transmitted to the captions and then to the images. We plot the distribution of the score for each neuron in Figure <ref>. The distribution suggests that our concepts better represent the characteristics of the inputs that cause the neuron to fire strongly. Examples in Fig. <ref> support this. We further evaluate our results with more qualitative examples and a user study, which we provide in the supplementary materials.

§ CONCLUSION
Previous methods for concept-based neuron explanations suffer from limited output language and require a curated dataset that defines the concepts. Our concept discovery method addresses this issue, leading to interpretable concepts that are more insightful and distinctive. To evaluate the generated concepts without any groundtruth, our evaluation method provides a meaningful score to help users judge the quality of the result. Future work can seek to use this score as a signal to create a feedback loop, where the discovery method improves on its answer automatically.
http://arxiv.org/abs/2406.08797v1
20240613042518
Joint Hybrid Transceiver and Reflection Matrix Design for RIS-Aided mmWave MIMO Cognitive Radio Systems
[ "Jitendra Singh", "Suraj Srivastava", "Surya P. Yadav", "Aditya K. Jagannatham", "Lajos Hanzo" ]
eess.SP
[ "eess.SP" ]
J. Singh, S. P. Yadav, and A. K. Jagannatham are with the Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur, UP 208016, India (e-mail: jitend@iitk.ac.in; sprakashy21@iitk.ac.in; adityaj@iitk.ac.in). S. Srivastava is with the Department of Electrical Engineering, Indian Institute of Technology Jodhpur, Jodhpur, Rajasthan 342030, India (email: surajsri@iitj.ac.in). L. Hanzo is with the School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K. (e-mail: lh@ecs.soton.ac.uk). Joint Hybrid Transceiver and Reflection Matrix Design for RIS-Aided mmWave MIMO Cognitive Radio Systems Jitendra Singh, Student Member, IEEE, Suraj Srivastava, Senior Member, IEEE, Surya P. Yadav, Student Member, IEEE, Aditya K. Jagannatham, Senior Member, IEEE, and Lajos Hanzo, Life Fellow, IEEE June 17, 2024 ========================================================================================================================================================================================================= § ABSTRACT In this work, a reconfigurable intelligent surface (RIS)-aided millimeter wave (mmWave) multiple-input multiple-output (MIMO) cognitive radio (CR) downlink operating in the underlay mode is investigated. The cognitive base station (CBS) communicates with multiple secondary users (SUs), each having multiple RF chains in the presence of a primary user (PU). We conceive a joint hybrid transmit precoder (TPC), receiver combiner (RC), and RIS reflection matrix (RM) design, which maximizes the sum spectral efficiency (SE) of the secondary system while maintaining the interference induced at the PU below a specified threshold. To this end, we formulate the sum-SE maximization problem considering the total transmit power (TP), the interference power (IP), and the non-convex unity modulus constraints of the RF TPC, RF RC, and RM. To solve this highly non-convex problem, we propose a two-stage hybrid transceiver design in conjunction with a novel block coordinate descent (BCD)-successive Riemannian conjugate gradient (SRCG) algorithm. We initially decompose the RF TPC, RC, and RM optimization problem into a series of sub-problems and subsequently design pairs of RF TPC and RC vectors, followed by successively optimizing the elements of the RM using the iterative BCD-SRCG algorithm. Furthermore, based on the effective baseband (BB) channel, the BB TPC and BB RC are designed using the proposed direct singular value decomposition (D-SVD) and projection based SVD (P-SVD) methods. Subsequently, the proportional water-filling solution is proposed for optimizing the power, which maximizes the weighted sum-SE of the system. Finally, simulation results are provided to compare our proposed schemes to several benchmarks and quantify the impact of other parameters on the sum-SE of the system. mmWave, cognitive radio, RIS, hybrid beamforming, Riemannian conjugate gradient. Joint Hybrid Transceiver and Reflection Matrix Design for RIS-Aided mmWave MIMO Cognitive Radio Systems Jitendra Singh, Student Member, IEEE, Suraj Srivastava, Senior Member, IEEE, Surya P. Yadav, Student Member, IEEE, Aditya K. 
Jagannatham, Senior Member, IEEE, and Lajos Hanzo, Life Fellow, IEEE June 17, 2024 ========================================================================================================================================================================================================= § INTRODUCTION The growing need for high data rates has spurred the development of new technologies, such as 6G wireless communication networks. Given the high bandwidth requirements of these networks, the mmWave band spaning the frequency band of 30-300 GHz is eminently suitable for next-generation wireless systems <cit.>. However, in comparison to the conventional sub-6 GHz bands, the mmWave band suffers from severe path, loss as well as penetration, and absorption losses <cit.>. To overcome these losses, high gain multiple-input-multiple-output (MIMO) schemes have been recommended for mmWave systems. Although the resultant large antenna arrays are suitable for mmWave MIMO systems, the signals in the high frequency regime are highly susceptible to blockages, which can adversely affect mmWave MIMO systems. In this context, reconfigurable intelligent surface (RIS) technology <cit.> can play a crucial role by providing an alternative path for communication. RISs are made of low-cost reflecting elements and power efficient passive beamforming can be carried out by harnessing them. Furthermore, due to the highly directive nature of mmWave technology, the interference between different wireless networks operating in the same frequency band is reduced due to their highly directional beams <cit.>. This presents an excellent opportunity for mmWave MIMO systems to be used in cognitive radio (CR) systems. In such systems <cit.>, the secondary users (SUs) opportunistically harness the same frequency band as the primary users (PUs) without significantly affecting the PU's communication. This motivates us to study the impact of RIS on mmWave MIMO CR systems, which can potentially maximize the efficiency of RIS-aided mmWave MIMO systems. Many researchers have studied the active transmit precoder (TPC) and passive RIS reflection matrix (RM) design in RIS-aided mmWave MIMO systems. The related literature survey is discussed in the next subsection. §.§ Literature review CR is a revolutionary technology that provides high spectrum utilization for wireless communications by allowing the SUs to access the radio spectrum or share the unused spectrum of PUs without degrading the quality of service of the PUs <cit.>. The authors of <cit.> have designed an underlay spectrum sharing scheme for CR networks, where the SUs communicate with the cognitive base station (CBS) with controlled power, which does not affect the quality of service of the PUs. Moreover, the authors of <cit.> consider RIS-aided MIMO CR systems. Specifically, Tian et al. <cit.> summarized the potential of RIS-aided spectrum sharing systems and discussed the diverse practical use cases of these systems in vehicular and UAV communication. The authors of <cit.> proposed joint active beamforming at the CBS and passive beamforming at the RIS for maximizing the weighted sum-rate of the SUs under total transmit power (TP) and interference power (IP) constraints in an RIS-aided MIMO CR system. They have used a block coordinate descent (BCD) algorithm to design the active TPC and passive beamforming based on full channel state information (CSI). Furthermore, to incorporate imperfect CSI in RIS-aided MIMO CR systems, Zhang et al. 
<cit.> proposed joint active and passive beamforming for minimizing total TP at the CBS. Jiang et al.<cit.> consider an underlay RIS-aided MIMO CR system and proposed joint active and passive beamformer designs for maximizing the weighted sum-SE of the SUs under specific TP and IP constraints. They reformulate the resultant non-convex problem as a pair of sub-optimization problems using the weighted minimum mean-square error (WMMSE) criterion and subsequently optimize the TPC and RIS RM using the popular alternating optimization method. Lin et al. <cit.> consider an RIS for spectrum sensing in CR systems, where they proposed a weighted energy detection method operating in the presence of a PU in RIS aided CR networks. Zamanian et al. <cit.> incorporate a vertical beamforming mechanism at the CBS for maximizing the SE in an RIS-aided CR network, where they jointly optimize the active and passive beamformer. They concluded that the SE of the system is maximized, when the tilt angles at the CBS are oriented towards the RIS. Moreover, Dong et al. <cit.> maximize the secrecy rate of the SUs in an RIS-aided multiple input single output (MISO) wiretap channel operating in the underlay mode both under perfect and imperfect CSI of the eavesdropper. Furthermore, Zhang et al. <cit.> consider a symbol level precoder at the CBS to minimize the symbol error rate of an RIS-aided MISO CR system. They use the alternating optimization technique for designing the active and passive beamformers, where a successive convex approximation (SCA) method is adopted for optimizing the RIS RM. Yang et al. <cit.> analyze the outage probability in an RIS-aided CR system, where an RIS has been used for eliminating the interference at the PU caused by the SU. Furthermore, Vu et al. <cit.> investigated an underlay RIS-aided non-orthogonal multiple access (NOMA) CR network, wherein they derived the outage probability of the SUs, the sum throughput and ergodic capacity of the system considering both line of sight (LoS) and non-LoS communication (NLoS) links between the CBS and SUs. However, the authors of <cit.> consider fully-digital beamforming (FDB) at the CBS in RIS-aided CR systems. Due to the large number of antennas in a mmWave MIMO system, these fully digital TPCs are inefficient in mmWave CR systems due to their requirement of a large number of RF chains, which are costly and power thirsty. Therefore, the recently proposed hybrid TPC <cit.> has a higher efficiency in mmWave MIMO CR systems, where the TPC relies on a much lower number of RF chains. Specifically, Wang et al. <cit.> proposed hybrid beamforming (HBF) for a single-user mmWave MIMO system, where the low resolution phase shifters of the RF TPC and RC pair are designed successively for maximizing the SE of the system. Furthermore, the baseband (BB) TPC and RC are obtained based on the effective BB channel for further improving the SE. As a further advance, Zhan et al. <cit.> consider a MU, multi-stream mmWave MIMO system, for which they propose zero-forcing (ZF) and successive interference cancellation (SIC)-based HBF to deal with both the multi-user interference (MUI) and inter-stream interference (ISI). The authors of <cit.> proposed a discrete Fourier transform (DFT)-aided user clustering aided hybrid TPC by considering both partially- and fully-connected architectures in mmWave MIMO systems. 
They also quantified the energy efficiency (EE) for both the proposed architectures and concluded that the partially-connected architecture has a higher EE efficiency. Moreover, Zhang et al. <cit.> proposed an energy-efficient hybrid TCP and RC based on block diagonalization for the MU mmWave MIMO downlink. To improve the EE, the authors have proposed a water-filling solution for optimizing the power, which maximizes the weighted sum-SE of the system. As a further advance, Chen et al. <cit.> proposed low-complexity hybrid TPC schemes based on orthogonal frequency-division multiplexing (OFDM) for wideband mmWave multi-user (MU) MIMO systems. In the context of HBF-aided mmWave MIMO CR systems, Tsinos et al. <cit.> proposed a hybrid TPC and RC design for mmWave MIMO CR systems while considering both TP, IP and unit modulus constraints for their hybrid architecture. This design is based on the alternating direction method of multipliers (ADMM) method considering full CSI at both the CBS and SUs. Moreover, our work in <cit.> relied on limited CSI to design the hybrid TPC/RC of the mmWave MIMO CR downlink, which maximizes the sum-SE of the secondary system. However, the sum-SE metric results in the problem of low user fairness. Since the users having high channel-quality enjoy a high rate, while those having low-quality channels may have rates close to zero. To circumvent this problem, the paper also proposed hybrid TPC and RC designs for maximizing the geometric mean of the SU's rate in <cit.>. Our recent work in <cit.> investigates hybrid TPC/RC designs conceived for a frequency selective mmWave MIMO CR system, while considering practical uniform rectangular planar arrays (URPAs) both at the CBS and the SUs. The authors of <cit.> consider a joint active hybrid TPC design at the transmitter or base station (BS) and passive beamforming at the RIS in an RIS-aided mmWave MIMO system. More specifically, Bahingayi et al. <cit.> consider a RIS-aided single-user mmWave MIMO system and formulate a problem to optimize the RIS RM. They employ singular value decomposition (SVD) of the channel and a heuristic greedy search method for determining the array response vectors that maximize the SE. Furthermore, they solved the RM optimization problem using the Riemannian conjugate gradient (RCG) algorithm. Li et al. <cit.> proposed a joint active hybrid TPC at the BS and a passive beamformer at the RIS for minimizing the total TP at the BS, while considering a quality of service (QoS) constraint for each user in the RIS-aided mmWave MU MIMO downlink. They used the RCG algorithm for handling the constant magnitude constraints on the elements of the RF TPC and RIS RM. Furthermore, Gong et al. <cit.> proposed a joint active hybrid TPC and passive beamformer for an RIS-aided mmWave MU MIMO system to minimize the MSE. They conceived an accelerated RCG algorithm based on the majorization minimization (MM) method for addressing the non-convex unit modulus constraint on the elements of the RF TPC and RIS RM. Niu et al. <cit.> have considered a double RIS-aided MU mmWave MIMO system and proposed joint hybrid TPC and passive beamforming design for maximizing the weighted sum-rate of the system under specific QoS constraints. They used the BCD method to design the BB TPC by employing quadratically constrained quadratic programming (QCQP), while the RIS RM was optimized using a price-mechanism-based RCG algorithm. Pradhan et al. 
<cit.> proposed joint active hybrid TPC designs for employment at the BS and passive beamformer designs at the RIS to minimize the mean squared error (MSE) in RIS-aided mmWave MU MIMO systems. They have leveraged a gradient projection method to deal with the non-convex unit modulus constraints imposed on the elements of the RF TPC and RIS RM. Furthermore, Cheng et al. <cit.> consider a beam-steering codebook to capture the practical implementation of a finite-resolution RF TPC and RM with limited feedback in the RIS-aided mmWave MU MIMO downlink. In their work, the authors have derived an upper bound for the achievable rate imposed by the finite resolution of the codebook and the limited feedback. As a further advance, Hong et al. <cit.> exploited the sparsity of the angular domain in mmWave MIMO channels to jointly design the active hybrid TPC of the BS and the passive beamformer of the RIS for both narrowband and wideband RIS-aided mmWave MIMO systems. Moreover, Chen et al. <cit.> investigated the effect of beam squint in RIS-aided mmWave wideband systems. The authors therein proposed a novel technique for mitigating the beam squint effect via optimization of the passive RM. However, none of the contributions reviewed above have conceived hybrid TPC and RC solutions for RIS-aided mmWave MIMO CR systems. This motivates us to consider the underlay RIS-assisted mmWave MIMO CR downlink. The novel contributions of this work are boldly contrasted to the existing studies in Table <ref> at a glance. The detailed contributions of this paper are discussed next. §.§ Contributions of this work * Explicitly, this is the first paper to analyze the benefits of using an RIS in the mmWave MIMO CR downlink, where a CBS transmits multiple streams to multiple SUs in the presence of a PU. We formulate a sum-SE maximization problem for the given CR system to design the hybrid transceiver and passive RM under the TP, IP, and the non-convex unity modulus constraints on the elements of the RF TPC, RF RCs and RM. The problem formulated is highly non-convex and not tractable due to the non-convex constraints as well as owing to the coupling of variables in the objective function (OF) and constraints. To solve this problem, we transform the problem into a tractable one by employing a two-stage hybrid TPC design approach. Furthermore, we propose a BCD algorithm for the design of the RF TPC, RF RC and RM, alternatively. * For a fixed RM, we decompose the RF TPC and RF RC design problem into a series of sub-problems, where we formulate the optimization problem to design the pair of RF TPC and RF RC vectors. In order to optimize each sub-problem successively, we propose the successive Riemannian conjugate gradient algorithm, where each pair of RF TPC and RF RC vectors are optimized jointly, which is suitable for large-scale optimization. Similarly, for a fixed RF TPC and RF RC, each element of the RM is optimized successively based on the RCG algorithm. * To design the BB TPC and BB RC we propose a pair of methods termed: D-SVD and P-SVD, which are based on the effective BB channel to maximize the sum-SE of the system. The D-SVD method directly uses the SVD of the direct effective BB channel of the SUs, whereas the P-SVD method uses the SVD of the channel projected onto the null-space of the PU's channel. Subsequently, this is followed by design of the BB TPC using the ZF method and proportional optimal power allocation. 
* Finally, simulation results are provided for quantifying the efficiency of the proposed methods for an RIS-aided mmWave MIMO system. §.§ Notation Boldface capital letters, boldface small letters, and normal typeface letters represent matrices, vectors, and scalar quantities, respectively. To denote (i,j)th element of matrix 𝐀, we use the notation 𝐀(i,j); the Hermitian and conjugate transpose of a matrix 𝐀 are denoted by 𝐀^H and 𝐀^*, respectively; ||𝐀||_F denotes the the Frobenius norm of 𝐀, whereas |𝐀| represents its determinant; Tr(𝐀) denotes its trace; ||𝐚||_p represents p-th norm of 𝐚; D(𝐚) denotes a diagonal matrix with vector 𝐚 on its main diagonal; 𝐀⊙𝐁 is the Hadamard product of 𝐀 and 𝐁; ∇ f(𝐚_i) denotes the gradient vector of function f(𝐚) at the point 𝐚_i; the real part of a quantity is denoted by {·}; 𝐈_M denotes an M × M identity matrix; the symmetric complex Gaussian distribution of mean 𝐚 and covariance matrix 𝐀 is represented as CN(𝐚, 𝐀). § RIS-AIDED MMWAVE MU MIMO DOWNLINK CR SYSTEM §.§ System model We consider the underlay RIS-aided mmWave MIMO CR downlink as shown in Fig. <ref>. A CBS equipped with N_t transmit antennas (TAs) and M_t RF chains is transmitting data to M SUs in the presence of a PU with the aid of a RIS. Each SU and PU is equipped with N_r receive antennas (RAs) and M_r RF chains. A fully-connected hybrid TPC is employed at the CBS to transmit MN_s data streams, i.e., N_s data streams to each SU, where we have MN_s≤ M_t and N_s≤ M_r. The signal vector 𝐬=[𝐬^T_1,,𝐬^T_M]^T ∈ℂ^MN_s× 1 is initially precoded by the BB TPC 𝐅_BB=[𝐅_BB,1,, 𝐅_BB,M]∈ℂ^M_t× MN_s followed by the RF TPC 𝐅_RF∈ℂ^M_t× N_t, where 𝐬_m ∈ℂ^N_s × 1 and 𝐅_BB,m∈ℂ^M_t× N_s are the transmitted signal and the BB TPC corresponding to the mth SU. The RIS is deployed on the facade of a building close to the CBS and SUs for substantially suppressing interference at the PU due to the downlink transmission between the CBS and the SUs. In particular, the RIS comprises N reflective elements and it is programmable and reconfigurable via an RIS controller. Let us denote the RIS RM as Φ = 𝒟([ϕ_1, , ϕ_n ]) with ϕ_n = α_n e^jθ_n, where α_n ∈ [0, 1] and θ_n ∈ [0, 2π] are the amplitude and phase shift of the n-th reflective element. Assuming α_n=1 for maximizing the reflection gain of the RIS leads to |Φ(n,n)|=1. The cascaded channel matrix corresponding to the mth SU is given by 𝐇_m=𝐇_IS,mΦ𝐇_CI∈ℂ^N_r× N_t, where 𝐇_CI∈ℂ^N × N_t and 𝐇_IS,m∈ℂ^N_r× N are the channel links spanning from the CBS to the RIS and from the RIS to the mth SU, respectively. Considering a frequency-flat block-fading channel, the signal 𝐲_m ∈ℂ^N_r× 1 received at the SU m is given by 𝐲_m= 𝐇_m 𝐅_ RF𝐅_ BB𝒟(√(𝐩))𝐬 + 𝐧_m = 𝐇_m 𝐅_ RF𝐅_ BB,m𝒟(√(𝐩_m))𝐬_m +∑_n=1, n ≠ m^M𝐇_m𝐅_ RF𝐅_ BB,n𝒟(√(𝐩_n))𝐬_n+ 𝐧_m, where 𝐩 = [𝐩_1,,𝐩_M] ∈ℂ^MN_s× 1 represents the power allocation vector, and 𝐩_m(d) denotes the power assigned to the dth stream of SU m. Furthermore, 𝐧_m ∈ℂ^N_r× 1 is an additive white complex Gaussian noise process with distribution 𝒞𝒩(0, σ^2𝐈). Upon considering a hybrid RC at each SU, the processed received signal 𝐲_m∈ℂ^N_s× 1 at the mth user is given by 𝐲_m= 𝐖_ BB,m^H𝐖_ RF,m^H𝐇_m𝐅_ RF𝐅_ BB,m𝒟(√(𝐩_m))𝐬_m +∑_n=1, n ≠ m^M𝐖_ BB,m^H 𝐖_ RF,m^H𝐇_m𝐅_ RF𝐅_ BB,n𝒟(√(𝐩_n))𝐬_n +𝐖_ BB,m^H𝐖_ RF,m^H 𝐧_m, where 𝐖_ RF,m∈ℂ^N_ r× M_ r and 𝐖_ BB,m∈ℂ^M_ r× N_ s are the RF RC and the BB RC matrices, respectively, of the mth SU. Moreover, the fully-connected hybrid antenna array at both the CBS and each SU ends leads to |𝐅_ RF(i,j)|=1 and |𝐖_ RF,m(i,j)|=1. 
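To make the signal model concrete, the cascaded channel and the hybrid-processed received signal of one SU can be sketched as follows. All matrices below are random placeholders with toy dimensions, the multi-user interference terms and the mmWave channel statistics of the next subsection are omitted, and the snippet only illustrates how the RIS reflection matrix, the analog/digital precoding stages, and the combiner compose.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, N, Mt, Mr, Ns = 16, 4, 32, 4, 2, 2        # toy sizes; the paper uses much larger arrays

# random placeholder links standing in for the mmWave channels of the next subsection
H_CI = (rng.standard_normal((N, Nt)) + 1j * rng.standard_normal((N, Nt))) / np.sqrt(2)
H_IS = (rng.standard_normal((Nr, N)) + 1j * rng.standard_normal((Nr, N))) / np.sqrt(2)

theta = rng.uniform(0, 2 * np.pi, N)
Phi = np.diag(np.exp(1j * theta))                 # unit-modulus RIS reflection matrix
H_m = H_IS @ Phi @ H_CI                           # cascaded CBS -> RIS -> SU channel

# unit-modulus analog stages and unconstrained digital stages (random placeholders)
F_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Mt)))
W_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nr, Mr)))
F_BB = (rng.standard_normal((Mt, Ns)) + 1j * rng.standard_normal((Mt, Ns))) / np.sqrt(2)
W_BB = (rng.standard_normal((Mr, Ns)) + 1j * rng.standard_normal((Mr, Ns))) / np.sqrt(2)

s = (rng.standard_normal(Ns) + 1j * rng.standard_normal(Ns)) / np.sqrt(2)   # data symbols
p = np.ones(Ns)                                   # per-stream powers
n = 0.01 * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))         # noise

y = H_m @ F_RF @ F_BB @ (np.sqrt(p) * s) + n      # received signal at SU m (single-SU case)
y_proc = W_BB.conj().T @ W_RF.conj().T @ y        # output of the hybrid combiner
print(y_proc.shape)                               # (Ns,)
```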
Similarly, the interference received at the PU is given by 𝐲_P= 𝐆𝐅_ RF𝐅_ BB𝒟(√(𝐩))𝐬+𝐧_p, where 𝐆=𝐇_IPΦ𝐇_CI∈ℂ^N_r× N_t is the effective channel matrix of the PU and 𝐇_IP∈ℂ^N_r× N is the channel spanning from the CBS to the PU. §.§ mmWave MIMO channel Throughout this paper, we employ the widely used Saleh-Valenzuela channel model <cit.> of the wireless channel, which includes complex path losses, delays, angle-of-arrivals (AoAs), and angle-of-departures (AoDs). The frequency flat mmWave MIMO channel between two nodes is given by 𝐇_i= ∑_l=1^N^ p_iα_i,l𝐚_ r(ϕ^ r_i,l,θ^ r_i,l)𝐚_ t^H(ϕ^ t_i,l,θ^ t_i,l), where the subscript i∈{{ CI}, { IS,m}, IP} represents the corresponding link, and N^p_i denotes the number of multipath components in 𝐇_i. The quantity α_i,l is the gain of the lth multipath component in 𝐇_i. Furthermore, 𝐚_ t(ϕ_i,l^tθ_i,l^t)∈ℂ^ col(𝐇_i)× 1 denotes the transmit array response vector corresponding to the azimuth and elevation angles of departure (AoDs), namely ϕ_i,l^t, θ_i,l^t, respectively. Similarly, 𝐚_ r(ϕ_i,l^rθ_i,l^r)∈ℂ^ row(𝐇_i)× 1 denotes the receive array response vector corresponding to the azimuth and elevation angles of arrival (AoAs), namely ϕ_i,l^r, θ_i,l^r, respectively. We consider uniform planar arrays (UPAs) at the BS, RIS, and at each UE. As a result, the array response vectors can be written as 𝐚_z (ϕ, θ)= 1/√(N_z) [1, …, e^j 2 π/λ d (o sinϕsinθ +p cosθ),…, e^j 2 π/λ d ((N_z^h-1) sinϕsinθ)+(N_z^v-1) cosθ)]^T, where z∈{ r,t}, d is the antenna spacing or RIS element spacing, which is assumed to be half of the wavelength λ, 0≤ o< N_z^h and 0≤ p< N_z^v, where N_z^h and N_z^v denote the number of horizontal and vertical elements of the UPA in the 2D plane, respectively. §.§ Problem formulation This paper seeks to jointly design the hybrid TPC/RCs { 𝐖_ RF,m, 𝐖_ BB,m}_m=1^M, 𝐅_ RF,𝐅_ BB, RIS RM Φ and the power allocation vector 𝐩 that maximizes sum-SE of the secondary system under TP, IP and the non-convex constant magnitude phase constraints. The corresponding SE of the SU m is given by R_m = log_2(|𝐈_N_ s+Γ_m |), where the matrix Γ_m ∈ℂ^N_ s×N_ s represents the signal to interference plus noise ratio (SINR) power, which is given by Eq. (<ref>) at the top of the next page. Moreover, due to downlink communication between the CBS and SUs in the same frequency band as the PU, the aggregate interference induced at the PU can be written as I_PU = ∑_m=1^M|| G𝐅_ RF𝐅_ BB,m D(√(𝐩_m))||^2_F. Therefore, for a given RIS-aided mmWave MIMO channel, the sum-SE of the downlink CR system can be formulated as 𝒫_1: max_{ 𝐖_ RF,m, 𝐖_ BB,m}_m=1^M,𝐅_ RF,𝐅_ BB,Φ, 𝐩∑_m=1^Mℛ_m 9a s.t. |𝐅_ RF(i,j)| = 1, ∀ i, j, 9b | 𝐖_ RF,m(i,j)| = 1, ∀ i, j, m, 9c |Φ(n,n)| =1, ∀ n, 9d I_PU≤ I_ th,9e 𝐅_RF𝐅_BB𝒟(√(𝐩))^2_F ≤ P_T, 9f where I_th and P_T control the IP at PU and the TP at the CBS. It is important to highlight here that both the CBS and each SU require complete knowledge of the channel matrices 𝐇_m, 𝐇_IP and 𝐇_CI, which is a typical requirement in underlay CR systems <cit.>. Moreover, the CSI required can be readily obtained via the transmission of training symbols followed by employing suitable channel estimation techniques, as discussed in <cit.>. Observe from 𝒫_1 that the non-convex OF (<ref>) and the non-convex unit modulus constraints (<ref>), (<ref>), (<ref>) imposed on the RF TPC, RCs, and RM elements make the problem highly non-convex. Also observe that the TPC, RC matrices and RM are coupled in the OF (<ref>) and IP constraint (<ref>), which makes 𝒫_1 even more challenging to solve. 
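For reference, the channel blocks 𝐇_CI, 𝐇_IS,m and 𝐇_IP entering 𝒫_1 can be generated along the lines of the Saleh-Valenzuela model and the UPA array response above. The sketch below is a simplified stand-in rather than the paper's exact setup: angles are drawn uniformly instead of from Laplacian clusters, and path loss is omitted.

```python
import numpy as np

def upa_response(az, el, nh, nv):
    """UPA array response a(az, el) for an nh x nv planar array with d = lambda/2."""
    o, p = np.meshgrid(np.arange(nh), np.arange(nv), indexing="ij")
    phase = np.pi * (o * np.sin(az) * np.sin(el) + p * np.cos(el))   # (2*pi/lambda)*d = pi
    return np.exp(1j * phase).ravel() / np.sqrt(nh * nv)

def sv_channel(n_rx, n_tx, n_paths=10, rng=None):
    """Narrowband Saleh-Valenzuela channel: sum of n_paths rank-one contributions.

    Angles are drawn uniformly here for brevity (the paper uses Laplacian-distributed
    angles with a 10-degree spread) and path loss is omitted.
    """
    rng = rng or np.random.default_rng(0)
    nrh, nrv = n_rx                       # (horizontal, vertical) sizes of the receive UPA
    nth, ntv = n_tx
    H = np.zeros((nrh * nrv, nth * ntv), dtype=complex)
    for _ in range(n_paths):
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        a_r = upa_response(rng.uniform(0, 2 * np.pi), rng.uniform(0, np.pi), nrh, nrv)
        a_t = upa_response(rng.uniform(0, 2 * np.pi), rng.uniform(0, np.pi), nth, ntv)
        H += alpha * np.outer(a_r, a_t.conj())
    return H * np.sqrt(nrh * nrv * nth * ntv / n_paths)    # normalisation factor gamma_i

H_CI = sv_channel(n_rx=(4, 8), n_tx=(8, 16))   # e.g. a 4x8 RIS panel served by an 8x16 CBS array
print(H_CI.shape)                               # (32, 128)
```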
Therefore, in order to find the solution, we propose a two-stage hybrid transceiver design based on the BCD principle, which is discussed in the next section. § TWO-STAGE HYBRID TRANSCEIVER DESIGN FOR RIS-AIDED MMWAVE MIMO CR DOWNLINK In order to maximize the sum-SE of the system, we decompose the BB TPC 𝐅_BB as 𝐅_BB=𝐅^1_ BB𝐅^2_ BB, where 𝐅^1_ BB = [𝐅^1_ BB,1, ,𝐅^1_ BB,m, , 𝐅^1_ BB,M]∈ℂ^M_ t× MN_ s and 𝐅^2_ BB= [𝐅^2_ BB,1, ,𝐅^2_ BB,m , , 𝐅^2_ BB,M]∈ℂ^N_ s× MN_ s. The key idea behind this decomposition is to design 𝐅_RF and 𝐅^1_BB for jointly maximizing the sum-SE in the first stage while ignoring the MUI. Subsequently, 𝐅^2_BB is designed in the second-stage for mitigating the MUI. Therefore, the updated sum-SE maximization problem can be recast as 𝒫_2: max_{ 𝐖_ RF,m, 𝐖_ BB,m}_m=1^M,𝐅_ RF,𝐅^1_ BB,𝐅^2_ BB,Φ, 𝐩 R_ sum s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). 10 To solve 𝒫_2, we focus first on the joint design of the RF TPC 𝐅_RF, RCs {𝐖_RF,m}_m=1^M and RM Φ based on the BCD method. Next, 𝐅^1_ BB and {𝐖_ BB,m}_m=1^M are determined by maximizing the sum-SE based on the associated effective BB channel. Finally, we compute 𝐅^2_ BB followed by the optimal power allocation vector 𝐩. §.§ Joint RF TPC, RC, and RIS RM design Upon assuming that the MUI can be eliminated in the second-stage of the TPC, one can approximate the rate of the mth SU ℛ_m at high SNR as ℛ_m ≈log_2 ( | 𝐑_m^-1𝐖^H_BB,m𝐖^H_RF,m𝐇_m 𝐅_RF𝐅^1_BB,m𝒟(𝐩_m) ×(𝐅^1_BB,m)^H 𝐅_RF^H 𝐇_m^H 𝐖_RF,m𝐖_BB,m | ), 11 where 𝐑_m=σ^2𝐖_ BB,m^H𝐖_ RF,m^H𝐖_ RF,m𝐖_ BB,m is the effective noise at the mth SU. Note that for a large number of TAs and RAs, the optimal RF TPC and RCs are approximately orthogonal, i.e., 𝐅^H_RF𝐅_RF∝𝐈_M_t and 𝐖^H_RF,m𝐖_RF,m∝𝐈_M_r, ∀ m. Furthermore, the hybrid TPC and RCs approach the optimal fully-digital TPC and RCs, respectively, which obey the approximation (𝐅^1_BB)^H 𝐅^H_RF𝐅_RF𝐅^1_BB≈ I_MN_s and (𝐖_BB,m)^H 𝐖^H_RF,m𝐖_RF,m𝐖_BB,m≈ I_N_s. Following these facts, one can assume that the matrices 𝐅^1_BB,m and 𝐖_BB,m are orthogonal, i.e., (𝐅^1_BB,m)^H 𝐅^1_BB,m∝ I_MN_s and (𝐖_BB,m)^H 𝐖_BB,m∝ I_N_s. Therefore, (<ref>) can be approximated as ℛ_m ≈log_2 ( | 1/σ^2𝐖^H_RF,m𝐇_m 𝐅_RF𝐅^1_BB,m𝒟(𝐩_m) (𝐅^1_BB,m)^H _≈𝒟(𝐪_m) ×𝐅_RF^H 𝐇_m^H 𝐖_RF,m | ) = log_2 ( | 1/σ^2𝐖^H_RF,m𝐇_m 𝐅_RF𝒟(𝐪_m) 𝐅_RF^H 𝐇_m^H 𝐖_RF,m | ) = log_2 ( | 1/σ^2𝐅_RF^H 𝐇_m^H 𝐖_RF,m𝐖^H_RF,m𝐇_m 𝐅_RF𝒟(𝐪_m) | ) = log_2 (|𝒟(𝐪_m)/σ ^2|) +log_2 ( | 𝐅_RF^H 𝐇_m^H 𝐖_RF,m𝐖^H_RF,m𝐇_m 𝐅_RF | ). 12 Let us define the SVD of 𝐇_m = 𝐔_m Σ_m 𝐕_m^H as 𝐇_m = [ 𝐔_m 𝐔_m ][ Σ_m 0; 0 Σ_m ][ 𝐕_m 𝐕_m ]^H, 13 where 𝐔_m contains the first M_r columns of 𝐔_m, Σ_m is comprised of the first M_r singular values of Σ_m and 𝐕_m contains the first M_r columns of 𝐕_m. Upon assuming the effective rank of 𝐇_m to be equal the maximum number of data streams per SU, i.e., M_r, one can approximate 𝐇_m as 𝐇_m ≈𝐇_m=𝐔_m Σ_m 𝐕_m^H. Furthermore, one can write 𝐅_RF as 𝐅_RF=[𝐅_RF,1, , 𝐅_RF,m, , 𝐅_RF,M_r], where 𝐅_RF,m∈ℂ^N_t× M_r. Upon exploiting the above facts, one can rewrite (<ref>) as ℛ_m ≈log_2 (|𝒟(𝐪_m)/σ ^2|) +log_2 ( | 𝐅_RF,m^H 𝐇_m^H 𝐖_RF,m𝐖^H_RF,m𝐇_m 𝐅_RF,m | ) (b)≈log_2 (|𝒟(𝐪_m)/σ ^2|)+2×log_2 ( |𝐖^H_RF,m𝐇_m 𝐅_RF,m | ), 14 where (a) follows since |𝐗𝐘| = |𝐗||𝐘| when 𝐗 and 𝐘 are square matrices. As a result, the joint design of the RF TPC and RCs, and RM is formulated as: 𝒫_3: max_{𝐖_ RF,m,𝐅_ RF,m}_m=1^M, Φ∑_m = 1^Mlog_2 ( |𝐖^H_RF,m𝐇_IS,mΦ𝐇_CI𝐅_RF,m | ) s.t. (<ref>), (<ref>), (<ref>). 15 Note that the problem 𝒫_3 is still intractable due to the non-convex unit modulus constraints. 
Additionally, the RF TPC 𝐅_ RF,m, RF RC 𝐖_ RF,m and RM Φ are coupled in the OF. Hence, in order to solve this challenging problem, we propose the BCD-successive RCG (SRCG) algorithm, where 𝐅_ RF,m, 𝐖_ RF,m, and Φ are designed alternatively by employing the RCG algorithm. As per this procedure, we initially jointly design the RF TPC 𝐅_RF,m and RF RCs 𝐖_RF,m for a fixed RM Φ. Subsequently, we design the RM Φ for the 𝐅_RF and 𝐖_RF,m computed in the previous step. The proposed BCD-SRCG algorithm in described next in detail. §.§.§ Optimization of RF TPC and RC For a fixed RM Φ, we seek to design 𝐖_ RF,m,𝐅_ RF,m based on the known effective channel 𝐇_m. As a result, the RF TPC and RF RC design problem can be formulated as 𝒫_4: max_{𝐖_ RF,m,𝐅_ RF,m}_m=1^M∑_m = 1^Mlog_2 ( |𝐖^H_RF,m𝐇_m 𝐅_RF,m | ) s.t. (<ref>), (<ref>). 16 To further solve the above problem, we develop a novel SRCG algorithm, where we decompose 𝒫_4 into a series of sub-problems. Explicitly, each pair of the RF TPC and RF RC are designed successively by invoking the RCG algorithm. Let us consider 𝐅_RF,mΔ=[𝐟_RF,m,1, , 𝐟_RF,m,M_r] and 𝐖_RF,mΔ=[𝐰_RF,m,1, , 𝐰_RF,m,M_r], where 𝐟_RF,m,l and 𝐰_RF,m,l are the lth columns of 𝐅_RF,m 𝐖_RF,m, respectively. Let us define 𝐅_RF,m,\ l and 𝐖_RF,m,\ l as the matrices that exclude the vectors 𝐟_RF,m,l and 𝐰_RF,m,l from 𝐅_RF,m and 𝐖_RF,m, respectively. Following the steps from (<ref>) to (<ref>) shown at the top of the next page, the OF of (<ref>) can be reformulated as ∑_m = 1^Mlog_2 ( |𝐖^H_RF,m𝐇_m 𝐅_RF,m | ) ≈∑_m = 1^Mlog_2 ( | 𝐖_RF,m,\ l^H 𝐇_m ×𝐅_RF,m,\ l | ) +∑_m = 1^Mlog_2 (|[1+ 𝐰_RF,m,l^H 𝐐_m,l𝐟_RF,m,l] |), 23 where the matrix 𝐐_m,l∈ℂ^N_r× N_t is defined as 𝐐_m,l≜ 𝐔_m(α𝐈_M_r+Σ_m𝐕_m^H 𝐅_RF,m,\ l𝐖_RF,m,\ l^H𝐔_m)^-1 ×Σ_m𝐕_m^H . 24 Observe that when 𝐅_RF,m,\ l and 𝐖_RF,m,\ l are known, the first term of (<ref>) and 𝐐_m,l are rendered constant. As a result, the sub-problem of optimizing of the RF TPC and RC reduces to the equivalent problem 𝒫_5: max_𝐰_RF,m,l,𝐟_RF,m,l∑_m = 1^Mlog_2 ( | [1+ 𝐰_RF,m,l^H 𝐐_m,l𝐟_RF,m,l] | ) 25 s.t. |𝐟_RF,m,l(i)| =1, i=1,…, N_t, 25a |𝐰_RF,m,l(j)| =1, j=1,…, N_r. 25b Furthermore, the OF in the above equation can be upper-bounded using Jensen's inequality as ∑_m = 1^Mlog_2 ( | [1+ 𝐰_RF,m,l^H 𝐐_m,l𝐟_RF,m,l] | ) ≤log_2 ( [1+ ∑_m = 1^M |𝐰_RF,m,l^H 𝐐_m,l𝐟_RF,m,l |] ). 26 We maximize the upper bound of (<ref>), and the corresponding optimization problem is given by 𝒫_6:max_𝐰_RF,m,l,𝐟_RF,m,llog_2 ( [1+ ∑_m = 1^M |𝐰_RF,m,l^H 𝐐_m,l𝐟_RF,m,l |] ) s.t. (<ref>), (<ref>). 27 Observe that the entries of 𝐰_RF,m,l and 𝐟_RF,m,l are subject to unit-modulus constraints. Thus, in order to jointly design each pair of 𝐰_RF,m,l and 𝐟_RF,m,l, we concatenate them as 𝐳_m,l = [𝐰_RF,m,l^H, 𝐟_RF,m,l^H]^H ∈ℂ^(N_r+N_t)× 1 to form a higher-dimensional vector, which is subject to the unit-modulus constraint. As a result, 𝒫_6 can be reformulated as 𝒫_7: max_𝐳_m,l∑_m = 1^M|𝐳_m,l^H 𝐃_m,l𝐳_m,l| s.t. |𝐳_m,l(i)| = 1, ∀ i, m, l, 28 where 𝐃_m,l=[ 𝐈_N_r× N_r; 0_ N_t× N_r ]𝐐_m,l[ 0_N_t× N_r𝐈_N_t× N_t ]∈ℂ^(N_r+N_t) × (N_r+N_t). Note that 𝒫_7 is also non-convex due to the non-convex unit modulus constraint imposed on each element of 𝐳_m,l. To this end, let us define the feasible set 𝒵 for (<ref>) on the complex circle manifold as 𝒵 = {𝐳_m,l∈ℂ^(N_r+N_t)× 1: |𝐳_m,l(i)| = 1, ∀ i }. 29 Therefore, the problem (<ref>) can be recast as max_𝐳_m,lf(𝐳_m,l)=∑_m = 1^M|𝐳_m,l^H 𝐃_m,l𝐳_m,l| s.t. 𝐳_m(i) ∈𝒵, ∀ i, m. 30 Furthermore, the Euclidean gradient of f(𝐳_m,l) is given by ∇ f(𝐳_m,l)=2× [[ 𝐐_m,l𝐟_RF,m,l; 𝐐^H_m,l𝐰_RF,m,l ]]. 
31 The RCG algorithm takes advantage of the Riemannian gradient to evaluate the descent direction, which is defined as the orthogonal projection of ∇ f(𝐳_m,l) onto the tangent space T_𝐳^i_m,l𝒵 of the manifold 𝒵 at the associated point 𝐳^i_m,l. This is mathematically expressed as T_𝐳^i_m,l𝒵 ={𝐳_m,l∈ℂ^(N_r+N_t): {𝐳_m,l⊙ (𝐳^i_m,l)^*}=0_(N_r+N_t)}. 32 Subsequently, the Riemannian gradient at the point 𝐳^i_m,l is obtained as grad  f(𝐳^i_m,l)=∇ f(𝐳^i_m,l) -{∇ f(𝐳^i_m,l) ⊙ (𝐳^i_m,l)^*}⊙𝐳^i_m,l. 33 Similar to the conjugate gradient method of the Euclidean space, the update rule of the search direction in the manifold space is given by η^i+1=-grad  f(𝐳^i+1_m,l)+λ_1 𝒯_𝐳^i_m,l→𝐳^i+1_m,l (η^i), 34 where η^i denotes the search direction at 𝐳^i_m,l, λ_1 is the update parameter choosen as the the Polak-Ribiere parameter <cit.>, and 𝒯_𝐳^i_m→𝐳^i+1_m (η^i) represents the transport operation. Briefly, the transport operation is required because both η^i+1 and η^i are in different tangent spaces and operations such as the sum in (<ref>) cannot be carried out directly. Therefore, the transport operation 𝒯_𝐳^i_m,l→𝐳^i+1_m,l (η^i) proposed in <cit.> is required to map the tangent vector at the previous search direction to its original tangent space of the current point 𝐳^i+1_m,l, which is given by 𝒯_𝐳^i_m,l→𝐳^i+1_m,l (η^i): T_𝐳^i_m,l𝒵↦ T_𝐳^i+1_m,l𝒵: η^i↦η^i-{η^i⊙ (𝐳^i+1_m,l)^*}⊙𝐳^i+1_m,l. 35 Upon determining the search direction η^i+1, the retraction operation Retr_𝐳^i_m(λ _2η_i) of <cit.> is performed for determining the destination on the manifold. Specifically, Retr_𝐳^i_m(λ _2η_i) maps the point on the tangent space T_𝐳^i_m𝒵 to the manifold 𝒵, which is given by Retr_𝐳^i_m,l(λ_2 η^i): T_𝐳^i_m,l𝒵↦ 𝒵: λ_2η^i↦ (𝐳^i_m,l+λ_2η^i)_j/ | (𝐳^i_m,l+λ_2η^i)_j |, 36 where λ_2 is the Armijo backtracking line search step size <cit.> and (𝐳^i_m,l+λ_2η^i)_j denotes the jth entry of (𝐳^i_m,l+λ_2η^i). The key steps of the SRCG algorithm discussed above to solve problem (<ref>) are summarized in Algorithm 1. §.§.§ Optimization of the RM Φ We now focus our attention on the design of the RIS RM Φ for a fixed RF TPC 𝐅_RF,m and RF RC 𝐖_RF,m, which maximize the sum-SE of the SUs. The pertinent problem of designing Φ is given by 𝒫_8: max_Φ∑_m = 1^Mlog_2 ( |𝐖^H_RF,m𝐇_IS,mΦ𝐇_CI𝐅_RF,m | ) s.t. (<ref>). 37 To solve this problem for a fixed 𝐖_RF,m and 𝐅_RF,m, we once again adopt the successive optimization principle, where the problem (<ref>) is decomposed into a series of sub-problems. In each sub-problem, ϕ_n is optimized for fixed values of the other (N-1) elements. Toward this, let us define 𝐑_mΔ= 𝐖^H_RF,m𝐇_IS,m=[𝐫_m,1, , 𝐫_m,N], 38 𝐓_mΔ= 𝐇_CI𝐅_RF,m=[𝐭_m,1, , 𝐭_m,N]^H, 39 where 𝐫_m,n∈ℂ^M_r× 1 is the nth column of 𝐑_m and 𝐭^H_m,n∈ℂ^M_t× 1 is the nth row of 𝐓_m. Thus, the effective BB channel can be written as 𝐖^H_RF,m𝐇_IS,mΦ𝐇_CI𝐅_RF,mΔ=𝐑_m Φ𝐓_m = ∑_n = 1^N ϕ_n𝐫_m,n𝐭^H_m,n. 40 Therefore, the OF of 𝒫_8 can be rewritten as ∑_m = 1^Mlog_2 ( |𝐖^H_RF,m𝐇_IS,mΦ𝐇_CI𝐅_RF | ) =∑_m = 1^Mlog_2 ( |∑_n = 1^N ϕ_n𝐫_m,n𝐭^H_m,n | ). 41 Furthermore, following the steps from (<ref>) to (<ref>) as shown at the top of the next page, (<ref>) can be approximated as ∑_m = 1^Mlog _2(|∑_n = 1^N ϕ_n𝐫_m,n𝐭^H_m,n|) ≈∑_m = 1^Mlog _2(| Δ_m,n| ) + ∑_m = 1^Mlog _2(| 1 + ϕ_nδ_m,n| ), 46 where Δ_m,n=∑_i = 1,i n^N ϕ_i 𝐫_m,i𝐭_m,i^H and δ_m,n=𝐭^H_m,n(α𝐈_M_r + ∑_i = 1,i n^N ϕ_i 𝐫_m,i𝐭_m,i^H )^-1𝐫_m,n. Observe that Δ_m,n is fixed when the other N-1 reflective elements, RF TPC and RC are fixed. 
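A generic sketch of the circle-manifold RCG machinery described above (tangent-space projection, vector transport, and retraction) is given below. It is demonstrated on a toy quadratic objective rather than the exact cost of 𝒫_7, and a plain backtracking rule stands in for the Armijo step-size search; the routine and variable names are illustrative only.

```python
import numpy as np

def rcg_circle(grad_f, f, z0, iters=200, step0=1.0):
    """Riemannian conjugate gradient on the complex circle manifold {z : |z_i| = 1}.

    grad_f returns the Euclidean gradient of the cost f to be MINIMISED.
    The tangent-space projection, vector transport, and retraction below follow
    the steps described in the text; a plain backtracking rule replaces the
    Armijo line search.
    """
    proj = lambda v, z: v - np.real(v * z.conj()) * z        # project onto the tangent space
    retract = lambda z, v: (z + v) / np.abs(z + v)           # map back onto the manifold
    z = z0 / np.abs(z0)
    g = proj(grad_f(z), z)
    d = -g                                                   # initial search direction
    for _ in range(iters):
        step = step0
        while (f(retract(z, step * d)) >
               f(z) + 1e-4 * step * np.real(np.vdot(g, d)) and step > 1e-12):
            step *= 0.5                                      # backtracking for sufficient decrease
        z_new = retract(z, step * d)
        g_new = proj(grad_f(z_new), z_new)
        g_old = proj(g, z_new)                               # transport the previous gradient
        beta = max(0.0, np.real(np.vdot(g_new, g_new - g_old)) /
                        (np.real(np.vdot(g, g)) + 1e-16))    # Polak-Ribiere(+) parameter
        d = -g_new + beta * proj(d, z_new)                   # transported conjugate direction
        z, g = z_new, g_new
    return z

# toy demo: maximise Re(z^H D z) over unit-modulus z (a stand-in for the cost of P_7)
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
cost = lambda z: -float(np.real(np.vdot(z, D @ z)))
grad = lambda z: -(D @ z + D.conj().T @ z)                   # Euclidean (conjugate) gradient of cost
z_opt = rcg_circle(grad, cost, np.exp(1j * rng.uniform(0, 2 * np.pi, 8)))
print("objective value:", round(-cost(z_opt), 3))
```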
Therefore, the designed sub-problem for the optimization of the nth reflective element is given by 𝒫_9: max_ϕ_n ∑_m = 1^Mlog _2(| 1 + ϕ_nδ_m,n| ) s.t. |ϕ_n| = 1,∀ n. 47 Upon defining Ψ_n = 𝒟([δ_1,n,, δ_M,n]) ∈ℂ^M × M, 𝒫_9 can be recast as 𝒫_10: max_ϕ_𝐧 f(ϕ_n) = log _2( | 𝐈_M + ϕ_nΨ_n |) s.t. |ϕ_n| = 1,∀ n. 48 The unit modulus constraint on ϕ_n renders the above problem non-convex. To solve this problem, we once again adopt the above-mentioned RCG algorithm for designing the RF TPC and RC. To this end, the Euclidean gradient of the function f(ϕ_n) is formulated as ∇ f(ϕ_n)=Tr((𝐈_M + ϕ_nΨ_n)^-1Ψ_n ). 49 Therefore, the problem (<ref>) can be efficiently solved again by the RCG algorithm. In summary, the BCD-SRCG algorithm successively designs the lth beamformer pair 𝐟_RF,m,l and 𝐰_RF,m,l of the RF TPC and RF RC jointly for a fixed setting of the RM Φ by solving (<ref>) employing the RCG algorithm. Subsequently, with the RF TPC and RF RC thus computed, each element of Φ is successively optimized according to (<ref>), based on the RCG algorithm. As per the BCD-SRCG algorithm described above, 𝐅_RF, {𝐖_RF}_m=1^M and Φ are alternately designed until convergence is achieved. The key steps of the BCD-SRCG procedure are summarized in Algorithm <ref>. §.§ BB TPC, RCs and optimal power allocation This subsection presents a procedure for the design of the BB TPC 𝐅_BB,m, BB RC 𝐖_BB,m, ∀ m and determines the optimal power allocation 𝐩_m, m=1,,M, which maximizes the sum-SE and minimizes the MUI. This is achieved as per the optimization in 𝒫_1, for fixed RF TPC, RCs and RM. As seen from 𝒫_1, 𝐅_BB,m and 𝐩_m are coupled in the total TP and IP constraints, given by (<ref>) and (<ref>), respectively. Therefore, it is difficult to solve this problem. To compute the solution to this challenging problem, we present a pair of approaches, viz., the direct-SVD (D-SVD) and projected-SVD (P-SVD) techniques, that focus respectively on the spatial multiplexing of the SUs and on the interference mitigation at the PU. Both these approaches are discussed next in detail. §.§.§ D-SVD In this approach, the BB TPC and RCs are designed for maximizing the sum-SE based on the SVD of the channels 𝐇_m,∀ m, while the power allocation is done to meet the TP and IP constraints. Therefore, the optimal fully-digital TPC and RC of the mth SU for the D-SVD method are 𝐕_m and 𝐔_m, respectively. Furthermore, in the first-stage, 𝐅_ RF and 𝐅^1_ BB,m are jointly designed for maximizing the sum-SE of the SUs. Therefore, for a fixed RF TPC 𝐅_RF and RF RC 𝐖_RF,m, the quantities 𝐅^1_BB,m and 𝐖_BB,m, that approach the optimal solution, are given by 𝐅^1,D_BB,m = (𝐅^H_ RF𝐅_ RF)^-1𝐅^H_ RF𝐕^D_m, 50 𝐖^D_BB,m = (𝐖^H_ RF,m𝐖_ RF,m)^-1𝐖^H_ RF,m𝐔^D_m, 51 where we have 𝐕^D_m=𝐕_m and 𝐔^D_m=𝐔_m. Furthermore, to mitigate the MUI, we use the ZF technique to design the precoder 𝐅^2_BB. As per this scheme, the CBS obtains the effective channel matrix of the mth SU as 𝐇^eff,D_m = (𝐖^D_ BB,m)^H𝐖_ RF,m^H𝐇_m𝐅_ RF𝐅^1,D_ BB,m∈ℂ^N_ s× N_ s, ∀ m and stacks them as 𝐇^D = [(𝐇^eff,D_m)^T (𝐇^eff,D_1)^T (𝐇^eff,D_M)^T]^T∈ℂ^MN_ s× N_ s. Subsequently, the BB TPC 𝐅_ BB,2 is formulated as 𝐅^2,D_ BB = ((𝐇^D)^H𝐇^D)^-1(𝐇^D)^H. 52 Finally, the normalized BB TPC corresponding to the mth SU is given by 𝐅^D_ BB,m = 𝐅^1,D_ BB, m𝐅^2,D_ BB, m/||𝐅_RF𝐅^1,D_ BB, m𝐅^2,D_ BB, m||_F. 
53 Furthermore, the resource allocation in a typical mmWave MU scenario can potentially suffer from unfairness due to coverage issues, differences in distance between the CBS and various SUs, as well as the priority of the SUs. Therefore, in order to avoid this, we introduce a weighted sum-SE for the system. The optimal power allocation problem of weighted sum-SE maximization, based on the D-SVD method, can now be formulated as 𝒫_11: max_{𝐩^D_m}_m=1^M∑_m=1^M w^D_mℛ^D_m s.t. (<ref>), (<ref>), 54 where w^D_m, 𝐩^D_m and ℛ^D_m denote the weight, power allocation and rate of the mth SU using the D-SVD method. Following Appendix <ref>, the rate of the mth SU ℛ^D_m, based on the D-SVD approach, can be simplified to the following expression ℛ^D_m ≈log_2(| I_N_ s + 1/σ^2_n(𝐅^2,D_ BB,m)^HΣ^2_m𝐅^2,D_ BB,m D(𝐩^D_m)|). 55 Furthermore, let us now define the matrix Υ^D_m ∈ℂ^N_ s× N_ s as Υ^D_m = (𝐅^2,D_ BB,m)^HΣ^2_m𝐅^2,D_ BB,m, (b)= [ υ^D_m,1‖𝐟^2,D_ BB,m,1‖_2^2 0; ⋱ ; 0 υ^D_m,N_ s‖𝐟^2,D_ BB,m,N_s‖_2^2 ], 56 where υ^D_m,i represents the square of the ith principal diagonal element of the matrix Σ_m and 𝐟^2,D_ BB,m,i denotes the ith column of 𝐅^2,D_ BB,m. Furthermore, the approximation (b) employed in (<ref>) follows by noting that the columns of 𝐅^2,D_ BB,m are asymptotically orthogonal for large antenna arrays <cit.>. Furthermore, for the designed Φ, the IP constraint at the PU, due to the transmission by the CBS, can be formulated as I_PU ≤ I_ th, ∑_m=1^MTr(𝐆𝐅_ RF𝐅^D_ BB,m D(𝐩^D_m)(𝐅^D_ BB,m)^H𝐅_ RF^H𝐆^H) ≤ I_ th, ∑_m=1^MTr( D(𝐩^D_m)(𝐅^D_ BB,m)^H𝐅_ RF^H𝐆^H𝐆𝐅_ RF𝐅^D_ BB,m_𝐙_m) ≤ I_ th, ∑_m=1^M∑_d=1^N_ sp^D_m,dζ_m,d ≤ I_ th, 57 where p^D_m,d and ζ_m,d are the dth diagonal elements of D(𝐩^D_m) and 𝐙_m, respectively. Similarly, the total TP constraint at the CBS can be rewritten as ∑_m=1^M∑_d=1^N_ sp^D_m,dt^D_m,d ≤ P_T, 58 where t^D_m,d is the dth diagonal element of the matrix 𝐓^D_m = (𝐅^D_BB,m)^H𝐅^H_RF𝐅_RF𝐅^D_BB,m. Therefore, the sum-SE maximization of the system based on the D-SVD method is given by 𝒫_12: max_p^D_m,d ∑_m=1^M∑_d=1^N_ sw^D_m,dlog_2( 1 + υ^D_m,d‖𝐟^2,D_ BB,m,d‖_2^2/σ^2 p^D_m,d) 59 s.t.∑_m=1^M∑_d=1^N_ sp^D_m,dζ_m,d≤ I_ th, 59a ∑_m=1^M∑_d=1^N_ sp^D_m,dt^D_m,d≤ P_T, 59b p^D_m,d≥ 0, 59c where w^D_m,d is the weight corresponding to the dth stream of the mth SU. The theorem below quantifies the optimal power p_m,d allocated to the mth SU and its dth stream. The SE of the system given in 𝒫_12 is maximized by p^D_m,d =max{0, 1/λζ_m,d+τ^D t^D_m,d-σ^2/w^D_m,dυ^D_m,d‖𝐟^2,D_ BB,m,d‖_2^2}∀ m, d. 60 Given in Appendix <ref>. The quantities λ and τ^D are the Lagrange multipliers associated with ζ_m,d and t^D_m,d, respectively. §.§.§ P-SVD The P-SVD approach completely avoids interference at the PU due to communication between the CBS and SUs, which can be accomplished by projecting the channels of the SUs into the null space of the PU channel 𝐆. To this end, let us define the SVD of 𝐆 as 𝐆=𝐔_gΣ_g𝐕_g^H. 61 Therefore, after taking the projection, the effective channel of the mth SU as per this procedure is given by 𝐇_m=𝐇_m (𝐈_N_t- 𝐕_g𝐕_g^H). 62 In order to maximize the sum-SE of the system, the optimal TPC and RC can be found using the SVD of 𝐇_m. Let us define the SVD of 𝐇_m as 𝐇_m=𝐔_mΣ_m𝐕_m^H. 
63 Upon considering the optimal fully-digital TPC and RC for the mth SU as 𝐕^P_m and 𝐔^P_m, which comprise the first M_r columns of 𝐕_m and 𝐔_m, respectively, the BB TPC 𝐅^1,P_BB,m and BB RC 𝐖^P_BB,m for the P-SVD method are given by 𝐅^1,P_BB,m= (𝐅^H_ RF𝐅_ RF)^-1𝐅^H_ RF𝐕^P_m, 64 𝐖^P_BB,m= (𝐖^H_ RF𝐖_ RF)^-1𝐖^H_ RF𝐔^P_m. 65 Furthermore, upon employing the ZF technique, 𝐅^2,P_ BB is given by 𝐅^2,P_ BB = ((𝐇^𝐏)^H𝐇^P)^-1(𝐇^P)^H, 66 where 𝐇^P = [(𝐇_1^ eff,P)^T (𝐇_m^ eff,P)^T (𝐇_M^ eff,P)^T]^T∈ℂ^MN_ s× N_ s and 𝐇^eff, P_m = (𝐖^P_ BB,m)^H𝐖_ RF,m^H ×𝐇_m𝐅_ RF𝐅^1,P_ BB,m∈ℂ^N_ s× N_ s. Finally, the normalized BB TPC of the P-SVD method corresponding to the mth SU is given by 𝐅^P_ BB,m = 𝐅^1, P_ BB, m𝐅^2,P_ BB, m/||𝐅_RF𝐅^1, P_ BB, m𝐅^2, P_ BB, m||_F. 67 Therefore, the sum-SE maximization for the RIS-aided mmWave MIMO CR downlink based on the P-SVD method is given by 𝒫_13: max_p^P_m,d ∑_m=1^M∑_d=1^N_ sw^P_m,dlog_2( 1 + υ^P_m,d‖𝐟^2,P_ BB,m,d‖_2^2/σ^2 p^P_m,d) 68 s.t. ∑_m=1^M∑_d=1^N_ sp^P_m,dt^P_m,d≤ P_T, 68a p^P_m,d≥ 0, 68b where w^P_m,d is the weight corresponding to the dth stream of the mth SU, υ^P_m,d denotes the square of the dth element on the principal diagonal of Σ^P_m, 𝐟^2,P_ BB,m,d is the dth column of the matrix 𝐅^2,P_ BB,m and t^P_m,d is the dth diagonal element of the matrix 𝐓^P_m = (𝐅^P_BB,m)^H𝐅^H_RF𝐅_RF𝐅^P_BB,m. Similar to Theorem 1, the sum SE of the system given in 𝒫_13 based on the P-SVD method is maximized by the power allocation p^P_m,d=max{0, 1τ^P t^P_m,d-σ^2w^P_m,dυ^P_m,d‖𝐟^2,P_ BB,m,d‖_2^2}∀ m, d, 69 where τ^P is the Lagrange multiplier associated with t^P_m,d. Note that the proposed two-stage hybrid transceiver design relying on the optimization of the hybrid transceiver and passive RM can also be applied in a wideband scenario by considering MIMO-orthogonal frequency division multiplexing (OFDM) modulation <cit.>. In a MIMO-OFDM system, the BB TPC precedes the inverse fast Fourier transform (IFFT) operation, which is followed by the RF TPC at the transmitter side. On the other hand, the RF RC is succeeded by the fast Fourier transform (FFT) followed by the BB RC at each SU. Consequently, the BCD-SRCG algorithm can be extended to wideband scenarios for optimizing the RF TPC, RC, and passive RM, which are shared by all the subcarriers. The D-SVD and P-SVD methods can also be extended to optimize the BB TPCs, BB RCs, and power allocation for each subcarrier by employing the SVD of the corresponding frequency-selective mmWave MIMO channel. § SIMULATION RESULTS In this section, we present simulation results for the algorithms proposed for jointly designing the hybrid transceiver and passive beamformer of an RIS-aided mmWave MIMO CR downlink. We consider a two-dimensional coordinated system to model the system as shown in Fig. <ref>, where M SUs share the frequency band of a PU in a single cell. The CBS having a UPA structure is equipped with N_t = N_t_x× N_t_y antennas and M_t=MM_r RF chains and located at the origin (0 m, 0 m). Similarly, each SU and PU that have a UPA structure is equipped with N_r = N_r_x× N_r_y antennas and M_r RF chains. The SUs are assumed to be uniformly distributed within a circle centered at (100 m, 0 m) and a radius of 10m, and the PU is situated at (-100 m, 0 m). Furthermore, the RIS is assumed to have N reflective units with a UPA structure of N = N_x× N_y and situated at (d_RISm, 20 m). 
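Before specifying the channel statistics used in the simulations, the projection step that distinguishes the P-SVD design from the D-SVD design can be illustrated with the following minimal sketch. It uses i.i.d. Gaussian placeholder channels, a single SU, and omits the combiner and the second-stage ZF precoder, so the printed leakage values are indicative only; with the hybrid constraint the P-SVD leakage is small but not exactly zero.

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, Nr, Mt, Ns = 32, 4, 8, 2                    # toy sizes; a single SU is shown for brevity

G  = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)  # PU channel
Hm = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)  # SU channel
F_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Mt)))                                  # analog TPC

# P-SVD step: project the SU channel onto the null space of the PU channel
_, _, Vg_h = np.linalg.svd(G)
Vg = Vg_h.conj().T[:, :Nr]                       # right singular vectors of the non-zero modes of G
Hm_proj = Hm @ (np.eye(Nt) - Vg @ Vg.conj().T)

def first_stage_bb(H, F_RF, Ns):
    """First-stage baseband TPC: least-squares fit of F_RF @ F1 to the top right singular vectors."""
    _, _, Vh = np.linalg.svd(H)
    V_opt = Vh.conj().T[:, :Ns]
    return np.linalg.lstsq(F_RF, V_opt, rcond=None)[0]       # (F_RF^H F_RF)^-1 F_RF^H V_opt

for label, H in [("D-SVD (no projection)", Hm), ("P-SVD (projected)    ", Hm_proj)]:
    F1 = first_stage_bb(H, F_RF, Ns)
    F = F_RF @ F1
    F = F / np.linalg.norm(F)                                # transmit-power normalisation
    leakage = np.linalg.norm(G @ F) ** 2                     # interference power induced at the PU
    print(f"{label}: PU leakage = {leakage:.4f}")
```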
For the mmWave MIMO channel 𝐇_i, the coefficients α_i,l are distributed independently, obeying the distribution as CN(0,γ_i^210^-0.1PL(d_i)), ∀ l={1,, N^p_i}, where γ_i=√( row(𝐇_i) col(𝐇_i)/N^p_i) denotes the normalization factor. The quantity PL(d_i) is the path-loss that depends on the distance d_i associated with the corresponding link and it is modeled as <cit.> PL(d_i)[ dB] = α + 10βlog_10(d_i)+ζ, 70 where ζ∈ CN(0,σ_ζ^2). At the carrier frequency of 28 GHz, the parameters of (<ref>) are: α=61.4, β=2, σ_ζ=5.8 dB for LoS <cit.>. Moreover, we set the number of propagation paths to N_i^ p=10, ∀ i, with an angular spread of 10 degrees. The azimuth and elevation angles of departure and arrival follow a Laplacian distribution around the mean angle. The antenna spacing of both the CBS and of each SU is set to half-wavelength, i.e., d_ t=d_ r=λ/2. The noise variance σ^2 at each SU and PU is set to -91 dBm. The simulation results are averaged over 500 independent channel realizations. The SNR is defined as SNR=P_ t/σ^2, and its range is varied from -10 dB to 20 dB to study the performance in both the low- and high-SNR regions. The key simulation parameters are listed in Table <ref>. To demonstrate the efficiency of the proposed algorithms and to reveal some design insights, we compare the performance of the following algorithms when N=16 and 32. * HBF (BCD-SRCG, D-SVD): This is the proposed BCD-SRCG algorithm and D-SVD approach for the joint hybrid transceiver and RIS RM design. * HBF (BCD-SRCG, P-SVD): This is the proposed BCD-SRCG algorithm and P-SVD approach for the joint hybrid transceiver and RIS RM design. * FDB w/o interference: For this scheme, CBS and each SU perform TPC and RC, respectively, using FDB, and the passive beamforming at RIS by employing the RCG approach, followed by power allocation without taking the IP constraint into account. * HBF (Random Phase): The phases of the RM are assumed to be random and distributed uniformly between 0 and 2π, and the hybrid TPC/RC design is performed using the proposed SRCG and D-SVD algorithms. * HBF (white spectrum): The joint hybrid TPC/RC and passive beamforming are performed using the BCD-SRCG algorithm, followed by equal power allocation to all streams of the SUs. We compare the performance by evaluating the achievable sum-SE of the SUs vs several important parameters, which are discussed next. Unless otherwise stated, we consider an 8 × 128 system, where the CBS having N_t = 8 × 16 = 128 antennas and M_t=8 RF chains is communicating with M=4 SUs, each having N_r = 2 × 4 = 8 antennas and M_r=2 RF chains and N={4× 4=16, 4 × 8=32} reflective units, for a fixed IP threshold of Γ=0 dB. §.§.§ Sum-SE versus N In Fig. <ref>, we plot the sum-SE vs. N for fixed SNR=0 dB and Γ=0 dB. As seen from the figure, the sum-SE obtained using all the schemes increases with N due to the higher passive beamforming gain. This demonstrates the advantages of introducing an RIS into mmWave MIMO CR systems. Moreover, the sum-SE of the proposed HBF (BCD-SRCG, D-SVD), HBF (BCD-SRCG, P-SVD) and HBF (white spectrum) schemes approach that of the FDB w/o interference and yield an improved performance in comparison to the HBF (Random Phase) approach. This demonstrates the effectiveness of our proposed joint hybrid TPC/RC and passive beamforming designs. Also, one can observe that at a given SNR and Γ, the HBF (BCD-SRCG, D-SVD) scheme outperforms HBF (BCD-SRCG, P-SVD) at a lower value of N while the latter scheme approaches the former at a higher value of N. 
This is due to the fact that a large N produces a higher passive beamforming gain, which results in higher IP at the PU and limits the performance of the HBF (BCD-SRCG, D-SVD) scheme. However, a large value of N provides better degrees of freedom for nulling the interference, which improves the performance of the HBF (BCD-SRCG, P-SVD) scheme. §.§.§ Sum-SE versus SNR As shown in Fig. <ref>, we compare the sum-SE of the system versus SNR for a fixed IP threshold Γ=0 dB when the number of reflective elements is N={16, 32}. As can be seen from the figure, the sum-SE of the proposed HBF (BCD-SRCG, D-SVD) method approaches that of FBD w/o interference at low SNR and saturates at high SNR. This is because at low SNR regime, the IP constraint is inactive due to the low level of interference induced at the PU, whereas at high SNR, it becomes active due to the increased interference at the PU. Therefore, the system is limited by the quantity Γ at high SNRs. In addition, the sum-SE of the HBF (BCD-SRCG, P-SVD) method increases with the SNR regardless of Γ, since the interference is nulled via the projection method. Also, one can observe that the HBF (BCD-SRCG, P-SVD) method outperforms the HBF (BCD-SRCG, D-SVD) scheme at high SNR due to IP limitations in the latter optimization at high SNR. Furthermore, as expected, the proposed HBF (BCD-SRCG, D-SVD) scheme has a performance edge over the naive HBF (white spectrum) scheme, which shows the effectiveness of the proportional water-filling solution toward optimal power allocation. Furthermore, it can be observed that the system having N = 32 reflecting elements outperforms that with N = 16. This trend is expected due to the higher passive beamforming gain of the former. §.§.§ Sum-SE versus Γ In Fig. <ref>, we plot the sum-SE of the system with respect to the IP threshold Γ for a fixed value of SNR = 0 dB. It can be seen from the figure that the sum-SE of the HBF (BCD-SRCG, D-SVD) method increases with the IP threshold. This is due to the fact that the large value of Γ provides an opportunity for the SUs to transmit at a higher power due to the improved ability of the PU to tolerate the interference. Furthermore, the sum-SEs of the HBF (BCD-SRCG, P-SVD) and the optimal w/o interference schemes are constant with respect to Γ, which shows that these schemes are independent of the IP threshold. However, note that the HBF (BCD-SRCG, D-SVD) scheme has a superior SE in comparison to the HBF (BCD-SRCG, P-SVD) in the higher Γ regime. This is due to the fact that at sufficiently large values of Γ, the IP constraint becomes ineffective owing to the enhanced interference tolerance at the PU. Therefore, at high Γ, the system performance is only limited by the maximum value of the TP P_T. Again, it can be seen that the higher passive beamforming gain of the N = 32 system makes it superior to the N = 16 system. §.§.§ Sum-SE versus N_r Fig. <ref> illustrates the sum-SE of the system versus the number of RAs N_r for the fixed values of SNR=0 dB and Γ=0 dB. As expected, the sum-SE of the proposed schemes increases upon increasing N_r due to the increased multiplexing gain. However, observe that the sum-SE of the HBF (BCD-SRCG, D-SVD) scheme almost saturates at N_r=8. While the performance of the HBF (BCD-SRCG, P-SVD) is poor at lower values of N_r, it increases almost linearly as N_r increases, and it approaches that of the HBF (BCD-SRCG, D-SVD) at N_r=8. 
This is due to the fact that a large value of N_r produces higher antenna gain, which increases the IP at the PU in the HBF (BCD-SRCG, D-SVD) scheme, resulting in saturation of its performance. Moreover, the HBF (BCD-SRCG, P-SVD) scheme is free of the IP constraint. As a result, its performance is not limited by the antenna gain. Moreover, observe that the performance gain of all the systems obtained by increasing the number of RAs at the SUs is significantly higher than that obtained by increasing the number of reflective elements of the RIS in Fig. <ref>. However, note that this improved performance is achieved at the cost of the high energy consumption of the former due to the increased number of active RAs. §.§.§ Sum-SE versus number of SUs M Furthermore, in Fig. <ref>, we plot the sum-SE of SUs vs. the number of SUs M for a fixed SNR=0 dB and Γ=0 dB. As seen, the sum-SE of the system decreases as M increases due to the increment in the MUI and reduction in the power per SU. To compensate these losses, it is advisable to increase the number of RAs in the HBF (BCD-SRCG, D-SVD) scheme as M increases, but not to increase the TP, as it leads to an undesirable increase in the IP at the PU. Moreover, the HBF (BCD-SRCG, P-SVD) scheme is outperformed by the HBF (BCD-SRCG, D-SVD) scheme as M increases due to loss in beamforming gain for ZF. However, it is advisable to increase the TP in the HBF (BCD-SRCG, P-SVD) scheme instead of increasing the number of RAs, because an increment in the power of this scheme does not affect the PU. Furthermore, the system performance is improved by increasing the number of reflective elements from N=16 to N=32, demonstrating that an RIS with a large number of reflective elements has an improved ability to suppress MUI. §.§.§ Sum-SE versus horizontal distance of the RIS Moreover, in Fig. <ref>, we plot the sum-SE of the system vs. the horizontal distance of RIS, denoted by d_RIS, in the range of 10m to 90m, for a fixed values of SNR=0dB and Γ=0dB. As seen from the figure, the sum-SE of the system initially decreases as d_RIS increases, approaching its minimum value, and then subsequently increasing as d_RIS increases. Therefore, it is beneficial to place the RIS within the vicinity of the CBS or the SUs for better performance but not in the vicinity of the PU. Also, one can observe that the HBF (BCD-SRCG, P-SVD) method performs better when the RIS is closer to the CBS as the passive beamforming gain of the RIS does not affect the PU. By contrast, the HBF (BCD-SRCG, D-SVD) method performs better when the RIS is closer to the SUs because the passive gain of the RIS is affected the least when it is far from the PU and closer to the SUs. §.§.§ Sum-SE versus N_t Finally, we examine the achievable sum SE versus the number of TAs N_t to quantify the performance gap arising due to the assumption of near orthogonality of user channels considered in (<ref>) and (<ref>), for a large number of antenna elements. Fig. <ref> plots both the analytical and Monte Carlo simulation based sum SE versus N_t for different numbers of reflective elements, N={16, 32, 64, 128} of the RIS at a fixed value of SNR=0 dB and Γ=0 dB. It can be seen from the figure that as N_t increases, the simulated values for both the HBF (BCD-SRCG, D-SVD) and HBF (BCD-SRCG, P-SVD) schemes approach the corresponding analytical values. 
More specifically, there are only marginal deviations of 1.38 %, 1.26 %, 1.21 % and 1.03 % at N_t=60 for N={16, 32, 64, 128}, which demonstrates the validity of the user orthogonality assumptions in the massive antenna array regime. § CONCLUSION We investigated the ability of the RIS technology to aid multiple SUs in a mmWave MIMO CR system operating in the underlay mode. A two-stage hybrid transceiver design was proposed based on the SRCG-BCD algorithm to jointly design the hybrid TPC/RC and RM, which maximizes the sum-SE of the secondary system, while restricting the IP induced at the PU to a predefined threshold. The proposed approach initially designs a pair of vectors for the RF TPC and RC, and each element of the RM matrix successively. Subsequently, two sub-optimal solutions were proposed to design the BB TPC/RC based on the SVD of the effective BB channel. Furthermore, the proportional water-filling approach was adopted to optimize the power allocation to each stream of each SU for the sake of user fairness. Finally, simulation results were presented, which show the effectiveness of the proposed schemes in RIS-aided mmWave MIMO CR systems. § DERIVATION FOR EQ. (<REF>) For the given hybrid TPC and RC obtained using the D-SVD method, one can express the achievable SE ℛ^D_m of the mth SU as ℛ^D_m = log_2 ( |𝐈_N_s + 𝐑_n^-1𝐖^H_BB,m𝐖^H_RF,m𝐇_m 𝐅_RF𝐅^D_BB,m ×𝒟(𝐩^D_m)(𝐅^D_BB,m)^H 𝐅_RF^H 𝐇_m^H 𝐖_RF,m𝐖_BB,m | ). 71 For a large number of antennas, one can approximate 𝐖^H_BB,m𝐖^H_RF,m𝐖_RF,m𝐖_BB,m≈𝐈_N_s. Thus, ℛ^D_m can be approximated as ℛ^D_m ≈log_2 ( |𝐈_N_s + 1/σ^2(𝐅^D_BB,m)^H 𝐅_RF^H 𝐇_m^H 𝐇_m ×𝐅_RF𝐅^D_BB,m𝒟(𝐩^D_m) | ), 72 = log_2 ( |𝐈_N_s + 1/σ^2(𝐅^2,D_BB,m)^H (𝐅^1,D_BB,m)^H𝐅_RF^H 𝐇_m^H 𝐇_m ×𝐅_RF𝐅^1,D_BB,m𝐅^2,D_BB,m𝒟(𝐩^D_m) | ), 73 ℛ^D_m (c)=log_2(| I_N_ s + 1/σ^2(𝐅^2,D_ BB,m)^HΣ^2_m𝐅^2,D_ BB,m D(𝐩^D_m)|). 74 Approximation (c) follows due to the fact that 𝐅_RF𝐅^1,D_BB,m≈𝐕_m. § PROOF OF THEOREM 1 The equivalent convex optimization problem corresponding to 𝒫_12 is given by 𝒫_14: min_p^D_m,d -∑_m=1^M∑_d=1^N_ sw^D_m,dlog_2( 1 + υ^D_m,d‖𝐟^2,D_ BB,m,d‖_2^2/σ^2 p^D_m,d) s.t. (<ref>), (<ref>), (<ref>). 75 Inspired by the Karush-Kuhn-Tucker (KKT) framework, let us assume λ, τ^D and μ^D_m,d∀ m,d to be the Lagrange multipliers associated with the IP inequality, maximum TP inequality and power causality constraints in 𝒫_14, respectively. Thus, the KKT conditions are given as <cit.> -w^D_m,dυ^D_m,d‖𝐟^2,D_ BB,m,d‖_2^2/σ^2 ( 1 + υ^D_m,d‖𝐟^2,D_ BB,m,d‖_2^2/σ^2 p^D_m,d) + λζ_m,d + τ^D t^D_m,d -μ^D_m,d =0 ∀ m, d, 76 λ(I_ th-∑_m=1^M∑_d=1^N_ sp^D_m,dζ_m,d) =0, 77 τ^D(P_ max-∑_m=1^M∑_d=1^N_ sp^D_m,dt^D_m,d) =0, 78 p^D_m,d≥ 0, μ^D_m,d≥ 0, μ^D_m,dp^D_m,d =0 ∀m, d. 79 From (<ref>), the power profile can be written as p^D_m,d =max{0, 1/λζ_m,d+τ^D t^D_m,d-σ^2/w^D_m,dυ^D_m,d‖𝐟^2,D_ BB,m,d‖_2^2}∀ m, d. 80 Note that the quantities λ and τ^D in (<ref>) can be found using the interior point method so that the KKT conditions are satisfied. IEEEtran
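As a numerical companion to Theorem 1, the following sketch implements the proportional water-filling rule in the special case where the IP constraint is inactive (λ=0), which coincides with the P-SVD allocation; the per-stream gains are toy values and a simple bisection on the multiplier replaces the interior-point step mentioned above.

```python
import numpy as np

def waterfill(gain, weight, t, P_T, sigma2=1.0, iters=60):
    """Proportional water-filling with only the transmit-power constraint active (lambda = 0):
    p_d = max(0, 1/(tau * t_d) - sigma2 / (w_d * g_d)), with tau found by bisection so that
    sum_d p_d * t_d = P_T.  This is the P-SVD allocation; the D-SVD case adds the IP multiplier.
    """
    lo, hi = 1e-9, 1e9
    for _ in range(iters):
        tau = np.sqrt(lo * hi)                               # bisection in the log domain
        p = np.maximum(0.0, 1.0 / (tau * t) - sigma2 / (weight * gain))
        if np.sum(p * t) > P_T:
            lo = tau                                         # power budget exceeded -> raise tau
        else:
            hi = tau
    return p

# toy per-stream effective gains (upsilon_{m,d} * ||f_{BB,m,d}||^2) for 8 streams
rng = np.random.default_rng(3)
gain = rng.uniform(0.1, 2.0, 8)
weight = np.ones(8)                                          # per-stream fairness weights w_{m,d}
t = np.ones(8)                                               # t_{m,d} ~ 1 for near-orthogonal precoders
p = waterfill(gain, weight, t, P_T=10.0)
print(np.round(p, 3), " total TP =", round(float(np.sum(p * t)), 3))
```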
http://arxiv.org/abs/2406.08926v2
20240613085003
Effective affinity for generic currents in nonequilibrium processes
[ "Adarsh Raghu", "Izaak Neri" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
^1Department of Mathematics, King’s College London, Strand, London, WC2R 2LS, UK § ABSTRACT In mesoscopic experiments it is common to observe a single fluctuating current, such as the position of a molecular motor, while the complete set of currents is inaccessible. For such scenarios with partial information we introduce an effective affinity for generic currents in Markov processes. The effective affinity quantifies dissipative and fluctuation properties of fluctuating currents. Notably, the effective affinity multiplied by the current lower bounds the rate of dissipation, and the effective affinity determines first-passage and extreme value statistics of fluctuating currents. In addition, we determine the conditions under which the effective affinity has a stalling force interpretation. To derive these results we introduce a family of martingales associated with generic currents. Effective affinity for generic currents in nonequilibrium processes Adarsh Raghu^1, Izaak Neri^1 June 17, 2024 =================================================================== Introduction. Modern imaging and microscopy techniques can measure the fluctuations of mesoscopic currents in living cells  <cit.>, for example, the motion of cilia <cit.>, molecular motors <cit.>, or the membrane of red blood cells <cit.>. These are nonequilibrium systems and their fluctuations satisfy principles of nonequilibrium and stochastic thermodynamics <cit.>. In general not all system currents are experimentally observable. For example, in molecular motor experiments the motor's position is measurable, yet the currents linked to internal degrees of freedom, such as the chemical state of the motor, are beyond reach <cit.>. Experimental setups are different from theoretical frameworks in nonequilibrium and stochastic thermodynamics, where it is generally assumed that a complete set of currents is available <cit.>. This raises the question of to what extent concepts from nonequilibrium thermodynamics extend to setups with partial information <cit.>, which is referred to as marginal thermodynamics <cit.>. In this Letter, we define an effective affinity a^∗ that extends the affinity concept from nonequilibrium <cit.> and stochastic thermodynamics <cit.>, where it appears as the parameter conjugate to a currents, to setups when a single fluctuating current amongst many is observed. Given a fluctuating current J_t in a Markov process X_t, we define the effective affinity a^∗ through the asymptotic integral fluctuation relation lim_t→∞⟨ e^-a^∗ J_t⟩ = 1 , where ⟨·⟩ is an average over repeated realisations of the process; the Eq. (<ref>) has at most one unique nonzero solution. When the current is the stochastic entropy production <cit.>, then according to the integral fluctuation relation a^∗=1 <cit.>, and in the specific case of edge currents that count the number of transitions along a single edge of a Markov jump process we recover the effective affinity studied in Refs. <cit.>. We also derive a number of physical properties of the effective affinity, which demonstrate that a^∗ quantifies both dissipative and fluctuation properties of J_t. First, using large deviation theory we find that a^∗j≤ṡ, where j = ⟨ J_t⟩/t is the average current associated with the observed current J_t, and where ṡ is the average rate of dissipation. 
The inequality (<ref>) is suggestive of the equality ṡ = ∑_γ∈𝒞a_γj_γ that expresses the rate of dissipation as a sum over the affinities a_γ multiplied by their conjugate, average currents j_γ, and where 𝒞 represents a complete set of currents <cit.>; for Markov jump processes, 𝒞 is the set of fundamental cycles associated with the graph of admissible transitions and j_γ are the corresponding cycle currents <cit.>. Comparing these relations with (<ref>), we conclude that the effective affinity captures a portion of the total dissipation, consistent with a marginal thermodynamics picture <cit.>. Second we show that the effective affinity constrains fluctuations of currents. Let us assume that j>0 so that we can define the infimum value J_ inf = inf_t≥0{J_t:t≥ 0} of J_t. It then holds that the tails of the distribution of J_ inf are exponential with a decay constant a^∗, i.e., p_J_ min(j) ∼ e^a^∗ j, j≤ 0. The extreme value law (<ref>) extends the exponential law for the infimum statistics of entropy production, see Refs. <cit.>, to generic currents. To derive the infimum law (<ref>), we identify a martingale process associated with generic currents J_t. This represents a significant advancement in martingale theory for stochastic thermodynamics <cit.>, as previously martingales were associated with specific currents, namely, the fluctuating entropy production <cit.> and edge currents <cit.>. Martingales are useful for deriving, amongst others, properties of currents at first-passage times. Notably, by combining results from large deviation theory with those from martingale theory, we derive the trade-off relation between speed, uncertainty, and dissipation conjectured in Ref. <cit.>, which applies to first-passage problems of fluctuating currents. We end this Letter by determining the conditions when the equality in (<ref>) is attained and when the effective affinity has a stalling force interpretation. System setup. For simplicity, we focus on Markov jump processes, even though the defined effective affinity also applies to driven diffusions. We consider a time-homogeneous Markov jump process X_t∈𝒳 defined by a 𝐪-matrix <cit.> on a finite set 𝒳. The off-diagonal entries 𝐪_xy denote the rate at which X_t jumps from x to y. The diagonal entries 𝐪_xx = -∑_y∈𝒳∖{x}𝐪_xy denote the exit rates out of the state x. The probability mass function p_t(x) of X_t solves the differential equation ∂_t p_t(x) = ∑_y∈𝒳p_t(y)𝐪_yx . The stationary state p_ ss(x) is the left eigenvector associated with 𝐪. We assume that X_t is ergodic, so that p_ ss is unique and p_ ss(x)>0 <cit.>. Fluctuating integrated currents J_t are time-extensive and time-reversal antisymmetric observables. They can be expressed as a linear combination J_t = ∑_x,y∈𝒳c_x yJ^x y_t, where the edge currents J^x y_t = N^x y_t - N^y x_t are the difference between the number of forward jumps N^x y_t and the number of backward jumps N^y x_t between x and y, and the coefficients c_x y=-c_y x∈ℝ quantify the flow of the transported resource when the process jumps from x to y. Note that the relevant c_xy coefficients span an Euclidean space of dimension |ℰ|, where ℰ is the set of edges of the graph of admissible transitions (those with 𝐪_xy≠ 0). The corresponding average current j takes the expression j = lim_t→∞⟨ J_t⟩/t = ∑_x∈𝒳∑_y∈𝒳∖{x}c_xyj_xy , where j_xy = lim_t→∞⟨ J^xy_t⟩/t = p_ ss(x)𝐪_xy-p_ ss(y)𝐪_yx. Without loss of generality, we assume that j>0. 
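For readers who wish to experiment with these definitions, the following sketch (Python/NumPy; the three-state rates and the choice of coefficients c_xy are arbitrary illustrative values, not taken from the Letter) constructs a q-matrix, computes the stationary distribution p_ss, and evaluates the average current j for a current that counts net jumps across a single edge.

import numpy as np

# Toy three-state generator: off-diagonal entries are the jump rates q_xy,
# and each diagonal entry is minus the exit rate, so that rows sum to zero.
Q = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])
np.fill_diagonal(Q, -Q.sum(axis=1))

def stationary(Q):
    # p_ss is the left null vector of Q, normalised to a probability vector
    evals, evecs = np.linalg.eig(Q.T)
    p = np.real(evecs[:, np.argmin(np.abs(evals))])
    return p / p.sum()

p_ss = stationary(Q)

# Antisymmetric coefficients c_xy for a current that counts net 0 -> 1 jumps
c = np.zeros_like(Q)
c[0, 1], c[1, 0] = 1.0, -1.0

# j = sum over unordered pairs of c_xy * (p_ss(x) q_xy - p_ss(y) q_yx)
j = sum(c[x, y] * (p_ss[x] * Q[x, y] - p_ss[y] * Q[y, x])
        for x in range(3) for y in range(x + 1, 3))
print(p_ss, j)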
The fluctuating entropy production S_t = 1/2∑_x∈𝒳∑_y∈𝒳∖{x} J^x y_t lnp_ ss(x)𝐪_xy/p_ ss(y)𝐪_yx is an example of a current <cit.>, and the average entropy production rate ṡ = 1/2∑_x∈𝒳∑_y∈𝒳∖{x}j_xylnp_ ss(x)𝐪_xy/p_ ss(y)𝐪_yx quantifies the rate of dissipation <cit.>. Definition of the effective affinity. As illustrated in Fig. <ref>, we define the effective affinity a^∗ as the nonzero root of the logarithmic moment generating function λ_J(a), i.e., λ_J(a^∗) = 0, where λ_J(a) = lim_t→∞1/tln⟨ e^-aJ_t⟩. If j = 0, then λ_J(a) has no nonzero root, and therefore we set a^∗=0. Note that this definition is equivalent to Eq. (<ref>). For Markov jump processes on finite sets λ_J(a) exists and is differentiable in a, and therefore by the Gartner-Ellis theorem J_t satisfies a large deviation principle with rate function ℐ_J(j) = max_a(λ_J(a)-aj) <cit.>. Obtaining the effective affinity from the tilted generator. Although it is difficult to determine λ_J(a) directly from its definition (<ref>), we can readily obtain λ_J(a), and thus also the effective affinity, from the eigenvalues of a tilted 𝐪-matrix. Indeed, applying Kolmogorov's backward equation to ⟨ e^-aJ_t⟩, it follows that λ_J(a) is the Perron root (i.e., the eigenvalue with the largest real part) of the matrix <cit.> 𝐪̃_xy(a) = {[ 𝐪_xye^-a c_x y, if x≠ y,; -∑_z∈𝒳∖{x}𝐪_xz, if x=y, ]. and a^∗ is the value of a for which the Perron root vanishes. Having defined a^∗, we continue with deriving the main properties (<ref>) and (<ref>) of the effective affinity. Lower bound on dissipation. The bound (<ref>) follows from using the effective affinity definition λ_J(a^∗) = 0 in the lower bound λ_J(a) ≥ a j(-1+aj/ṡ), which follows from the theory of level 2.5 large deviations <cit.>. Indeed, the parabola on the right-hand side of (<ref>) has the root a=ṡ/j and according to the inequality (<ref>) this root is larger or equal than a^∗. Martingale associated to J. To derive the law (<ref>) for the infima statistics of currents, we construct a martingale process M_t associated with J_t. A martingale is a stochastic process that satisfies ⟨ M_t|X^s_0 ⟩ = M_s for all s∈ [0,t], where ⟨·|X^s_0⟩ denotes the expectation conditioned on the trajectory of X_t in the interval [0,s]. The process M_t = ϕ_a^∗(X_t)e^-a^∗ J_t is a martingale, where ϕ_a(x) is the right eigenvector of 𝐪̃(a) associated with its Perron root. The martingality of M_t follows from the fact that ϕ_a^∗(x)e^-a^∗ j is a harmonic function of the generator of the joint process (X_t,J_t) (see Supplementary Material <cit.>). The martingale M_t extends previous results on martingales in stochastic thermodynamics, see Ref. <cit.> for a review. Notably, for J_t=S_t we get M_t= exp(-S_t), as a^∗=1 and ϕ=1, and thus we recover that the exponentiated negative entropy production is a martingale <cit.>, and for J_t=J^x→ y_t we find the martingale of Ref. <cit.>. Splitting probability and extreme value statistics. We derive the infimum law (<ref>) from the martingale M_t. First, we introduce a related first-passage problem, namely, we consider the first time that the current J_t exits the interval (-ℓ_-,ℓ_+), i.e., T = min{t≥ 0: J_t ∉ (-ℓ_-,ℓ_+)}. This is the gambler's ruin problem, as introduced by Pascal in the 17th century <cit.>, applied to a fluctuating current J_t <cit.>. The splitting probability p_-, corresponding with the probability of ruin, is the probability that J_T is smaller or equal than -ℓ_-. 
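A minimal numerical sketch of the tilted-generator construction described above is given below (Python with NumPy/SciPy; the generator and coefficients repeat the arbitrary toy values used earlier, and the bracketing heuristic for locating the nonzero root is an illustrative assumption): it evaluates λ_J(a) as the Perron root of the tilted matrix and solves λ_J(a*) = 0 for the effective affinity.

import numpy as np
from scipy.optimize import brentq

Q = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])
np.fill_diagonal(Q, -Q.sum(axis=1))
c = np.zeros_like(Q)
c[0, 1], c[1, 0] = 1.0, -1.0           # same toy edge current as before

def lam(a):
    # lambda_J(a): Perron root of the tilted matrix, off-diagonal q_xy * exp(-a c_xy),
    # with the diagonal exit rates left unchanged
    Qt = Q * np.exp(-a * c)
    np.fill_diagonal(Qt, np.diag(Q))
    return np.max(np.linalg.eigvals(Qt).real)

def effective_affinity(lam, a_max=100.0):
    # lam(0) = 0 and lam'(0) = -j < 0 when j > 0, so double a until the second zero is bracketed;
    # this heuristic assumes lam is negative just to the right of a = 0
    a = 1e-3
    while lam(a) < 0.0 and a < a_max:
        a *= 2.0
    if a >= a_max:
        return 0.0                      # no nonzero root found: a* = 0 by convention
    return brentq(lam, a / 2.0, a)

a_star = effective_affinity(lam)
print(a_star)                            # about 4.16 (= ln 64, the cycle affinity) for these rates

Multiplying a_star by the average current from the previous sketch and comparing with the entropy production rate provides a direct numerical check of the dissipation bound.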
Using Doob's optional stopping theorem, ⟨ M_T⟩ = ⟨ M_0⟩ <cit.> we find that (see Supplementary Material <cit.>) lim_ℓ_- →∞|ln p_-|/ℓ_- = a^∗. Hence, the effective affinity is the exponential decay constant of p_-. In the limit of ℓ_+→∞, the splitting probability p_- is the cumulative distribution of J_ inf, and thus we recover the infimum law (<ref>). First-passage ratio bound. The inequality (<ref>) combined with the martingale result (<ref>) implies a trade off relation between dissipation (ṡ), speed (⟨ T⟩), and uncertainty |ln p_-|. Indeed, using Eq. (<ref>) and Wald's equality for fluctuating currents <cit.>, j = ℓ_+/⟨ T⟩(1+o_ℓ_ min(1)), in the inequality (<ref>) yields ṡ≥ℓ_+/ℓ_-|ln p_-|/⟨ T⟩(1+o_ℓ_ min(1)), where o_ℓ_ min(1) represents an arbitrary function that decays to zero when ℓ_ min = min{ℓ_-,ℓ_+ } diverges. Notice that the present derivation of (<ref>) with M_t is clearer than the previous derivation in Ref. <cit.> that uses scaling arguments. In addition, we have shown that the right-hand side of (<ref>) equals ja^∗, which is an improvement on previous work that estimated the right-hand side via simulations at finite thresholds <cit.>. Equivalence with the thermodynamic uncertainty relations for Gaussian fluctuations. The inequality (<ref>) is reminiscent of the thermodynamic uncertainty relations <cit.>, but with the important difference that uncertainty is quantified with the splitting probability p_- instead of the variance of T or J_t. We show that the inequalities (<ref>) and (<ref>) are equivalent with the thermodynamic uncertainty relations when the probability distribution of J_t converges asymptotically with time to a Gaussian distribution. Indeed, it holds then that λ_J(a) = a(aσ^2/2-j), where σ is the standard deviation of J_t/t, and thus a^∗ = 2j/σ^2. Substituting this value into (<ref>) yields the thermodynamic uncertainty relation ṡ≥ 2(j/σ)^2 <cit.>. Cycle equivalence classes. Having identified the physical properties of a^∗, we partition now the set of fluctuating currents J_t into equivalence classes that have the same effective affinity a^∗. To this purpose, we rely on Schnakenberg's network theory  <cit.> that decomposes currents j into linear combinations of the form j = ∑_γ∈𝒞c_γj_γ, where 𝒞 is a set of fundamental cycles of the graph of admissible transitions, j_γ are the corresponding cycle currents, and c_γ are the cycle coefficients obtained from summing up the c_x,y coefficients along the cycle γ (see Supplemental Material <cit.>). The cycle coefficients c_γ partition the space ℝ^|ℰ| of coefficients c_x,y into Euclidean spaces of dimension |𝒳|-1 that contain all coefficients c_x,y that yield the same cycle coefficients c_γ. We call the corresponding set a cycle equivalence class, and we denote the cycle equivalence class associated with J_t by [J_t]. Importantly, currents that belong to the same cycle equivalence class have the same cumulant generating function λ_J(a), and hence also the same effective affinity a^∗ (see Supplementary Material <cit.>). Tightness of the effective affinity bound. Next, we characterise a set of currents J_t that attain the equality in Eq. (<ref>). Consider fluctuating currents that are proportional to the stochastic entropy production S_t, i.e., J_t = kS_t with k∈ℝ. Such currents satisfy the Gallavotti-Cohen symmetry <cit.> λ_J(a) = λ_J(k^-1 -a), and thus a^∗ = 1/k. In addition, since j = kṡ the equalities in (<ref>) and (<ref>) are attained. 
Thus, currents that belong to the cycle equivalence classes [kS_t] with k∈ℝ are precise currents in the sense of attaining the equality (<ref>). Toy model with two fundamental cycles. Do there exist currents that do not belong to one of the cycle equivalence classes [kS_t], but nevertheless attain the equality in (<ref>)? We settle this question for models with two fundamental cycles through a a numerical case study of the four state model illustrated in Fig. <ref>(a). The four state model has two fundamental cycles denoted by γ=1 and γ=2, and hence the cycle equivalence classes of this model are determined by two coefficients c_1 and c_2, such that j = c_1j_1 + c_2j_2. We normalise c_1 and c_2 such that j=1. For this choice of normalisation, the dependence of a^∗ on the c_xy-coefficients that define J_t is fully determined by one parameter, namely the angle α between the vectors (c_1,c_2) and (a_1,a_2), where the latter are the cycle coefficients of [S_t/ṡ]; see Fig. <ref>(b) for an illustration. Figure <ref>(c) plots a^∗ as a function of α for randomly generated transition rates 𝐪. Note that according to the inequality (<ref>), a^∗/ṡ≤ 1, and the equality a^∗= ṡ is attained when α=0 or α=π, corresponding with fluctuating currents that belong to [S_t/ṡ] or [-S_t/ṡ], respectively. We observe that the effective affinity is a monotonously decreasing/increasing function between the value of α with vanishing average current (where a^∗=0) and the end point values α=0 and α=π. Hence, for the four state model the equality in the trade-off relations (<ref>) and (<ref>) is only attained for currents that belong to the cycle equivalence classes [kS_t] with k∈ℝ. Stalling force interpretation of the effective affinity. In the special case of edge currents, i.e., J = J^xy, the effective affinity equals <cit.>, a^∗ = lnp^(x,y)_ ss(x)𝐪_xy/p^(x,y)_ ss(y) 𝐪_yx where p^(x,y)_ ss(x) is the probability mass function of a modified Markov jump process for which the transition rates along the (x,y)-edge have been set to zero (see Supplementary Material <cit.>). Interestingly, as shown in Refs. <cit.>, for edge currents the effective affinity equals the additional force required to stall the current. This means that if we consider a modified process with rates 𝐪̃_xy/𝐪̃_yx =exp(f)𝐪_xy/𝐪_yx, then at stalling when j=0 it holds that f=a^∗. The stalling force property of a^∗ does not generalise to generic currents. Nevertheless, the effective affinity is a stalling force for currents that belong to [J^x y_t], as such currents have the same effective affinity as J^xy_t. Note that the equivalence class [J^x y_t] contains currents that are not edge currents, as we illustrate next in a biophysical model of a molecular motor. Effective affinity in a mechanochemical model of kinesin-1. We analyse the effective affinity for the positional current of a molecular motor bound to a one-dimensional substrate. Specifically, we use the mechanochemical model for kinesin-1 from Ref. <cit.>. In this model, the molecular motor can step forward through multiple biochemical pathways, as shown in Fig. <ref>(a), and the kinesin-1 steps consist of two substeps, consistent with experimental data <cit.>. Therefore, the positional current sums up contributions from multiple edges, viz., the edges (1,3), (4,2) and (2,3) (see Supplementary Material <cit.> for details). Nevertheless, the positional current J in this biophysical model belongs to [J^2 3_t], and hence the stalling force property holds. 
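As a small check of the edge-current formula quoted above, the sketch below (NumPy; the three-state rates are the same arbitrary toy values used earlier) removes the observed edge, recomputes the stationary state of the modified chain, and evaluates the resulting effective affinity; for this single-cycle network it should agree with the root of the tilted generator.

import numpy as np

Q = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])
np.fill_diagonal(Q, -Q.sum(axis=1))

def stationary(Q):
    evals, evecs = np.linalg.eig(Q.T)
    p = np.real(evecs[:, np.argmin(np.abs(evals))])
    return p / p.sum()

def edge_affinity(Q, x, y):
    # a* = ln( p'_ss(x) q_xy / (p'_ss(y) q_yx) ), with p'_ss the stationary state of the
    # chain in which the rates along the (x, y) edge have been set to zero
    Qmod = Q.copy()
    Qmod[x, y] = 0.0
    Qmod[y, x] = 0.0
    np.fill_diagonal(Qmod, 0.0)
    np.fill_diagonal(Qmod, -Qmod.sum(axis=1))        # rebuild exit rates without the (x, y) edge
    p = stationary(Qmod)
    return np.log(p[x] * Q[x, y] / (p[y] * Q[y, x]))

print(edge_affinity(Q, 0, 1))   # should match a* from the tilted-generator sketch above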
Since the selected current is a displacement, the effective affinity has the same dimensions as a mechanical force. In the present model, this is more than a dimensional analogy, as the effective affinity equals the additional force required to stall the molecular motor, i.e., a^∗ = f_0-f, where f is the mechanical force opposing forward motion and f_0 is the value of f for which the motor stalls. Indeed, as shown in Fig. <ref>(b), a^∗ decreases linearly as a function of f with slope equal to -1, and by definition a^∗ vanishes when the positional current j vanishes. Discussion. We have introduced the concept of an effective affinity for a generic current, which is a unique real number associated with currents in Markov processes that quantifies several physical properties of fluctuating currents. Notably, the effective affinity multiplied by the average current lower bounds dissipation, see Eq. (<ref>) and the effective affinity is the exponential decay constant that characterises the tails of the infimum statistics of the current, see Eq. (<ref>). In addition, since the effective affinity is a generalisation of the edge affinity from Refs. <cit.> it admits, under certain physical conditions that we specified here, a stalling force interpretation. In mathematical models, the effective affinity can readily be computed from the tilted generator 𝐪̃. Getting estimates of effective affinities in experimental systems, such as molecular motors, is more challenging, but certainly not out of reach. For example, the extreme value statistics formula (<ref>) can be used to estimate a^∗, and we have shown that in a biophysical model of a kinesin-1 motor the effective affinity can be estimated from the motor's stalling force. From a methodological point of view, this Letter introduces a new class of martingales, M_t, associated to generic currents, which extends previous work on entropy production <cit.> and single edge currents <cit.>. Given the numerous properties of martingales, as outlined in <cit.>, the M_t add to existing techniques for studying current fluctuations. We thank Nikolas Nüsken for discussions and guidance, we thank Stefano Bo for a detailed reading of the manuscript, and we thank Friedrich Hübner, and Alvaro Lanza Serrano for fruitful discussions.
http://arxiv.org/abs/2406.07844v1
20240612032134
Understanding and Mitigating Compositional Issues in Text-to-Image Generative Models
[ "Arman Zarei", "Keivan Rezaei", "Samyadeep Basu", "Mehrdad Saberi", "Mazda Moayeri", "Priyatham Kattakinda", "Soheil Feizi" ]
cs.CV
[ "cs.CV" ]
Output-sensitive Conjunctive Query Evaluation Paraschos Koutris Received ============================================= § ABSTRACT Recent text-to-image diffusion-based generative models have the stunning ability to generate highly detailed and photo-realistic images and achieve state-of-the-art low FID scores on challenging image generation benchmarks. However, one of the primary failure modes of these text-to-image generative models is in composing attributes, objects, and their associated relationships accurately into an image. In our paper, we investigate this compositionality-based failure mode and highlight that imperfect text conditioning with CLIP text-encoder is one of the primary reasons behind the inability of these models to generate high-fidelity compositional scenes. In particular, we show that (i) there exists an optimal text-embedding space that can generate highly coherent compositional scenes which shows that the output space of the CLIP text-encoder is sub-optimal, and (ii) we observe that the final token embeddings in CLIP are erroneous as they often include attention contributions from unrelated tokens in compositional prompts. Our main finding shows that the best compositional improvements can be achieved (without harming the model's FID scores) by fine-tuning only a simple linear projection on CLIP's representation space in Stable-Diffusion variants using a small set of compositional image-text pairs. This result demonstrates that the sub-optimality of the CLIP's output space is a major error source. We also show that re-weighting the erroneous attention contributions in CLIP can also lead to improved compositional performances, however these improvements are often less significant than those achieved by solely learning a linear projection head, highlighting erroneous attentions to be only a minor error source. The code is available and can be accessed at <https://github.com/ArmanZarei/Mitigating-T2I-Comp-Issues>. § INTRODUCTION Text-to-image diffusion-based generative models <cit.> have achieved photo-realistic image generation capabilities on user-defined text prompts. However recent works <cit.> have designed compositionality benchmarks to show that these text-to-image models have low fidelity to simple compositionality prompts such as those consisting of attributes, objects, and their associated relations (, “a red book and a yellow vase”). This hinders the use of these generative models in various creative scenarios where the end-user wants to generate a scene where the composition is derived from words (and their relationships) in the prompt. Existing works <cit.> propose various ways to improve compositionality in text-to-image models. These works primarily rely on modifying the cross-attention maps by leveraging bounding box annotations and performing a small optimization in the latent space during inference. Recent methods based on fine-tuning <cit.> the UNet also lead to improved compositonality. Despite the progress, the core reasons behind compositionality failures in text-to-image models remain unclear. Understanding these reasons helps designing effective methods that can augment text-to-image models with improved compositional capabilities. In our paper, we investigate possible reasons behind compositionality failures in text-to-image generative models. We identify two sources of errors: (i) We observe that output token embeddings in CLIP have significant attention contributions from irrelevant tokens, thereby introducing errors in generation. 
We then compare the internal attention contributions in CLIP for compositional prompts to the T5 text-encoder which has been shown to display strong compositional capabilities in DeepFloyd[https://huggingface.co/DeepFloyd/IF-I-M-v1.0]. We quantitatively find that the T5 text-encoder displays significantly lesser erroneous attention contributions than CLIP, highlighting a potential reason towards its improved compositionality. (ii) Sub-optimality of CLIP output space on compositional prompts: We observe that optimizing the text embeddings, while utilizing a frozen Stable-Diffusion UNet, effectively generates images with compositional scenes. We find out that there exists a text-embedding space capable of generating highly coherent images with compositional scenes for various attributes (, color, texture, shape) which highlights that the existing CLIP output space is sub-optimal. These results indicate that the output space of the CLIP text-encoder could be further improved to enable text-to-image models to generate more accurate compositional scenes. Leveraging our observations on the deficiencies of the CLIP output space, we show that we can improve the output space of the CLIP text-encoder to better align with the optimal space by applying a linear projection on top of CLIP (see Figure  <ref>). This leads to stronger compositional performances. In particular, we propose Window-based Compositional Linear Projection (), a lightweight fine-tuning method that significantly enhances the model's performance on compositional prompts, achieving results comparable to other existing baselines (see Figure <ref>), while maintaining the model's clean accuracy, as indicated by a low FID on clean prompts. We also show that reweighting the erroneous attention contributions in CLIP can lead to improved compositional performances, however, the improvements often lag behind . This result shows that the sub-optimal alignment of the CLIP text-encoder to the UNet is a major error source compared to erroneous attention contributions in CLIP. Fine-tuning a subset of components of the diffusion model can result in an increase in the FID score for clean prompts. While fine-tuning only a linear projection partially mitigates this, we find that applying it over all the time steps results in an increase in FID. To mitigate this, we introduce where we only apply during the initial steps of generation, switching it off for the remaining steps. This enables the model to obtain a coherent compositional scene in early steps (crucial for compositional prompts) while retaining clean accuracy on surrounding prompts, as the generation in final steps is guided by the original text-encoder not the augmented one that maps to the optimized space. In summary, our contributions are as follows: * We perform an in-depth analysis of the reasons behind compositionality failures in open-source text-to-image generative models, highlighting two reasons for them. * Leveraging our observations, we propose for Stable Diffusion v-1.4 and v-2 which can augment the models with improved compositionality while preserving their clean accuracy on surrounding prompts. We observe improvements of 16.18%, 15.15%, and 9.51% on SD v1.4 and 14.35%, 11.14%, and 6% on SD v2 in VQA scores <cit.> across color, texture, and shape datasets, respectively. Our method achieves competitive VQA scores compared to other baselines while having better FID on clean prompts. 
Overall, our paper provides quantitative evidence elucidating the compositional challenges within text-to-image models and strong baselines to mitigate such issues. § BACKGROUND Compositionality in Text-to-Image Generative Models. A recent work <cit.> introduces a benchmark for testing compositionality in text-to-image models showing the susceptibility of open-source text-to-image models on simple compositional prompts. In addition, the authors also propose a fine-tuning baseline to augment text-to-image models with improved compositionality. The compositionality issue can also be addressed at inference time by modifying the cross-attention maps leveraging hand-crafted loss functions and bounding boxes generated from a language model <cit.>. However, <cit.> show that a data-driven and fine-tuning approach is more suitable towards improving compositionality in text-to-image models. Our paper specifically targets understanding the source of compositionality errors in text-to-image models which is one of the open research questions in this area. Interpretability of Text-to-Image Generative Models. There have been recent efforts to interpret text-to-image models like Stable Diffusion. DAAM <cit.> studies the generation process in diffusion models by analyzing cross-attention maps between text tokens and image pixels, highlighting their semantic precision. <cit.> use causal tracing to understand how knowledge is stored in models like Stable Diffusion v1 while <cit.> propose a mechanistic approach to localize knowledge in cross-attention layers of various text-to-image models. <cit.> explore concept decomposition in diffusion models. §.§ Text-to-image Diffusion Models: Training and Inference In diffusion models, noise is added to the data following a Markov chain across multiple time-steps t ∈ [0, T]. Starting from an initial random real image x_0 along with its caption c, (x_0, c) ∼𝒟, the noisy image at time-step t is defined as x_t = √(α_t)x_0 + √((1-α_t))ϵ. The denoising network denoted by ϵ_θ(x_t, c, t) is pre-trained to denoise the noisy image x_t to obtain x_t-1. For better training efficiency, the noising along with the denoising operation occurs in a latent space defined by z = ℰ(x), where ℰ is an encoder such as VQ-VAE <cit.>. Usually, the conditional input c to the denoising network ϵ_θ(.) is a text-embedding of the caption c through a text-encoder c = v_γ(c). The pre-training objective for diffusion models can be defined as follows: ℒ(θ) = 𝔼_(x_0, c) ∼𝒟, ϵ, t[ ϵ - ϵ_θ(z_t, c, t)_2^2], where θ is the set of learnable parameters in the UNet ϵ_θ. During inference, where the objective is to synthesize an image given a text-embedding c, a random Gaussian noise z_T∼𝒩(0,I) is iteratively denoised for a fixed range of time-steps to produce the final image. §.§ Dataset Collection We utilize the T2I-CompBench dataset <cit.>, focusing on three categories: color, texture, and shape, each with 1000 prompts. To generate high-quality images, we use three generative models: SD 1.4 <cit.>, DeepFloyd, and SynGen <cit.>, creating 100 samples per prompt with SD 1.4, 60 with DeepFloyd, and 50 with SynGen. This ensures a wide variety of generated images, leveraging each model's strengths. We focus on the disentangled BLIP-Visual Question Answering (VQA) score proposed by <cit.> as a metric for evaluating image quality. 
The VQA score assesses how accurately an image represents the compositional elements described in a prompt and is more closely correlated with human assessment than other metrics like CLIP-Score <cit.>. For each prompt, we combined all 210 samples from the three models and selected the top 30 with the highest VQA scores, ensuring the final dataset consisted of images that most accurately reflected the prompts. § SOURCE (I) : ERRONEOUS ATTENTION CONTRIBUTIONS IN CLIP In this Section, we leverage attention contributions <cit.> to analyze the text-embeddings of compositional prompts in the CLIP text-encoder (which is commonly used in many text-to-image models) and compare them with T5-text encoder of DeepFloyd, a model which results in stronger compositionality. Many of the compositional prompts from <cit.> have a decomposable template of the form a_i o_j + a_j o_j, where a_i, a_j are attributes (, “black”, “matted”) while o_i, o_j describe objects (, “car”, “bag”). We use attention contributions to understand how the text-embeddings of the compositional tokens (, a_i, a_j, o_i, o_j) are formed for both T5 and CLIP over the layers of these models. The attention mechanism in layer ℓ of a transformer consists of four weight matrices W_q, W_v, W_k, W_o <cit.>. Each of these weight matrices is divided into H heads denoted by W_q^h, W_v^h, W_k^h∈ℝ^d × d_h, W_o^h∈ℝ^d_h× d for all h ∈[H]. Note that d_h is the dimension of the internal token embeddings. We omit ℓ for simplicity, but each layer has its own attention matrices. These matrices are applied on the token embeddings of the output of layer ℓ -1, denoted by _j for token j in that layer. We denote by _j^h, _j^h, and _j^h the projection of _j on query, key, and value matrices of the h-th head of layer ℓ. More precisely, _j^h = _j W_q^h, _j^h = _j W_k^h, _j^h = _j W_v^h. The contribution of token j to token i in layer ℓ, denoted by _i,j, is computed as follows: _i, j = ∑_h=1^H_i, j^h _j^h W_o^h_2 where _i, j^h is the attention weight of token i to j in the h-th head of layer ℓ. Specifically, _i, .^h = Softmax( {⟨_i^h, _j^h⟩/√(d_h)}_j=1^n). Notably, _i, j is a significant metric that quantifies the contribution of a token j to the norm of a token i at layer ℓ. We employ this metric to identify layers in which important tokens highly attend to unintended tokens, or lowly attend to intended ones. We refer to Appendix <ref> for more details on attention contribution. §.§ Key Finding: T5 has less erroneous attention contributions than CLIP We refer to Figure <ref> that visualizes attention contribution of both T5 and CLIP text-encoder in the last layer (ℓ = 11) for the prompt "a green bench and a red car". Ideally, the attention mechanism should guide the token "car" to focus more on "red" than "green", but in the last layer of the CLIP text-encoder, "car" significantly attends to "green". In contrast, T5 shows a more consistent attention pattern, with "red" contributing more to the token "car" and "green" contributing more to the token "bench". We further conduct an extensive analysis on specific types of prompts, consisting of 780 prompts of color dataset and 582 prompts of texture dataset, each structured as “a_1 o_1 and a_2 o_2.” For each prompt, we obtain attention contributions in all layers and count the number of layers where unintended attention contributions occur. In the CLIP text-encoder, unintended attention occurs when o_2 attends more to a_1 than a_2. 
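As a concrete illustration of the attention-contribution metric defined above, the following sketch (NumPy, with random toy weights; real CLIP/T5 layers also include biases, layer normalisation, and causal masking, which are omitted here) computes the per-layer contributions for a single multi-head attention layer.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_contributions(X, Wq, Wk, Wv, Wo):
    # X: (n, d) token embeddings entering the layer.
    # Wq, Wk, Wv: lists of per-head (d, d_h) matrices; Wo: list of per-head (d_h, d) matrices.
    # Returns C with C[i, j] = sum_h A^h_{i,j} * || x_j W_v^h W_o^h ||_2.
    n, _ = X.shape
    C = np.zeros((n, n))
    for Wq_h, Wk_h, Wv_h, Wo_h in zip(Wq, Wk, Wv, Wo):
        d_h = Wq_h.shape[1]
        A = softmax(X @ Wq_h @ (X @ Wk_h).T / np.sqrt(d_h), axis=-1)   # (n, n) attention map
        out_norm = np.linalg.norm(X @ Wv_h @ Wo_h, axis=1)             # ||x_j^h W_o^h||_2 per token j
        C += A * out_norm[None, :]
    return C

# Tiny random example: 6 tokens, model dimension 8, two heads of dimension 4 (all values arbitrary).
rng = np.random.default_rng(0)
n, d, d_h, H = 6, 8, 4, 2
X = rng.normal(size=(n, d))
Wq = [rng.normal(size=(d, d_h)) for _ in range(H)]
Wk = [rng.normal(size=(d, d_h)) for _ in range(H)]
Wv = [rng.normal(size=(d, d_h)) for _ in range(H)]
Wo = [rng.normal(size=(d_h, d)) for _ in range(H)]
print(attention_contributions(X, Wq, Wk, Wv, Wo).round(2))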
For T5, it occurs when o_2 attends more to a_1 than a_2, or o_1 attends more to a_2 than a_1. Figure <ref> quantitatively compares unintended attention on various prompts between CLIP text-encoder and T5. The T5 model shows improvement in our metric over the CLIP text-encoder, supporting the hypothesis that erroneous attention mechanisms in CLIP may contribute to the poor compositionality performance of CLIP-based text-to-image models. §.§ Zero-shot Attention Reweighting Inspired by attention mechanism shortcomings of CLIP text-encoder, we aim to improve compositionality of CLIP-based diffusion models by zero-shot reweighting of the attention maps. Specifically, we apply a hand-crafted zero-shot manipulation of the attention maps in certain layers of the CLIP text-encoder to effectively reduce unintended attentions while enhancing meaningful ones. This zero-shot reweighting is applied to the logits before the Softmax layer in the last three layers of the text-encoder. To be more precise, we compute a matrix M ∈ℝ^n × n and add this matrix to the logits of attention mechanism. For each head h, the new attention values are obtained as follows and then propagated to the subsequent layers of the text-encoder: _i, .^'h = Softmax( {⟨_i^h, _j^h⟩/√(d_h) + M_i, j}_j=1^n). We set the values in M by considering the ideal case where no incorrect attentions occur in the mechanism. For example, for prompt “a green bench and a red car", we ensure that the token "car" does not attend to the token "green" by assigning a sufficiently large negative value to the corresponding entry in matrix M. Further details on how we obtain matrix M can be found in Appendix <ref>. Key Results. Applying zero-shot attention reweighting with matrix M on 780 compositional prompts of color dataset, we achieved a 2.93% improvement in VQA Scores. We provide examples of effective zero-shot reweighting, demonstrating its impact on mitigating compositionality issues in Appendix <ref>. From this result, we can infer that although erroneous attention contributions in the CLIP text-encoder is one source of error, it is not the primary error source due to modest improvements in compositional accuracy. In the next section, we investigate the sub-optimality of the output space of CLIP text-encoder, which we find to be a significant source of error. § SOURCE (II) : SUB-OPTIMALITY OF CLIP TEXT-ENCODER FOR COMPOSITIONAL PROMPTS In this section, we understand if the UNet is capable of generating compositional scenes by optimizing the text-embeddings that it takes as the conditional input. Given an input prompt c with a particular composition (, “a red book and a yellow table”), we utilize our dataset and obtain 𝒟_c including high-quality compositional images for prompt c. We then optimize the output text-embedding c as follows: c^* = min_c𝔼_x_0 ∼𝒟_c, ϵ, t[ ϵ - ϵ_θ(z_t, c, t)_2^2]. We then use c^* to generate images using the UNet ϵ_θ across different seeds. Figure <ref> depicts a few of generated images using optimized text-embeddings. Key Results. As seen in Figure <ref>, we consistently improve VQA scores across a variety of compositional prompts (, color, texture, and shape). This indicates that CLIP text-encoder does not output the proper text-embedding suitable for generating compositional scenes. However, that optimized embedding space exists, highlighting the ability of UNet to generate coherent compositional scenes when a proper text-embedding is given. 
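A minimal sketch of this embedding optimisation is given below (PyTorch; the function signature, the use of precomputed VAE latents, and the diffusers-style UNet call returning .sample are assumptions made for illustration, not the exact training code). Only the text embedding receives gradient updates; the UNet is assumed to be frozen.

import torch
import torch.nn.functional as F

def optimize_text_embedding(c_init, latents, unet, alphas_cumprod, steps=500, lr=1e-2):
    # c_init:         (1, n_tokens, d) CLIP output embedding for the prompt (starting point).
    # latents:        (N, 4, 64, 64) VAE latents of high-VQA images for this prompt (assumed precomputed).
    # unet:           frozen denoising network, assumed to follow the diffusers convention
    #                 unet(z_t, t, encoder_hidden_states=c).sample.
    # alphas_cumprod: (T,) noise schedule of the pretrained model.
    c = c_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([c], lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, latents.shape[0], (1,))
        x0 = latents[idx]
        t = torch.randint(0, alphas_cumprod.shape[0], (1,))
        eps = torch.randn_like(x0)
        a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
        z_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps       # forward noising
        loss = F.mse_loss(unet(z_t, t, encoder_hidden_states=c).sample, eps)
        opt.zero_grad(); loss.backward(); opt.step()                        # only c is updated
    return c.detach()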
This further motivates the idea of improving CLIP output space to mitigate compositionality issues in text-to-image diffusion models. We refer to Appendix <ref> for other configurations of optimizing text-embeddings where we observe that optimizing only a subset of few tokens, can also effectively improve compositionality. § LINEAR PROJECTION ON CLIP: A SIMPLE BASELINE TO IMPROVE COMPOSITIONALITY IN TEXT-TO-IMAGE GENERATIVE MODELS In this Section, we provide two baselines and that are linear modification of CLIP output to map that sub-optimal space to an enhanced one, better suited for compositionality. §.§ : Token-wise Compositional Linear Projection Given the text-embedding c∈ℝ^n × d as the output of the text-encoder for prompt c, i.e., c = v_γ(c), we train a linear projection _W, b: ℝ^n × d→ℝ^n × d. This projection includes a matrix W ∈ℝ^d × d and a bias term b ∈ℝ^d, which are applied token-wise to the output text-embeddings of the encoder. More formally, for c∈ℝ^n× d including text-embeddings of n tokens c_1, c_2, …, c_n ∈ℝ^d, _W, b(c) is obtained by stacking projected embeddings c'_1, c'_2, …, c'_n where c'_i = W^T c_i + b. Finally, we solve the following optimization problem on a dataset 𝒟 including image-caption pairs of high-quality compositional images: W^*, b^* = min_W, b𝔼_(x_0, c) ∼𝒟, ϵ, t[ ϵ - ϵ_θ](z_t, _W, b(c), t)_2^2 ]. We then apply _W^*, b^* on CLIP text-encoder to obtain improved embeddings. §.§ : Window-based Compositional Linear Projection In this Section, we propose a more advanced linear projection scheme where the new embedding of a token is derived by applying a linear projection on that token in conjunction with a set of its adjacent tokens, i.e., tokens within a specified window. This method not only leverages the benefits of but also incorporates the contextual information from neighboring tokens, potentially leading to more precise text-embeddings. More formally, we train a mapping _W, b : ℝ^n × d→ℝ^n × d including a parameter s (indicating window length), matrix W ∈ℝ^(2s+1)d × d, and a bias term b ∈ℝ^d. For text-embeddings c∈ℝ^n × d consisting of n token embeddings of c_1, c_2, …, c_n ∈ℝ^d, we obtain _W, b by stacking projected embeddings c'_1, c'_2, …, c'_n where c'_i = W^T Concatenation((c_j)_j=i-s^i+s) + b Similarly, we solve the following optimization problem to train the projection: W^*, b^* = min_W, b𝔼_(x_0, c) ∼𝒟, ϵ, t[ ϵ - ϵ_θ](z_t, _W, b(c), t)_2^2 ]. Note that we use s=2, i.e., window length of 5 in our experiments. §.§ : Trade-off between Compositionality and Clean Accuracy Fine-tuning models or adding supplementary modules to a base model often results in a degradation of image quality and an increase in the Fréchet Inception Distance (FID) score. To balance the trade-off between improved compositionality and the quality of generated images for clean prompts – an important issue in existing work – inspired by <cit.>, we adopt , where we apply the linear projection only during the initial steps of inference. Specifically, given a time-step threshold τ, for t ≥τ, we use _W^*, b^*(c), while for t < τ, we use the unchanged embedding c as the input to the cross-attention layers. Figure <ref> illustrates the trade-off between VQA score and FID on a randomly sampled subset of MS-COCO <cit.> for different choices of τ. As shown, even a large value of τ suffices for obtaining high-quality compositional scenes as the composition of final generated image is primarily formed at early steps. 
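The sketch below (PyTorch) gives one possible implementation of the window-based projection and of the time-step switch; the zero-padding at the sequence boundary and the residual (skip) formulation, which makes the module start as the identity under zero initialisation, are assumptions consistent with the training setup described later rather than details spelled out in the equations above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WindowLinearProjection(nn.Module):
    # Window-based compositional linear projection (window length 2s + 1).
    # Each output token embedding is a learned linear map of the concatenation of the token
    # and its s neighbours on each side; zero-padding at the boundary is an illustrative choice.
    def __init__(self, d=768, s=2):
        super().__init__()
        self.s = s
        self.proj = nn.Linear((2 * s + 1) * d, d)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, c):                      # c: (batch, n_tokens, d)
        s = self.s
        padded = F.pad(c, (0, 0, s, s))        # pad the token dimension on both sides
        windows = torch.cat([padded[:, i:i + c.shape[1], :] for i in range(2 * s + 1)], dim=-1)
        return c + self.proj(windows)          # skip connection: output equals c at initialisation

def conditioned_embedding(c, module, t, tau=800):
    # Switch-Off idea: use the projected embedding only for the early (large t) diffusion steps.
    return module(c) if t >= tau else c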
Thus, choosing a large τ preserves the model's improved compositionality while maintaining its clean accuracy. Setting τ = 800 offers a competitive VQA score compared to the model where projection is applied at all time steps, and achieves a competitive FID similar to that of the clean model. Figure <ref> depicts a few images generated using different choices of τ. We refer to Appendix <ref> for more visualizations. §.§ Experiments Training Setup. All of the models are trained using the objective function of diffusion models on color, texture, and shape datasets. During training, we keep all major components frozen, including the U-Net, CLIP text-encoder, and VAE encoder and decoder, and only the linear projections are trained. We refer to Appendix <ref> for details on training procedure. Improved cross-attention maps. Figure <ref> illustrates cross-attention maps for a sample prompt. In the baseline model, the attention maps are flawed, with some tokens incorrectly attending to the wrong pixels. However, with both and , objects and attributes more accurately attend to their respective pixels. For more visualizations, see Appendix <ref>. Results. Figure <ref> presents images generated when applying and . When generating compositional prompts with a baseline model, objects are often missing or attributes are incorrectly applied. However, with and , objects and their corresponding attributes are more accurately generated. We refer to Appendix <ref> for more visualizations. We compare VQA scores of our proposed methods and other state-of-the-art methods in Table <ref>. As shown, both and significantly improve upon the baselines. achieves higher VQA scores compared to other state-of-the-art methods, despite its simplicity. For evaluation, we relied on the VQA score, identified by <cit.> as the most informative metric. While GORS <cit.> achieves a comparable VQA score, it fine-tunes the entire model, whereas our method uses a simple linear projection with about 200 times fewer parameters. Additionally, our approach yields lower FID scores, highlighting another advantage. More precisely, the FID scores for the models are: GORS (30.54), SD v1.4 (24.33), SD v2 (23.27), on top of SD v1.4 (25.40), and on top SD v2 (27.40) when using with τ = 800. Comparison between and . We observe that improves over (special case of with s=0) by incorporating adjacent tokens in addition to the actual token. This approach likely improves embeddings by mitigating unintended attention from adjacent tokens. For discussion on choosing the window length (s) in , see Appendix <ref>. § CONCLUSION Our paper examines potential error sources in text-to-image models for generating images from compositional prompts. We identify two error sources: (i) A minor error source, where the token embeddings in the CLIP text-encoder have erroneous attention contributions and (ii) A major error source, where we find the output space of the CLIP text-encoder to be sub-optimally aligned to the UNet for compositional prompts. Leveraging our observations, we propose a simple and strong baseline which involves fine-tuning a linear projection on CLIP's representation space. though inherently simple and parameter efficient, outperforms existing methods on compositional image generation benchmarks and maintains a low FID score on a broader range of clean prompts. We discuss limitations in Appendix <ref>. 
§ ACKNOWLEDGEMENT This project was supported in part by a grant from an NSF CAREER AWARD 1942230, ONR YIP award N00014-22-1-2271, ARO’s Early Career Program Award 310902-00001, Army Grant No. W911NF2120076, the NSF award CCF2212458, NSF Award No. 2229885 (NSF Institute for Trustworthy AI in Law and Society, TRAILS), an Amazon Research Award and an award from Capital One. abbrvnat § LIMITATIONS In this paper, we have thoroughly analyzed one of the key reasons why Stable Diffusion struggles to generate compositional prompts and proposed a lightweight method to mitigate this issue. However, there remains significant room for improvement in this area. Our approach focuses on improving the text encoder, which we identified as a major source of error. There are potentially other sources of the issue within the entire generative model pipeline that need to be explored. Additionally, our method involves a small fine-tuning step using a simple linear projection. Future work could explore alternative approaches, such as more sophisticated fine-tuning techniques, advanced attention mechanisms, or hybrid models that integrate multiple strategies. § OPTIMIZING THE TEXT-EMBEDDING OF A SUBSET OF TOKENS Given c∈ℝ^n× d, where n refers to the number of tokens and d refers to the dimensionality of the text-embedding, for the second configuration we only optimize a subset of tokens n' ∈ n. We refer to this subset of tokens as c'. These tokens correspond to relevant parts of the prompt which govern compositionality (e.g., “red book” and “yellow table” in “A red book and an yellow table”). c'^* = min_c'𝔼_ϵ, t || ϵ - ϵ_θ(z_t, c', t) ||_2^2, Figure <ref> shows the results for the sample prompt "a red book and a yellow vase". We considered different subsets of tokens n': adjectives ("red" and "yellow"), nouns ("book" and "vase"), both nouns and adjectives, and all tokens in the sentence. The results indicate that optimizing even a few tokens significantly improves the VQA score. However, optimizing all tokens in the sentence yields the highest score. § SOURCE (I) : ERRONEOUS ATTENTION CONTRIBUTIONS IN CLIP §.§ Attention Contribution In this Section, we provide more details on our analysis to quantitatively measure tokens' contribution to each other in a layer of attention mechanism. One natural way of doing this analysis is to utilize attention maps _i, j^h and aggregate them over heads, however, we observe that this map couldn't effectively show the contribution. Attention map does not consider norm of tokens in the previous layer, thus, does not provide informative knowledge on how each token is formed in the attention mechanism. In fact, as seen in Figure <ref>, we cannot obtain much information by looking at these maps while attention contribution clearly shows amount of norm that comes from each of the attended tokens. §.§ Zero-shot Attention Reweighting To fix unintended attentions, we aim to compute a matrix M to be applied across various heads in the last few layers of CLIP, reducing the effect of wrong attention, leading to more accurate text-embeddings that are capable of generating high-quality compositional scenes. To avoid unintended attention for prompts of the form “a_1 o_1 + a_2 o_2", we add large negative values to entries M_o_2, a_1, M_a_2, a_1, and some positive value to M_o_2, a_2 and M_o_1, a_1, and small negative value to M_o_2, o_1. 
To find what values to assign to those entries, we consider a small set of prompts in color dataset (5 prompts in total) and obtain parameters for that matrix to maximize VQA score. Figure <ref> shows few examples of zero-shot modification. § TRAINING SETUP In this section, we present the details of the experiments conducted to evaluate our proposed methods. The training is performed for 25,000 steps with a batch size of 4. An RTX A5000 GPU is used for training models based on Stable Diffusion 1.4, while an RTX A6000 GPU is used for models based on Stable Diffusion 2. We employed the Adam optimizer with a learning rate of 1 × 10^-5 and utilized a Multi-Step learning rate scheduler with decays (α=0.1) at 10,000 and 16,000 steps. For the , a window size of 5 was used. All network parameters were initialized to zero, leveraging the skip connection to ensure that the initial output matched the CLIP text embeddings. Our implementation is based on the Diffusers[https://github.com/huggingface/diffusers] library, utilizing their modules, models, and checkpoints to build and train our models. This comprehensive setup ensured that our method was rigorously tested under controlled conditions, providing a robust evaluation of its performance. § AND VISUALIZATION In this section, we provide additional visualizations comparing , , and baseline models in Figures <ref>, <ref>. § VISUALIZATION OF CROSS-ATTENTIONS In this section, we provide additional cross-attention map visualizations in Figures <ref> and <ref>. § VISUALIZATION OF In this section, we present more qualitative samples illustrating the effect of at different timestep thresholds for various prompts in Figures <ref> and <ref>. § CHOICE OF WINDOW LENGTH IN One might suggest that instead of using token-wise linear projection () or a window-based linear projection with a limited window (), employing a linear projection that considers all tokens when finding a better embedding for each token might yield better results. However, our thorough quantitative study and experiments tested various window sizes for . We found that using a window size of 5 achieves the highest performance.
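The optimisation schedule described in the training setup can be reproduced with standard PyTorch components. The sketch below uses a stand-in linear module and a dummy loss purely so that the schedule runs end to end; in the actual setup the trainable module would be the (window-based) projection, with the UNet, CLIP text-encoder, and VAE all kept frozen and the denoising objective in place of the placeholder loss.

import torch
import torch.nn as nn

projection = nn.Linear(768, 768)                 # stand-in for the trained projection module
optimizer = torch.optim.Adam(projection.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10_000, 16_000], gamma=0.1)

for step in range(25_000):                        # 25k steps, batch size 4 in the reported setup
    dummy_batch = torch.randn(4, 768)
    loss = projection(dummy_batch).pow(2).mean()  # placeholder for the diffusion denoising loss
    optimizer.zero_grad(); loss.backward(); optimizer.step(); scheduler.step()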
http://arxiv.org/abs/2406.08654v1
20240612213322
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization
[ "Yuhang Cai", "Jingfeng Wu", "Song Mei", "Michael Lindsey", "Peter L. Bartlett" ]
stat.ML
[ "stat.ML", "cs.LG", "math.OC" ]
Interacting holes in a gated WSe_2 quantum channel: valley correlations and zigzag Wigner crystal Pawel Hawrylak ================================================================================================== § ABSTRACT The typical training of neural networks using large stepsize gradient descent (GD) under the logistic loss often involves two distinct phases, where the empirical risk oscillates in the first phase but decreases monotonically in the second phase. We investigate this phenomenon in two-layer networks that satisfy a near-homogeneity condition. We show that the second phase begins once the empirical risk falls below a certain threshold, dependent on the stepsize. Additionally, we show that the normalized margin grows nearly monotonically in the second phase, demonstrating an implicit bias of GD in training non-homogeneous predictors. If the dataset is linearly separable and the derivative of the activation function is bounded away from zero, we show that the average empirical risk decreases, implying that the first phase must stop in finite steps. Finally, we demonstrate that by choosing a suitably large stepsize, GD that undergoes this phase transition is more efficient than GD that monotonically decreases the risk. Our analysis applies to networks of any width, beyond the well-known neural tangent kernel and mean-field regimes. § INTRODUCTION Neural networks are mostly optimized by gradient descent (GD) or its variants. Understanding the behavior of GD is one of the key challenges in deep learning theory. However, there is a nonnegligible discrepancy between the GD setups in theory and in practice. In theory, GD is mostly analyzed with relatively small stepsizes such that its dynamics are close to the continuous gradient flow dynamics, although a few exceptions will be discussed later. However, in practice, GD is often used with a relatively large stepsize, with behaviors significantly deviating from that of small stepsize GD or gradient flow. Specifically, notice that small stepsize GD (hence also gradient flow) induces monotonically decreasing empirical risk, but in practice, good optimization and generalization performance is usually achieved when the stepsize is large and the empirical risk oscillates <cit.>. Therefore, it is unclear which of the theoretical insights drawn from analyzing small stepsize GD apply to large stepsize GD used practically. The behavior of small stepsize GD is relatively well understood. For instance, classical optimization theory suggests that GD minimizes convex and L-smooth functions if the stepsize η̃ is well below 2/L, with a convergence rate of (1/(η̃t)), where t is the number of steps <cit.>. More recently, <cit.> show an implicit bias of small stepsize GD in logistic regression with separable data, where the direction of the GD iterates converges to the max-margin direction. Subsequent works extend their implicit bias theory from linear model to homogenous networks <cit.>. These theoretical results all assume the stepsize of GD is small (and even infinitesimal) such that the empirical risk decreases monotonically and, therefore cannot be directly applied to large stepsize GD used in practice. More recently, large stepsize GD that induces oscillatory risk has been analyzed in simplified setups <cit.>. In particular, in logistic regression with linearly separable data, <cit.> showed that the implicit bias of GD (that maximizes the margin) holds not only for small stepsizes <cit.> but also for an arbitrarily large stepsize. 
In the same problem, <cit.> further showed that large stepsize GD that undergoes risk oscillation can achieve an (1/t^2) empirical risk, whereas small stepsize GD that monotonically decreases the empirical risk must suffer from a Ω(1/t) empirical risk. Nonetheless, these theories of large stepsize GD are limited to relatively simple setups such as linear models. The theory of large stepsize GD for non-linear networks is underdeveloped. This work fills the gap by providing an analysis of large stepsize GD for non-linear networks. In the following, we set up our problem formally and summarize our contributions. Setup. Consider a binary classification dataset (_i,y_i)_i=1^n, where _i∈^d is a feature vector and y_i∈{± 1} is a binary label. For simplicity, we assume _i≤ 1 for all i throughout the paper. For a predictor f, the empirical risk under logistic loss is defined as L() := 1/n∑_i=1^n ℓ( y_i f(; _i)), ℓ(t) := log(1+e^-t). Here, the predictor f(; ·): ^d ↦ is parameterized by trainable parameters and is assumed to be continuously differentiable with respect to . The predictor is initialized from _0 and then trained by gradient descent (GD) with a constant stepsize η̃>0, that is, _t+1 := _t - η̃∇ L(_t), t≥ 0. GD We are interested in a non-linear predictor f and a large stepsize η̃. A notable example in our theory is two-layer networks with Lipschitz, smooth, and nearly homogenous activations (see <Ref>). Note that minimizing L() is a non-convex problem in general. Observation. Empirically, large stepsize GD often undergoes a phase transition, where the empirical risk defined in <Ref> oscillates in the first phase but decreases monotonically in the second phase (see empirical evidence in Appendix A in <cit.> and a formal proof in <cit.> for linear predictors). This is illustrated in <Ref> (the experimental setup is described in <Ref>). We follow <cit.> and call the two phases the edge of stability (EoS) phase <cit.> and the stable phase, respectively. Contributions. We prove the following results for large stepsize GD for training non-linear predictors under logistic loss. * For Lipschitz and smooth predictor f trained by GD with stepsize η̃, we show that as long as the empirical risk is below a threshold depending on η̃, GD monotonically decreases the empirical risk (see <Ref>). This result extends the stable phase result in <cit.> from linear predictors to non-linear predictors, demonstrating the generality of the existence of a stable phase. * Assuming that GD enters the stable phase, if in addition the preditor has a bounded homogenous error (see <Ref>), we show that the normalized margin induced by GD, min_i y_i f(_t;_i)/ _t, nearly monotonically increases (see <Ref>). To the best of our knowledge, this is the first characterization of implicit bias of GD for non-homogenous predictors. In particular, our theory covers two-layer networks with commonly used activations functions (which are often non-homogenous) that cannot be covered by existing results <cit.>. * Under additional technical assumptions (the dataset is linearly separable and the derivative of the activation function is bounded away from zero), we show that the initial EoS phase must stop in (η̃) steps and GD transits to the stable phase afterwards. Furthermore, by choosing a suitably large stepsize, GD achieves a (1/t^2) empirical risk after t steps. In comparison, GD that converges monotonically incurs an Ω(1/t) risk. 
This result indicates an optimization benefit of using large stepsize and generalizes the results in <cit.> from linear predictors to neural networks. § STABLE PHASE AND MARGIN IMPROVEMENT In this section, we present our results for the stable phase of large stepsize GD in training non-linear predictors. Specifically, our results apply to non-linear predictors that are Lipschitz, smooth, and nearly homogeneous, as described by the following assumption. [Model conditions] Consider a predictor f(;_i), where _i is one of the feature vectors in the training set. * Lipschitzness. Assume there exists ρ>0 such that for every , sup_i ∇_ f(;_i)≤ρ. * Smoothness. Assume there exists β>0 such that for all ,, f(;_i) - f(;_i)≤β -, i=1,…,n. * Near homogeneity. Assume there exists κ>0 such that for every , |f(;_i) - ⟨∇_ f(; _i), ⟩ | ≤κ, i=1,…,n. <Ref> are commonly used conditions in the optimization literature. If κ=0, then <Ref> requires the predictor to be exactly 1-homogenous. Our <Ref> allows the predictor to have a bounded homogenous error. It is clear that linear predictors f(;) : = ^⊤ satisfy <Ref> with ρ=sup_i_i≤ 1, β=0, and κ = 0. Another notable example is two-layer networks given by f(; ) := 1/m∑_j=1^m a_j ϕ(^⊤^(j)), ^(j)∈^d, j=1,…,m, where we assume a_j ∈{± 1} are fixed and = (^(j) )_j=1^m ∈^md are the trainable parameters. We define two-layer networks with the mean-field scaling <cit.>. However, our results hold for any width. The effect of rescaling the model will be discussed in <Ref>. The following example shows that <Ref> covers two-layer networks with many commonly used activations ϕ(·). The proof is provided in <Ref>. [Two-layer networks] Two-layer networks defined in (<ref>) with the following activation functions satisfy <Ref> with the described constants: * GELU. ϕ(x) := x/2( 1+(x/ √(2))) with κ= e^-1/2/√(2π), β=2/m, and ρ = (√(2π)+ e^-1/2)/√(2π m). * Softplus. ϕ(x) := log(1+e^x) with κ=log2, β=1/m, and ρ = 1/√(m). * Sigmoid. ϕ(x) := 1/(1+e^-x) with κ=1, β=2/m, and ρ = 1/√(m). * Tanh. ϕ(x) := tanh(x) with κ=5, β=2/m, and ρ = 1/√(m). * SiLU. ϕ(x) := x/(1+e^-x) with κ=1, β=4/m, and ρ = 2/√(m). * Huberized ReLU <cit.>. For a fixed h>0, ϕ(x) 0 x<0, x^2/2 h 0≤ x ≤ h, x-h/2 x>h, with κ=h/2, β=1/(hm), and ρ=1/√(m). Margin for nearly homogenous predictors. For a nearly homogenous predictor f(;·) (see <Ref>), we define its normalized margin (or margin for simplicity) as γ̅() := min_i∈[n] y_i f(;_i)/. A large normalized margin γ̅() guarantees the prediction of each sample is away from the decision boundary. The normalized margin <Ref> is introduced by <cit.> for homogenous predictors. However, we show that the same notion is also well-defined for non-homogenous predictors that satisfy <Ref>. The next theorem gives sufficient conditions for large stepsize GD to enter the stable phase in training non-homogenous predictors and characterizes the increase of the normalized margin. The proof of <Ref> is deferred to <Ref>. Consider <Ref> with stepsize η̃ on a predictor f(;) that satisfies <Ref>. If there exists r ≥ 0 such that L(_r) ≤1/η̃(2ρ^2 + β) , then GD is in the stable phase for t≥ r, that is, (L(_t))_t≥ r decreases monotonically. If additionally the predictor satisfies <Ref> and there exists s ≥ 0 such that L(_s) ≤min{1/e^κ+22n, 1/η̃(4ρ^2 + 2β)} , we have the following for t ≥ s: * Risk convergence. L(_t) = Θ(1/t), where the constants depend on L(_s), _s, η̃ and ρ. * Parameter increase. _t+1≥_t and _t = Θ(log (t)), where the constants depend on n, ρ, κ,η̃, _s, L(_s) and f(_s;_i). 
* Margin improvement. There exists a modified margin function γ^c() such that * γ^c(_t) is increasing and bounded. * γ^c(_t) is a multiplicative approximiator of γ̅(_t), that is, there exists c>0 such that γ^c(_t) ≤γ̅(_t) ≤(1+c/log (1/L(_t))) γ^c(_t), t≥ s. As a direct consequence, lim_t→∞γ̅(_t) =lim_t→∞γ^c(_t). <Ref> shows that for an arbitrarily large stepsize η̃, GD must enter the stable phase if the empirical risk falls below a threshold depending on η̃ given by <Ref>. In the next section, We will show that under additional technical conditions, large stepsize GD is guaranteed to satisfy <Ref> even though with an initially oscillatory risk. Furthermore, for nearly homogenous predictors, <Ref> shows that under a stronger risk threshold condition <Ref>, the risk must converge at a Θ(1/t) rate and that the normalized margin nearly monotonically increases. This demonstrates an implicit bias of GD, even when used with a large stepsize and the trained predictor is non-homogenous. Our <Ref> makes several important extensions compared to existing results. First, <Ref> suggests that the stable phase happens for general non-linear predictors such as two-layer networks, while the work by <cit.> only studied the stable phase for linear predictors. Second, the margin improvement is only known for small (and even infinitesimal) stepsize GD and homogenous predictors <cit.>. To the best of our knowledge, <Ref> is the first implicit bias result that covers large stepsize GD and non-homogenous predictors. From a technical perspective, our proof uses techniques introduced by <cit.> for analyzing homogenous predictors. Our main innovations are constructing new auxiliary margin functions that can deal with errors caused by large stepsize and non-homogeneity. More details are discussed in <Ref>. § EDGE OF STABILITY PHASE Our stable phase results in <Ref> require the risk to be below a certain threshold (see <Ref>). In this section, we show that the risk can indeed be below the required threshold, even when GD is used with large stepsize. Recall that minimizing the empirical risk with a non-linear predictor is non-convex, therefore solving it by GD is hard in general. We make additional technical assumptions to conquer the challenges caused by non-convexity. We conjecture that these technical assumptions are not necessary and can be relaxed. We focus on two-layer networks <Ref>. We make the following assumptions on the activation function. [Activation function conditions] In the two-layer network <Ref>, let the activation function ϕ:→ be continuously differentiable. Moreover, * Derivative condition. Assume there exists 0<α<1 such that α≤ | ϕ^'(z)| ≤ 1. * Smoothness. Assume there exists β̃>0 such that for all x,y∈, |ϕ^'(x) - ϕ^'(y)| ≤β̃|x-y|. * Near homogeneity. Assume there exists κ>0 such that for every z∈, |ϕ(z) - ϕ^' (z)z | ≤κ. Recall that sup_i_i≤ 1. One can then check by direct computation that, under <Ref>, two-layer networks <Ref> satisfy <Ref> with ρ=1/√(m), β=β̃/m, and κ=κ. <Ref> cover many commonly used activation functions. In <Ref>, we assume |ϕ'(z)| ≤ 1. This is just for the simplicity of presentation and our results can be easily generalized to allow |ϕ'(z)| ≤ C for a constant C>0. The other condition in <Ref>, |ϕ'(z)| ≥α, however, is non-trivial. This condition is widely used in literature <cit.> to facilitate GD analysis. 
Technically, this condition guarantees that each neuron in the two-layer network <Ref> will always receive a non-trivial gradient in the GD update; otherwise, neurons may be frozen during the GD update. Furthermore, commonly used activation functions can be combined with an identity map to satisfy <Ref>. This is formalized in the following example. The proof is provided in <Ref>. [Leaky activation functions] Fix 0.5≤ c<1. For each ϕ in <Ref> except the Huberized ReLU, its modification ϕ̃(x) := c x + (1-c)/4 ·ϕ(x) satisfies <Ref> with κ=1, α=0.25, and β̃= 1. The modification of the Huberized ReLU satisfies <Ref> with κ=h/2, α=0.5, and β̃= 1/4h. In particular, the modification of softplus can be viewed as a smoothed leaky ReLU. For the technical difficulty of non-convex optimization, we also need to assume a linearly separable dataset to conduct our EoS phase analysis. [Linear separability] Assume there is a margin γ >0 and a unit vector _* such that y_i_i^⊤_* ≥γ for every i=1,… ,n. The following theorem shows that when GD is used with large stepsizes, the average risk must decrease even though the risk may oscillate locally. The proof of <Ref> is defered to <Ref>. Under <Ref>, consider <Ref> on two-layer networks <Ref> that satisfy <Ref>. Denote the stepsize by η̃:= m η, where m is the network width and η can be arbitrarily large. Then for every t>0, we have 1/t∑_k=0^t-1 L (_k ) ≤1+8log ^2(γ^2 η t)/α ^2+ 8κ^2/α ^2+η^2 /γ^2 η t + _0^2/ mη t = (log^2(η t) + η^2/η t). <Ref> suggests that the average risk of training two-layer networks decreases even when GD is used with large stepsize. Consequently, the risk thresholds <Ref> for GD to enter the stable phase must be satisfied after a finite number of steps. This will be discussed in depth in the next section. Compared to Theorem 1 in <cit.>, <Ref> extends their EoS phase bound from linear predictors to two-layer networks. § PHASE TRANSITION AND FAST OPTIMIZATION For two-layer networks trained by large stepsize GD, <Ref> shows that the average risk must decrease over time. Combining this with <Ref>, GD must enter the stable phase in finite steps, and the loss must converge while the normalized margin must improve. However, a direct application of <Ref> only leads to a suboptimal bound on the phase transition time. Motivated by <cit.>, we establish the following sharp bound on the phase transition time by tracking the gradient potential (see <Ref>). The proof of <Ref> is deferred to <Ref>. Under <Ref>, consider <Ref> on two-layer networks <Ref> that satisfy <Ref>. Clearly, the two-layer networks also satisfy <Ref> with ρ=1/√(m), β=β̃/m, and κ=κ. Denote the stepsize by η̃:= m η, where m is the network width and η>0 can be arbitrarily large. * Phase transition time. There exists s≤τ such that (<ref>) in <Ref> holds, where τ := 128(1+4κ)/α ^2max{ c_1η, c_2n, e, c_2η+c_1n/ηlogc_2η+c_1n/η, (c_2η+c_1n)/η·_0/√(m)}, where c_1 := 4e^κ+2 and c_2 := (8+4β̃). Therefore <Ref> is in the stable phase from s onwards. * Explicit risk bound in the stable phase. We have (L(_t))_t≥ s monotonically decreases and L(_t) ≤2/α ^2 γ ^2 η (t-s), t≥ s. <Ref> together characterize the behaviors of large stepsize GD in training two-layer networks. Specifically, large stepsize GD may induce an oscillatory risk in the beginning; but the averaged empirical risk must decrease (<Ref>). After the empirical risk falls below a certain stepsize-dependent threshold, GD enters the stable phase, where the risk decreases monotonically (<Ref>). 
Finally, the normalized margin <Ref> induced by GD increases nearly monotonically as GD stays in the stable phase (<Ref>). Fast optimization. Our bounds for two-layer networks are comparable to those for linear predictors shown by <cit.>. Specifically, when used with a larger stepsize, GD achieves a faster optimization in the stable phase but stays longer in the EoS phase. Choosing a suitably large stepsize that balances the steps in EoS and stable phases, we obtain an accelerated empirical risk in the following corollary. The proof is included in <Ref>. Under the same setup as in <Ref>, consider (<ref>) with a given budget of T steps such that T ≥256(1+4κ)/α ^2 γ ^2max{c_1 n, 4c_2^2, 2c_2_0/√(m)}, where c_1 := 4e^κ+2 and c_2 := (8+4β̃) are as in <Ref>. Then for stepsize η̃:= η m, where η := α ^2 γ ^2/256(1+4κ)c_2 T, we have τ≤ T/2 and L(_T) ≤2048(1+4κ) c_2/α ^4 γ^4·1/T^2 = (1/T^2). <Ref> and <Ref> extend Theorem 1 and Corollary 2 in <cit.> from linear predictors to two-layer networks. Another notable difference is that we obtain a sharper stable phase bound (and thus a better acceleration bound) compared to theirs, where we remove a logarithmic factor through a more careful analysis. <Ref> suggests an accelerated risk bound of (1/T^2) by choosing a large stepsize that balances EoS and stable phases. We also show the following lower bound, showing that such acceleration is impossible if <Ref> does not enter the EoS phase. The proof is included in <Ref>. Consider (<ref>) with initialization 𝐰_0 =0 and stepsize η̃>0 for a two-layer network <Ref> satisfying <Ref>. Suppose the training set is given by 𝐱_1=(γ, √(1-γ^2)), 𝐱_2=(γ,-√(1-γ^2) / 2), y_1=y_2=1, 0<γ<0.1 . It is clear that (𝐱_i, y_i)_i=1,2 satisfy <Ref>. If (L(𝐰_t))_t ≥ 0 is non-increasing, then L(𝐰_t) ≥ c_0 / t, t ≥ 1 where c_0>0 is a function of (α, ϕ, _1, _2, γ, κ, β) but is independent of t and η̃. Effect of model rescaling. We conclude this section by discussing the impact of rescaling the model. Specifically, we replace the two-layer network in the mean-field scaling <Ref> by the following f(;) := b·1/m∑_j=1^m a_j ϕ(^⊤^(j)), and evaluate the impact of the scaling factor b on our results. By choosing the optimal stepsize that balances the EoS and stable phases as in <Ref>, we optimize the risk bound obtained by GD with a fixed budget of T steps and get the following bound. Detailed derivations are deferred to <Ref>. L(_T) = ( 1/T^2) if b ≥ 1, (b^-3/T^2 ) if b < 1. This suggests that as long as b≥ 1, we get the same acceleration effect. In particular, the mean-field scaling b=1 <cit.> and the neural tangent kernel (NTK) scaling b=√(m) <cit.> give the same (1/ T^2) acceleration effect. An NTK analysis of large stepsize is included in <cit.> and their conclusion is consistent with ours. Finally, we remark that our analysis holds for any width m and uses techniques different from the mean-field or NTK methods. However, our acceleration analysis only allows linearly separable datasets. § EXPERIMENTS We conduct three sets of experiments to validate our theoretical insights. In the first set, we use a subset of the CIFAR-10 dataset <cit.>, which includes 1,979 randomly selected samples from the "airplane" and "automobile" classes. Our model is a multilayer perceptron (MLP) with two trainable layers and tanh activation functions, with a hidden dimension of 100. The MLP is trained using gradient descent with random initialization, as described in <Ref>. The results are shown in <Ref>. 
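A minimal sketch of this first training protocol is given below (PyTorch, full-batch GD under the logistic loss). The random tensors are only placeholders for the 1,979-sample binary CIFAR-10 subset so that the snippet stays self-contained, and the stepsize value is hypothetical since the exact values are reported only with the figures; the printed quantities are the empirical risk and the normalized margin min_i y_i f(w_t; x_i)/‖w_t‖.

import torch

torch.manual_seed(0)

# Placeholder data standing in for the binary CIFAR-10 subset
# ("airplane" vs. "automobile"); shapes match flattened 32x32x3 images.
n, d, hidden = 1979, 3 * 32 * 32, 100
X = torch.randn(n, d) / d ** 0.5
y = torch.sign(torch.randn(n))                      # labels in {-1, +1}

# Two trainable layers with tanh activation, randomly initialized.
W1 = (torch.randn(d, hidden) / d ** 0.5).requires_grad_()
W2 = (torch.randn(hidden, 1) / hidden ** 0.5).requires_grad_()
params = [W1, W2]

def f(X):
    return (torch.tanh(X @ W1) @ W2).squeeze(-1)

eta = 4.0                                           # hypothetical large stepsize
for t in range(2000):
    q = y * f(X)                                    # per-sample margins y_i f(w_t; x_i)
    risk = torch.nn.functional.softplus(-q).mean()  # logistic loss log(1 + e^{-q})
    g1, g2 = torch.autograd.grad(risk, params)
    with torch.no_grad():
        W1 -= eta * g1
        W2 -= eta * g2
        norm = (W1.norm() ** 2 + W2.norm() ** 2).sqrt()
        if t % 200 == 0:                            # empirical risk and normalized margin
            print(t, risk.item(), (q.min() / norm).item())

With a sufficiently large stepsize, the printed risk typically oscillates at first and then decreases monotonically while the normalized margin keeps growing, mirroring the two phases discussed above.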
In the second set of experiments, we consider an XOR dataset consisting of four samples: _1=(-1,-1), y_1=1; _2=(1,1), y_2=1; _3=(1,-1),y_3=-1; _4=(-1,1), y_4= -1. The above XOR dataset is not linearly separable. We test <Ref> with different stepsizes on a two-layer network <Ref> with the leaky softplus activation (see <Ref> with c=0.5). The network width is m=20. The initialization is random. The results are presented in <Ref>. In the third set of experiments, we consider the same task as in the first set of experiments, but we test <Ref> with different stepsizes on a two-layer network <Ref> with the softplus activation. The network width is m=40. The initialization is random. The results are presented in <Ref>. Margin improvement. <Ref> show that the normalized margin nearly monotonically increases once gradient descent (GD) enters the stable phase, regardless of step size. This observation aligns with our theoretical findings in <Ref>. Fast optimization. From <Ref>, we observe that after GD enters the stable phase, a larger stepsize consistently leads to a smaller empirical risk compared to the smaller stepsizes, which is consistent with our <Ref> and <Ref>. Besides, <Ref> suggest that, asymptotically, GD converges at a rate of (1/(η̃t)) = (1/(η t)) (The width of networks is fixed), which verifies the sharpness of our stable phase bound in <Ref>. § RELATED WORKS In this section, we discuss related papers. Small stepsize and implicit bias. For logistic regression on linearly separable data, <cit.> showed that the direction of small stepsize GD converges to the max-margin solution. Their results were later extended by <cit.> to other algorithms and non-linear models. However, in all of their analysis, the stepsize of GD needs to be small such that the empirical risk decreases monotonically. In contrast, our focus is GD with a large stepsize that induces non-monotonic risk. Large stepsize and EoS. In practice, large stepsizes are often preferred when using GD to train neural networks to achieve effective optimization and generalization performance <cit.>. In such scenarios, the empirical risk often oscillates in the beginning. This phenomenon is named edge of stability (EoS) by <cit.>. The theory of EoS is mainly studied in relatively simplified cases such as one- or two-dimensional functions <cit.>, linear model <cit.>, matrix factorization <cit.>, scale-invariant networks <cit.>, for an incomplete list of references. Compared to them, we focus on a more practical setup of training two-layer non-linear networks with large stepsize GD. There are some general theories of EoS subject to subtle assumptions <cit.>, which are not directly comparable to ours. In what follows, we make a detailed discussion about papers that directly motivate our work <cit.>. Comparison with <cit.>. Both results in <cit.> focused on L-homogenous networks. Specifically, <cit.> showed that a modified version of normalized margin (see <Ref>) induced by GD with small stepsize (such that the risk decreases monotonically) increases, with limiting points of {_t/_t}_t=1^∞ converging to KKT points of a margin-maximization problem. Under additional o-minimal conditions, <cit.> showed that gradient flow converges in direction. Our work is different from theirs in two aspects. First, we allow GD with a large stepsize that may cause risk oscillation. Second, our theory covers non-homogenous predictors, which include two-layer networks with many commonly used activation functions beyond the scope of <cit.>. 
Compared to <cit.>, we only show the improvement of the margin, and our theory is limited to nearly 1-homogenous predictors (<Ref>). It remains open to show directional convergence and to extend our near 1-homogenity condition to a “near L-homogeneity” condition. Comparison with <cit.>. The work by <cit.> studies the convergence of GD in training deep networks under logistic loss. Their results are related to ours as we both consider networks with nearly homogeneous activations and we both have a stable phase analysis (although this is not explicitly mentioned in their paper). However, our results are significantly different from theirs. Specifically, in our notation, they require the homogenous error κ (see <Ref>) to be smaller than (log(1/L(_s)) / _s)≈(γ̅(_s)), where s is the time for GD to enter the stable phase. Note that the margin when GD enters the stable phase could be arbitrarily small. In comparison, we only require the homogenous error to be bounded by a constant. As a consequence, we can handle many commonly used activation functions (see <Ref>) while they can only handle the Huberized ReLU with a small h in <Ref>. Moreover, they require the stepsize η̃ to be smaller than (κ / _s^8), thus they only allow very small stepsize. In contrast, we allow η̃ to be an arbitrarily large constant. Comparison with <cit.>. The works by <cit.> directly motivate our paper. In particular, for logistic regression on linearly separable data, <cit.> showed margin maximization of GD with large stepsize and <cit.> showed fast optimization of GD with large stepsize. Our work can be viewed as an extension of <cit.> from linear predictors to non-linear predictors such as two-layer networks. Besides, our results for margin improvement and convergence within the stable phase (<Ref>) hold for the general dataset, while their results strongly rely on the linear separability of the dataset. § CONCLUSION We provide a theory of large stepsize gradient descent (GD) for training non-homogeneous predictors such as two-layer networks using the logistic loss function. Our analysis explains the empirical observations: large stepsize GD often reveals two distinct phases in the training process, where the empirical risk oscillates in the beginning but decreases monotonically subsequently. We show that the phase transition happens because the average empirical risk decreases despite the risk oscillation. In addition, we show that large stepsize GD improves the normalized margin in the long run, which extends the existing implicit bias theory for homogenous predictors to non-homogenous predictors. Finally, we show that large stepsize GD, by entering the initial oscillatory phase, achieves acceleration when minimizing the empirical risk. § ACKNOWLEDGEMENTS We thank Fabian Pedregosa for his suggestions on an early draft. We gratefully acknowledge the support of the NSF for FODSI through grant DMS-2023505, of the NSF and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning through awards DMS-2031883 and #814639, and of the ONR through MURI award N000142112431. § STABLE PHASE ANALYSIS In this section, we will prove results for a general smooth predictor f(;) under the logistic loss in the stable phase. Before the proof, we introduce some notations here. Notation. We use the following notation to simplify the presentation. * q_i(t) y_i f(_t; _i), q_min(t) min_i∈ [n] q_i(t). * L_t L(_t), ρ_t := _t_2. Then, we have the following expression: L(_t) = 1/n∑_i=1^n ℓ(q_i(t)). 
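For concreteness, these tracked quantities are straightforward to monitor numerically; a minimal NumPy helper for the two-layer network of the main text is sketched below (tanh is used only as a stand-in for the activation ϕ, and the helper is illustrative rather than part of the analysis).

import numpy as np

def tracked_quantities(W, a, X, y):
    """q_i(t), L_t and rho_t for f(w; x) = (1/m) * sum_j a_j * phi(x^T w^(j))."""
    m = W.shape[0]
    f = np.tanh(X @ W.T) @ a / m            # f(w_t; x_i) for every sample
    q = y * f                               # q_i(t) = y_i f(w_t; x_i)
    L = np.logaddexp(0.0, -q).mean()        # L_t = (1/n) sum_i ell(q_i(t)), stable log(1 + e^{-q})
    rho = np.linalg.norm(W)                 # rho_t = ||w_t||_2
    return q, L, rho

# The normalized margin defined earlier is then q.min() / rho.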
Here, we give a summary of this section. The proofs are organized into 5 parts. * In <Ref>, we characterize the decrease of loss L_t. * In <Ref>, we characterize the change of the parameter norm ρ_t. * In <Ref>, we show the convergence of the normalized margin γ̅(_t). * In <Ref>, we characterize the sharp rates of loss L_t and parameter norm ρ_t. * In <Ref>, we give the proof of <Ref>. §.§ Decrease of the Loss In this section, we will show that the loss L_t decreases monotonically in the stable phase. To begin with, we introduce the following definition which is another characterization of β-smoothness. Given a continuously differentiable function f: ^d → and two points ,∈^d, the linearization error of f() with respect to is: ξ[f](,) := f() - f() - ∇ f()^⊤ (-). For a β-smooth function, standard convex optimization theory gives the following linearization error bound. [Linearization error of β-smooth function] For a β-smooth function f:^d →, we have ξ[f](,) := f() - f() - ∇ f()^⊤ (-) ≤β/2 -_2^2, for every and . We first show a stable phase bound for general smooth and Lipschitz predictors. The following is an extension of Lemma 10 in <cit.>. Since we do not require f to be twice differentiable, extra efforts are needed. For the logistic loss ℓ(z):=log (1+exp (-z)), we have 0≤ℓ(z) -ℓ(x)-ℓ^'(x)(z-x)≤ 2ℓ(x)(z-x)^2 for |z-x|<1. See the proof of Proposition 5 in <cit.>. The lower bound is by the convexity of ℓ(·). The next lemma controls the decrease of the risk L_t. Suppose <Ref> hold. If L(_t) ≤1/η̃ρ^2, then we have -η̃(1+ βη̃L(_t)) ∇ L(_t)^2 ≤ L(_t+1) - L(_t) ≤ -η̃(1-(2ρ^2 + β )η̃L(_t)) ∇ L(_t)^2. Particularly, this indicates that if L(_t) ≤1/η̃(2ρ^2 + β), then L(_t+1) ≤ L(_t). By <Ref>, we have ∇ f_2 ≤ρ and f(;) is β-smooth as a function of . Therefore, for every i∈[n] we have |q_i(t+1)-q_i(t)| =|y_i(f(_t+1 ; _i)-f(_t ; _i))| =|∇ f(_t+θ(_t+1-_t) ; _i)^⊤(_t+1-_t)| by intermediate value theorem ≤ρ_t+1-_t ≤ρη̃∇ L_t since _t+1 = _t - η̃∇ L_t ≤ρ^2 η̃L_t ≤ 1. since ∇ L_t ≤ L_t ρ Then by <Ref>, we have ℓ(q_i(t+1)) ≤ℓ(q_i(t)) + ℓ^'(q_i(t)) (q_i(t+1) - q_i(t)) + 2ℓ(q_i(t)) (q_i(t+1) - q_i(t))^2 ≤ℓ(q_i(t)) + ℓ^'(q_i(t)) ⟨ y_i ∇ f(_t; _i), _t+1 - _t ⟩ + |ℓ'(q_i(t))| · | ξ[f](_t, _t+1)| + 2ℓ(q_i(t)) (q_i(t+1) - q_i(t))^2 since q_i(t+1) - q_i(t) = ⟨ y_i ∇ f(_t; _i), _t+1 - _t ⟩ + y_i ξ[f] (_t, _t+1) ≤ℓ(q_i(t)) + ℓ^'(q_i(t)) ⟨ y_i ∇ f(_t; _i), _t+1 - _t ⟩ + ℓ(q_i(t)) (β + 2ρ^2) _t+1 - _t^2 . by <Ref> and the previous inequality Taking an average over all data points, we have L_t+1≤ L_t -η̃∇ L_t^2 + (2ρ^2 + β) η̃^2 L_t ∇ L_t^2, which is equivalent to L_t+1 - L_t ≤ -η̃(1-(2ρ^2 + β) η̃L_t) ∇ L_t^2. We complete the proof of the right hand side inequality. The left hand side inequality can be proved similarly. In detail, we can show that: ℓ(q_i(t+1)) ≥ℓ(q_i(t)) + ℓ^'(q_i(t)) (q_i(t+1) - q_i(t) ≥ℓ(q_i(t)) + ℓ^' (q_i(t)) ⟨ y_i ∇ f(_t;_i), _t+1 - _t ⟩ - |ℓ^' (q_i(t))| · | ξ[f] (_t, _t+1)|. Taking the average over all data points, we have L_t+1≥ L_t -η̃(1+ βη̃L_t) ∇ L_t^2. Now we have completed the proof of <Ref>. §.§ Increase of the Parameter Norm In this section, we demonstrate that the parameter norm, ρ_t, increases monotonically during the stable phase. We introduce a crucial quantity, v_t, defined as the inner product of the gradient and the negative weight vector: v_t ⟨∇ L(_t), -_t ⟩. This quantity, v_t, plays a key role in controlling the increase of the parameter norm. Notably, v_t appears as the cross term in the expression _t+1^2 = _t - η̃∇ L(_t)^2. 
By managing v_t, we can effectively characterize the increase in the parameter norm. Recall that our loss function is ℓ(x) := log(1+e^-x). Inspired by <cit.>, we define the following two auxiliary functions for the logistic loss: ψ(x) -log (ℓ(x)) = -loglog(1+e^-x), x∈, ι(x) ψ^-1(x) = -log (e^e^-x-1), x∈. One important remark is that if we change the loss to the exponential loss, both ψ and ι will be the identity function. Since the logistic loss and the exponential loss have similar tails, our ψ(x) and ι(x) are close to the identity function, i.e., ψ(x) ≈ι(x) ≈ x, for x large enough. Then, we have an exponential-loss-like decomposition of L_t: L_t = 1/n∑_i=1^n ℓ(q_i(t)) = 1/n∑_i=1^n e^-ψ(q_i(t)). These two functions ψ, ι will help us to analyze the lower bound of v_t. First, we list some properties of ψ and ι here. The following claims hold for ℓ, ψ, and ι. * ℓ(x) = e^-ψ(x). * ℓ is monotonically decreasing, while ψ and ι are monotonically increasing. * ψ^' ( ι(x)) = 1/ι^' (x); * ψ^' (x) x is increasing for x∈ (0, +∞). The first two properties are straightforward. For the third property, we apply chain rule on ψ(ι(x)) = x to get ψ^' (ι(x)) ι^' (x) = 1. For the fourth property, notice that ψ^' (x) x = x/(1+e^x)log(1+e^-x). The denominator is positive and decreasing since d/dx[(1+e^x)log(1+e^-x)] = e^x log(1+e^-x) - 1 ≤ e^x e^-x -1 =0. Combining this with the fact that x is positive and increasing, we have completed the proof of <Ref>. Besides, we have the following property of ι. This is the key lemma to handle the homogeneous error. Actually, this lemma is another way to show ι(x) is close to the identity function. For every x∈, we have ι (x)/ι^' (x)≥ x+loglog 2. Recall that ι(x) = -log (e^e^-x-1) , ι^' (x) = e^e^-xe^-x/e^e^-x-1. Let y = e^-x. We have ι (x)/ι^' (x) = -log (e^e^-x-1) (e^e^-x-1)/e^e^-xe^-x= -log (e^y-1)(e^y-1)/e^y y. Define s(y) ι(x)/ι^' (x) - x - loglog 2. Then, s(y) = - log(e^y-1)(e^y-1)/e^y y +log(y) - loglog 2, s^' (y) = log(e^y-1) ·e^y-y-1/e^y y^2_>0. Note that the sign of s^' is determined by log(e^y-1). For 0<e^y≤ 2, s^' (y)≤ 0 and s(y) is decreasing; for e^y≥ 2, s(y) is increasing. Therefore, min_y∈(0,∞) s(y) = s(log 2) = 0. Since x = -log y, we have completed the proof of <Ref>. Another important property of ι is that it can provide a lower bound for q_min(t). For every t≥ 0, we have q_min(t) ≥ι ( -log(L_t) - log n ). We use <ref> to get 1/nℓ(q_min(t)) ≤ L_t ⇒1/ne^-ψ( q_min(t))≤ L_t ⇒ψ(q_min(t)) ≥ -log n - log L_t ⇒ q_min(t) ≥ι ( -log(L_t) - log n ). by <Ref> Then, we complete the proof of <Ref>. Now, we are ready to give a lower bound of v_t. The following lemma is an extension of Corollary E.6 in <cit.>, where they dealt with a homogeneous model and the exponential loss; we extend this to a non-homogeneous model. The key ingredient is <Ref>. Suppose <Ref> holds. Consider v_t:= L(_t), -_t. If L_t ≤1/2n e^κ, then v_t ≥ - L_t log (2 n e^κ L_t)≥ 0. By definition, we have v_t := ⟨∇ L(_t), -_t ⟩ = -1/n∑_i=1^n ℓ'(y_i f(_t; _i)) y_i ⟨∇ f(_t; _i), _t ⟩ =-1/n∑_i=1^n ℓ'(y_i f(_t; _i)) y_if(_t; _i) -1/n∑_i=1^n ℓ'(y_i f(_t; _i)) (y_i ⟨∇ f(_t; _i), _t ⟩ - y_if(_t;_i) ) ≥ -1/n∑_i=1^n ℓ'(y_i f(_t; _i)) y_i f(_t; _i) - κ L_t since |ℓ^' (x) | ≤ℓ(x) and |⟨∇ f(_t;_i) ,_t⟩-f(_t;_i)| ≤κ by <Ref> = 1/n∑_i=1^n e^-ψ(q_i(t))ψ^' (q_i(t)) q_i(t) - κ L_t. since ℓ(·) = exp(-ψ(·)) Applying <Ref> and <Ref>, we have q_i(t) ≥ q_min(t) ≥ι ( -log(nL_t) ) := -log(e^nL_t-1) ≥ -log(e^1/2-1) ≥ 0 . 
Then we can apply <Ref> to get ψ^' (q_i(t)) q_i(t) ≥ψ^'(ι ( -log(nL_t) )) ι ( -log(nL_t) ) = ι ( -log(nL_t) )/ι^' ( -log(nL_t) ). Invoking <Ref>, we have ι ( -log(nL_t) )/ι^' ( -log(nL_t) )≥ -log(nL_t) + loglog 2 ≥-log(nL_t) + loglog e^1/2 = - log (2nL_t) . Putting the above two inequalities together, we have ψ^' (q_i(t)) q_i(t) ≥ -log(2nL_t), for every i=1,…,n. Plugging this back to the bound of v_t, we get v_t ≥ -1/n∑_i=1^n e^-ψ(q_i(t))log(2nL_t) - κ L_t = -L_t log(2n L_t) - κ L_t = - L_t log(2n e^κ L_t) ≥ 0. This completes the proof of <Ref>. Right now, we get a lower bound for v_t, which is the cross term in the expression of _t+1^2. The next lemma controls the increase of the parameter norm ρ_t using v_t and L_t. Suppose <Ref> hold. If L_t≤min{1/2n e^κ, 1/η̃(4ρ^2 + 2 β)}, then 0≤ 2η̃v_t≤ρ_t+1^2 - ρ_t^2 ≤ 2η̃v_t ·( 1- 1/2log(2ne^κL_t)). By definition, we have ρ_t+1^2 - ρ_t^2 = 2η̃⟨∇ L_t, -_t ⟩ + η̃^2 ∇ L_t^2 =2η̃v_t + η̃^2 ∇ L_t^2≥ 2η̃v_t ≥0 , where the last inequality is by <Ref>. Besides, ρ_t+1^2 - ρ_t^2 = 2η̃v_t ( 1+ η̃∇ L_t^2/2v_t) ≤ 2η̃v_t ( 1+ η̃L_t^2 ρ^2/2v_t) by ℓ'≤ℓ, <Ref>, and <Ref> ≤2η̃v_t (1+ L_t/2v_t) by L_t≤1/η̃(4ρ^2 + 2 β) ≤ 2η̃v_t ( 1- 1/2log(2ne^κL_t)). by <Ref> This completes the proof of <Ref>. §.§ Convergence of the Margin In this section, we show that the normalized margin of a general predictor converges in the stable phase. Recall that we define the (normalized) margin as γ̅() := min_i∈ [n] y_i f(;_i)/_2. However, this normalized margin is not a smooth function of . Instead, we consider a smoothed margin γ^a as an easy-to-analyze approximator of the normalized margin <cit.> γ^a() := -log L(_t)/_2. We see that γ^a is a good approximator of γ̅. We can then use γ^a to analyze the convergence of the normalized margin since they share the same limit (if it exists). While γ^a is relatively easy to analyze for gradient flow <cit.>, analyzing that for GD with a large (but fixed) stepsize is hard. To mitigate this issue, we construct another two margins that work well with large stepsize GD following the ideas of <cit.>. Under <Ref>, we define an auxiliary margin as γ^b() -log(2ne^κL())/, and a modified margin as γ^c () := e^Φ(L())/, where Φ(x) := log ( -log(2ne^κx)) + 1+(4ρ^2 + 2β) η̃/log(2ne^κx). These two margins provide a second-order correction when viewing large stepsize GD as a first-order approximation of gradient flow. In the following discussion, we will show that γ̅()≈γ^a()≈γ^b()≈γ^c(). At last, we will use the convergence of γ^c(_t) to prove γ̅(_t) converges. The following lemma shows that γ̅()≈γ^a(). For the smooth margin γ^a(_t) defined in <ref> and the normalized margin γ̅(_t) defined in <ref>, we have * When L_t ≤1/2n, we have q_min(t) ≤ -log L_t ≤log(2n) + q_min(t), and γ̅(_t) ≤γ^a(_t) ≤γ̅(_t) + log(2n)/ρ_t. * Assume <Ref> holds. If L_t → 0, then |γ^a(_t) - γ̅(_t)| → 0. To prove the first claim, notice that L_t≤1/2nℓ(q_min(t)) = log (1+ exp(-q_min(t)))≤ nL_t≤1/2. Therefore we have e^-q_min(t)≤ e^1/2 -1 ≤ 1. Using x/2≤log(1+x) ≤ x for 0≤ x≤ 1, we get 1/2 e^-q_min(t)≤ℓ(q_min(t)) = log(1+e^-q_min(t)) ≤ e^-q_min(t). Then we can bound L_t by 1/2ne^-q_min(t)≤1/nℓ(q_min(t)) ≤ L_t ≤ℓ(q_min(t)) ≤ e^-q_min(t), which is equivalent to q_min(t) ≤ -log L_t ≤log(2n) + q_min(t). Dividing both sides by ρ_t proves the second claim: γ̅(_t) := q_min(t)/ρ_t≤γ^a(_t) := -log L_t/ρ_t≤γ̅(_t) + log(2n)/ρ_t = log (2n) + q_min(t)/ρ_t. For the last claim, we only need to show that ρ_t →∞. This is because if L_t→ 0, we have for any i∈ [n], y_i f(_t; x_i) →∞. 
Using y_i f (_t; x_i) ≤ C_r,κ_t+C_r from <Ref>, we have ρ_t = _t_2 →∞. Now we have completed the proof of <Ref>. The following lemma shows that γ^c() ≈γ̅(). Suppose that <Ref> holds. Let γ^c(_t) be the modified margin as defined in <Ref> and γ^b(_t) be the auxiliary margin as defined in <Ref>. If L_t ≤1/2ne^κ+2, there exists a constant c such that γ^c (_t) ≤γ^b(_t)≤γ̅(_t) ≤(1+ c/log (1/L(_t)))γ^c(_t).   Step 1. Proof of the first inequalities. Notice that e^Φ(L_t) = - log(2n e^κ L_t) ·exp( 1+(4ρ^2 + 2β) η̃/log(2ne^κ L_t)) using <Ref> ≤ -log(2n e^κ L_t) since L_t≤1/2ne^κ, exp( 1+(4ρ^2 + 2β) η̃/log(2ne^κ L_t))≤ 1, and log(2ne^κL_t) >0 ≤ -log(L_t) - log(2n) ≤ q_min(t). By argument 1 in <Ref> By the definition of γ^b and γ^c as in <Ref>, we have γ^c (_t) := e^Φ(L(_t))/_t≤-log(2n e^κ L_t)/_tγ^b(_t) ≤q_min(t)/_t =: γ̅(_t). This completes the proof of the first two inequalities. Step 2. Proof of the third inequality. First, we have γ̅(_t)/γ^c(_t) = γ̅(_t)/γ^b(_t)·γ^b(_t)/γ^c(_t) = q_min(t)/ - log (2ne^κL_t)·exp( 1+ (4ρ^2 + 2β)η̃/- log (2ne^κL_t)) By the definitions of γ̅, γ^b, γ^c ≤-log(L_t)/- log (2ne^κL_t)·exp( 1+ (4ρ^2 + 2β)η̃/- log (2ne^κL_t)) Since q_min(t) ≤log(-L_t) by <Ref> = (1+ log(2ne^κ)/- log (2ne^κL_t)) ·exp( 1+ (4ρ^2 + 2β)η̃/- log (2ne^κL_t)). To simplify the notation, we let c_1 1+ (4ρ^2 + 2β)η̃ and c_2 = log(2ne^κ). Since L_t≤1/2ne^κ+2⇒ - log (2ne^κL_t) ≥ 2>1, we have 1+ (4ρ^2 + 2β)η̃/- log (2ne^κL_t)= c_1/- log (2ne^κL_t)≤ c_1. Besides, given x<c, we have e^x≤ 1+e^c x. Therefore, exp( 1+ (4ρ^2 + 2β)η̃/- log (2ne^κL_t)) = exp( c_1/- log (2ne^κL_t)) ≤ 1+ c_1exp(c_1)/- log (2ne^κL_t). Plugging this into the bound for γ̅(_t)/ γ^c(_t), we get γ̅(_t)/γ^c(_t) = (1+ c_2/- log (2ne^κL_t)) ·exp( c_1/- log (2ne^κL_t)) ≤(1+ c_2/- log (2ne^κL_t)) ·( 1+ exp(c_1) c_1/- log (2ne^κL_t)) ≤ 1+ c_2 + exp(c_1)c_1+c_2c_1 exp(c_1)/- log (2ne^κL_t) Since -log (2ne^κL_t) ≥ 1 = 1+ c_2 + exp(c_1)c_1+c_2c_1 exp(c_1)/- log L_t-c_2. Note that - log L_t-c_2 ≥ 2>1. Because x/x-c_2 is decreasing when x≥ c_2+1, we have -log L_t/- log L_t-c_2≤ c_2+1 1/- log L_t-c_2≤c_2+1/-log L_t. Plug this inequality into the previous bound for γ̅(_t)/γ^c(_t), we get γ̅(_t)/γ^c(_t)≤ 1+ (c_2 + exp(c_1)c_1 + c_2c_1exp(c_1))(c_2+1)/-log L_t. Let c (c_2 + exp(c_1)c_1 + c_2c_1exp(c_1))(c_2+1). This completes the proof of the third inequality, and thus completes the proof of <Ref>. The next lemma shows the convexity of Φ defined in <Ref>. The convexity will help us analyze the change of γ^c(_t) in the gradient descent dynamics. Specifically, we are going to use the property that Φ(x) - Φ(y) ≥Φ^' (y)(x-y), for all x,y. The function Φ(x) defined in <Ref> is convex for 0<x< 1/2ne^2+κ. Check that Φ'(x)= 1 - 1/log (2ne^κ x)(1+(4ρ^2 +2 β)η̃)/x log (2ne^κ x), and that Φ^''(x) = (1+(4ρ^2 +2 β)η̃)·(2+ log(2n e^κx)) - log^2(2n e^κx) - log(2n e^κx)/x^2 log^3(2n e^κx). Note that when x≤1/2ne^2+κ, we have log(2n e^κx) ≤ -2, which implies 2+ log(2n e^κx)≤ 0, log(2n e^κx)<0, and - log^2(2n e^κx) - log(2n e^κx) <0. Plugging these into the previous equality, we have Φ^''(x) ≥ 0 when 0<x< 1/2ne^2+κ. Now we have completed the proof of <Ref>. Before we dive into the proof of the monotonic increasing γ^c(_t), we show that γ^c is bounded first. The convergence of γ^c is a direct consequence of the monotonic increasing and the boundedness of γ^c. When L_t ≤{1/2n e^κ, 1/η̃(4ρ^2 + 2 β)} for t≥ s, there exists B_0 such that γ^c(_t)≤γ^b(_t)≤γ^a(_t) ≤γ̅(_t) +log 2n/ρ_s≤ B_0. Apply <ref>, we have _t≥ρ_t≥ρ_s. 
Then we can apply <ref> and there exists a constant C_ρ_s,κ such that for all i, |y_i f(_t;_i)| ≤ C_ρ_s,κ_t. Hence, γ̅(_t) = min_i∈[n] y_i f(_t;_i)/_t≤ C_ρ_s, κ. Besides, by <Ref>, we have γ^a (_t) ≤γ̅(_t) +log 2n/ρ_t≤ C_ρ_s, κ + log 2n/ρ_s. By <Ref>, we have γ^c(_t) ≤γ^b(_t) ≤γ^a(_t) ≤ C_ρ_s, κ + log 2n/ρ_s. Let B_0 = C_ρ_s, κ + log 2n/ρ_s. Then, we complete the proof of <Ref>. The following lemma is a variant of Proposition 5, item 1, in <cit.> and Lemma 4.8, in <cit.>. Before the lemma, we need some auxiliary definitions. let us define _t _t/_t, _t _t _t^⊤ (-∇ L_t), _t ( -_t _t^⊤) (-∇ L_t). Therefore, we have ∇ L_t^2 = _t^2 + _t^2. The key point of this decomposition is that we consider the gradient of the loss function as a sum of two orthogonal components. The first component _t is the component in the direction of _t, and the second component _t is the component orthogonal to _t. We will show that the modified margin γ^c(_t) is monotonically increasing. And the increase of γ^c(_t) is lower bounded by a term that depends on _t^2. Suppose <Ref> holds. If there exists s such that L_s ≤min{1/e^κ+22n, 1/η̃(4ρ^2 + 2 β)}, then for t≥ s, we have * L_t+1≤ L_t. * v_t ≥ -L_t log(2n e^κ L_t) ≥ 0. * ρ_t+1^2 - ρ_t^2 ≥ 2η̃v_t. * logγ^c(_t+1) - logγ^c (_t) ≥ρ_t^2/v_t^2_t^2 logρ_t+1/ρ_t. As a consequence, γ^c(_t) admits a finite limit. The first claim is by <Ref> and induction. The second and the third claims are consequences of <Ref>, respectively. We now prove the last claim. The proof is divided into two steps. * In step 1, we constuct an auxilary function Ψ(x) which is close to -Φ^'(x) and show that: Ψ(L_t)(L_t+1 - L_t) ≤ - 1/ρ_t^2(ρ_t^2 - ρ_t+1^2) ( 1/2 + ρ_t^2/2v_t^2_t^2 ). * In Step 2, we show that Ψ(x)≤ -Φ^'(x) and use this property to get the monotonicity of the modified margin γ^c. Step 1. Construct Ψ(x). By <Ref>, we have L_t+1 - L_t:= L(_t+1) - L(_t) ≤ -η̃(1-(2ρ^2 + β )η̃L_t) ∇ L_t^2. Multiplying both sides by 2 v_t 1- 1/2log (2ne^κ L_t)/1-(2ρ^2 + β)η̃L_t>0, we get 1- 1/2log (2ne^κ L_t)/1-(2ρ^2 + β)η̃L_t 2v_t (L_t+1 - L_t) ≤ -2η̃v_t ( 1- 1/2log (2ne^κ L_t)) ∇ L_t^2. From <Ref> we have 0≤ρ_t+1^2 - ρ_t^2 ≤ 2η̃v_t ( 1- 1/2log (2ne^κ L_t)). Using the above we get 1- 1/2log (2ne^κ L_t)/1-(2ρ^2 + β)η̃L_t 2v_t (L_t+1 - L_t) ≤ -(ρ_t+1^2 - ρ_t^2) ∇ L_t^2. Recall that ∇ L_t^2 = _t^2 + _t^2. For _t, we have _t = 1/ρ_t⟨_t, -∇ L_t ⟩ = v_t/ρ_t. Then we can decompose ∇ L_t^2 as ∇ L_t ^2 = _t^2 + _t^2 = v_t^2/ρ_t^2 + _t^2. Plugging this into <Ref> and dividing both two sides by 2v_t^2, we have 1- 1/2log (2ne^κ L_t)/(1-(2ρ^2 + β)η̃L_t)v_t (L_t+1 - L_t) ≤ -1/ρ_t^2(ρ_t+1^2 - ρ_t^2)( 1/2 + ρ_t^2/2 v_t^2_t^2) . By <Ref>, we have v_t ≥ - L_t log(2ne^κ L_t). Define Ψ(x) := -1- 1/2log (2ne^κ x)/(1-(2ρ^2 + β)η̃x)x log (2ne^κ x). Then we have Ψ(L_t) (L_t+1 - L_t) -1- 1/2log (2ne^κ L_t)/(1-(2ρ^2 + β)η̃L_t) L_t log (2ne^κ L_t) (L_t+1 - L_t) ≤1- 1/2log (2ne^κ L_t)/(1-(2ρ^2 + β)η̃L_t)v_t (L_t+1 - L_t) ≤ -1/ρ_t^2(ρ_t^2 - ρ_t+1^2)( 1/2 + ρ_t^2/2 v_t^2_t^2). Step 2. Show Ψ(x)≤ -Φ^'(x). We are going to show that Ψ(x) ≤ - Φ^' (x). Note that when 0<x≤min{1/e^κ+22n, 1/η̃(4ρ^2 + 2 β)}, we have log (2n e^κx) < 0 and 1- (2ρ^2 + β) η̃x) ≥1/2>0. Therefore, we have Ψ(x) = 1- 1/2log (2ne^κ x)/1-(2ρ^2 + β)η̃x_ J >0·-1/x log(2ne^κx)_>0. To get an upper bound of Ψ(x), we just need an upper bound of J. Let a := -1/2log(2ne^κx)>0 and b := (2ρ^2 + β) η̃x∈ (0,1/2]. Then we invoke <Ref> to get J := 1+a/1-b≤ 1+2a + 2b = 1 - 1/log(2ne^κx) + (4ρ^2 + 2β) η̃x. Recall that x ≤1/e^κ+22n≤1/2n e^κ and 2n e^κ≥ 1. 
Then we apply <Ref> to get x≤-1/log (2n e^κx). Plugging this into the bound of J, we get J ≤ 1 - 1/log(2ne^κx) + (4ρ^2 + 2β) η̃x ≤ 1 - 1/log(2ne^κx)(1+ (4ρ^2 + 2β) η̃). Plugging (<ref>) into (<ref>), we have Ψ(x) = J ·-1/x log(2ne^κx)≤ - 1 - 1/log (2ne^κ x)(1+(4ρ^2 +2 β)η̃)/x log (2ne^κ x) = - Φ'(x), which verifies that Ψ(x) ≤ - Φ^' (x). By this and <Ref>, we have Φ'(L_t) (L_t+1 - L_t) + φ^'(ρ_t^2) (ρ_t+1^2 - ρ_t^2) ( 1/2 + ρ_t^2/2 v_t^2_t^2) ≥ 0, where φ(x) = -log x = log(1/x). Recall that for 0<x≤1/2ne^κ+2, Φ(x) is convex by <Ref>. By convexity of φ and Φ, we have Φ(L_t+1) - Φ(L_t) + (log1/ρ_t+1^2 - log1/ρ_t^2) ( 1/2 + ρ_t^2/2 v_t^2_t^2) ≥ 0. By the definition of γ^c in <Ref>, this can be rewritten as logγ^c(_t+1) - logγ^c (_t) = ( Φ(L_t+1) - Φ(L_t)) + (log1/ρ_t+1 - log1/ρ_t) ≥ - (log1/ρ_t+1^2 - log1/ρ_t^2)ρ_t^2/2 v_t^2_t^2 = ρ_t^2/v_t^2_t^2 logρ_t+1/ρ_t ≥ 0, where the last inequality is because of <Ref>. We have shown that γ^c(_t) is monotonically increasing. By <Ref>, γ^c(_t) is bounded. Therefore γ^c(_t) admits a finite limit. This completes the proof of <Ref>. §.§ Sharp rates of Loss and Parameter Norm Right now, we have already proved that γ^c(_t) is monotonically increasing and bounded, which indicates γ^c(_t) converges. However, if we want to show that γ̅(_t) converges, we still need to verify that L_t→ 0, which is the crucial condition for γ^c(_t), γ^b(_t), γ^a(_t), and γ̅(_t) to share the same limit, by <Ref> and <Ref>. Fortunately, with the monotonicity of γ^c(_t), we can prove that L_t converges to zero and even characterize the rate of L_t. Suppose <Ref> holds. If there is an s such that L(_s) ≤min{1/e^κ+22n, 1/η̃(4ρ^2 + 2 β)}, then for every t≥ s we have 1/1/ L(_s) + 3η̃ρ^2(t-s)≤ L(_t) ≤2/(t-s) η̃γ^c(_s)^2. That is, L(_t) = Θ( 1/t) → 0 as t→∞. By <Ref> and <Ref> in the proof of <Ref>, we know L_t is decreasing and L_t+1 - L_t ≤ -η̃/2∇ L_t^2 ≤ -η̃/2_t_2^2≤ -η̃/2v_t^2/ρ_t^2. We will establish an upper bound for ρ_t first. Note that γ^c(_t) is increasing for t≥ s by <Ref> and γ^b(_t)≥γ^c(_t) by <Ref>. By <Ref>, we have ρ_t = -log(2ne^κL_t)/γ^b(_t)≤-log(2ne^κL_t)/γ^c(_t)≤-log(2ne^κL_t)/γ^c(_s). Combining this with <Ref>, we have v_t/ρ_t≥-L_tlog(2ne^κL_t)/-log(2ne^κL_t)/γ^c(_s) = L_t γ^c(_s). Plugging this into (<ref>), we have L_t+1 -L_t ≤ - η̃/2 L_t^2 γ^c(_s)^2, which implies η̃γ(_s)^2/2 ≤L_t -L_t+1/L_t^2 ≤L_t -L_t+1/L_t L_t+1 Since L_t+1≤ L_t = 1/L_t+1 - 1/L_t, t≥ s. Telescoping the sum from s to t, we have (t-s) η̃γ^c(_s)^2/2≤1/L_t - 1/L_s≤1/L_t. Therefore we have L_t≤2/(t-s) η̃γ^c(_s)^2. Next we show the lower bound on the risk. By <Ref> we have L_t+1 - L_t ≥ - η̃( 1+ βη̃L_t) ∇ L_t^2 ≥-̃3/2η∇ L_t^2. Observe that under <Ref>, ∇ L_t = 1/n∑_i=1^n ℓ'( q_i(t)) y_i ∇ f(_t; _i)≤ρ L_t. Then we have L_t+1 - L_t ≥ - η̃3/2ρ^2 L_t^2, t≥ s. Let L̃_t := 3 η̃ρ^2/2 L_t, we have L̃_s ≤3 η̃ρ^2/21/η̃(4ρ^2 + 2β)≤3/8≤1/2. Furthermore, since L_t decreases monotonically, L̃_t≤L̃_s ≤1/2. The inequality becomes L̃_t+1 - L̃_t ≥ - L̃_t^2. Therefore, let c = 1/L̃_s and apply <Ref>, we have for any t≥ s, L̃_t≥1/c+ 2 (t-s). This is equivalent to L_t ≥1/1/ L_s + 3η̃ρ^2(t-s). We have completed the proof of <Ref>. Furthermore, we can characterize the order of ρ_t in the stable phase. Suppose <Ref> holds. If there is s such that L(_s) ≤min{1/e^κ+22n, 1/η̃(4ρ^2 + 2 β)}, then for t≥ s we have ρ_t = Θ(log(t)). Note that γ^c(_t) is increasing for t≥ s by <Ref> and γ^b(_t)≥γ^c(_t) by <Ref>. Therefore, ρ_t ≤-log(2ne^κL_t)/γ^b(_t)≤-log(2ne^κL_t)/γ^c(_t)≤-log(2ne^κL_t)/γ^c(_s). 
Combining this with <Ref>, we have ρ_t ≤log1/L(_s) + 3η̃ρ^2 (t-s)/2n e^κ/γ^c(_s) = (log(η̃t)). Besides, we have q_min≥ι(log1/L- log n) by <Ref> and q_min≤ B_0 ρ_t by <Ref>. Therefore we have ρ_t ≥ι(log1/nL_t)/B_0≥log1/nL_t/2B_0≥log(t-s)η̃γ^c(_s)^2/2n/2B_0 = Ω(log( t)), where the second inequality is because for ι(x) defined in (<ref>), ι(x) ≥x/2 for x ≥ 0.6, and the third inequality is by <Ref>. Combining them, we get ρ_t = Θ(log( t)). This completes the proof of <Ref>. §.§ Proof of Theorem <ref> We prove the items one by one. * The monotonicity of L_t comes from the result of <Ref> directly. * Item 1 is due to <Ref> . * For item 2, the monotonicity of ρ_t comes from the result of <Ref> and the order is due to <Ref>. * For item 3, we first know that L_t→ 0 by <Ref>. Then, by <Ref> and <Ref>, we know that γ^c(_t) converges. Combining these with <Ref> and <Ref>, we know that γ^c(_t) is an (1+O(1/(log1/L(_t)))-multiplicative approximation ofγ̅(_t), and γ̅(_t) shares the same limit as γ^c(_t). § EOS PHASE ANALYSIS In this section, we focus on the linearly separable case, that is, we work under <Ref>. We mainly follow the idea of <cit.> for the proof. In detail, we consider a comparator := _1 + _2, where where _1 := [ _1^(1); ⋮; _1^(m) ], with _1^(j) := a_j log(γ^2 η T) + κ/αγ·_*, j=1,… ,m, and _2 := [ _2^(1); ⋮; _2^(m) ], with _2^(j) := a_j η/ 2γ·_*, j=1,… ,m. Consider the following decomposition, _t+1-^2 =_t-^2+2 mη⟨∇ L (_t ), -_t⟩+m^2η^2∇ L(_t )^2 =_t-^2+ 2 mη⟨∇ L(_t), _1-_t⟩_=: I_1(_t)+ m η(2⟨∇ L(_t ), _2⟩+ m η∇ L(_t)^2_=: I_2(_t)) . We aim to prove I_1(_t) ≤1/T - L(_t) and I_2(_t) ≤ 0. Then we can get a bound for the average loss by telescope summing the decomposition. Here we also introduced the following vector _*: _* := [ a_1_*; ⋮; a_n_* ] We can observe that _1 = log(γ^2 η T)+ κ/αγ_* and _2 = η/2γ_*. For _1 defined in (<ref>), we have I_1() := ⟨∇ L ( ), _1-⟩≤1/γ ^2 η T - L(). Since L is averaged over the individual losses incurred at the data (_i, y_i)_i=1^n and gradient is a linear operator, it suffices to prove the claim assuming there is only a single data point (, y). Then by <Ref>, we have ⟨ y, _* ⟩≥γ >0. Then the loss becomes L() = ℓ (y f(; )) = ℓ (y 1/m∑_j=1^m a_j ϕ(^⊤^(j)) ). Now we expand I_1(): I_1() :=⟨∇ L(), _1 - ⟩ =ℓ'(yf(;))⟨ y∇ f(;), _1 - ⟩ = ℓ' (yf(;) ) 1/m∑_k=1^m a_k yϕ'(x^⊤^(k)) ^⊤ ( _1^(k) - ^(k)) = ℓ' ( y f(; ) ) [ 1/m∑_k=1^m a_k y (ϕ'(^⊤^(k)) ^⊤_1^(k) + ϕ(^⊤^(k)) - ϕ'(^⊤^(k)) ^⊤^(k) )_=:J_1 - 1/m∑_k=1^m a_k yϕ(^⊤^(k)) _=: J_2]. By definition we have J_2 = y f(; ). As for J_1, using ϕ'≥α and a_k y x^⊤_1^(k)≥ 0 by <Ref>, we have J_1 := 1/m∑_k=1^m a_k y (ϕ'(^⊤^(k)) ^⊤_1^(k) + ϕ(^⊤^(k)) - ϕ'(^⊤^(k)) ^⊤^(k) ) ≥1/m∑_k=1^m a_k α y ^⊤_1^(k) + 1/m∑_k=1^m a_k y ( ϕ(^⊤^(k)) - ϕ'(^⊤^(k)) ^⊤^(k) ) ≥1/m∑_k=1^m a_k^2 αlog(γ ^2 η T)+κ/αγ y ^⊤_* - 1/m∑_k=1^m |a_k| κ since |ϕ(^⊤^(k)) - ϕ'(^⊤^(k)) ^⊤^(k)|≤κ by <Ref> ≥log(γ ^2 η T) +κ - κ since y^⊤_* ≥γ and ∑_k=1^m a_k^2 = m = log (γ ^2 η T). Plugging in J_2 = y f(; ) and (<ref>) into (<ref>), we get I_1() = ⟨∇ L(), _1 - ⟩ = ℓ'(yf(;))(J_1 -J_2) ≤ℓ' ( y f(; ) ) [ log(γ ^2 η T) - y f(; ) ] since ℓ'<0 ≤ℓ(log(γ ^2 η T)) - ℓ( y f(; )) since ℓ is convex ≤1/γ ^2 η T - L(). where in the last inequality, we use ℓ(x) ≤exp(-x) and we only consider a single data point. This completes the proof of <Ref>. For _2 defined in (<ref>), for every , I_2():=2⟨∇ L(), _2 ⟩ + m η∇ L()^2 ≤ 0. For simplicity, we define g_i(^(j)) ℓ'(y_i f(; _i)) ϕ'(_i^⊤^(j)) . Note that -1≤ℓ'(·)≤ 0 and 0<α≤ϕ'(·) ≤ 1, we have -1≤ g_i(^(j))≤ 0. 
Under this notation, we have ∂ L()/∂_i = 1 /n∑_i=1^n ℓ'(y_i f(; _i)) y_i a_j m^-1ϕ'(_i^⊤^(j)) _i = 1/n∑_i=1^n g_i(^(j)) a_j m^-1 y_i _i. So we have I_2() := 2⟨∇ L(), _2 ⟩ + mη∇ L()^2 = 1/m∑_j=1^m [ 2/n∑_i=1^n g_i(^(j)) a_j y_i·_i^⊤_2^(j) + η1/n∑_i=1^n g_i(^(j)) a_jy_i _i ^2 ] . For the term inside the bracket, we have 2/n∑_i=1^n g_i(^(j)) a_j y_i·_i^⊤_2^(j) + η1/n∑_i=1^n g_i(^(j)) a_jy_i _i ^2 = 2/n∑_i=1^ng_i(^(j)) a_j y_i ·_i^⊤η/2γ a_j _* + η1/n∑_i=1^n g_i(^(j)) a_jy_i _i ^2 since _2^(j) := η a_j/2 γ_* by (<ref>) ≤ 2/n∑_i=1^ng_i(^(j)) a_j ^2 η/2γγ + η1/n∑_i=1^n g_i(^(j)) a_jy_i _i ^2 since g_i(·)≤ 0 and y_ix_i^⊤_* ≥γ = η( 1/n∑_i=1^n g_i(^(j)) + 1/n∑_i=1^n g_i(^(j)) y_i _i ^2) since a_j^2=1 ≤ η( 1/n∑_i=1^n g_i(^(j)) + 1/n∑_i=1^n g_i^2(^(j))) since |g_i(·)|≤ 1 and y≤ 1 ≤ 0. since -1≤ g_i(·)≤ 0 Hence, we prove that I_2()≤ 0. This completes the proof of <Ref>. For every η >0 and = _1 + _2 such that _1 := [ _1^(1); ⋮; _1^(m) ], with _1^(j) := a_j log(γ^2 η t) + κ/αγ·_*, j=1,… ,m, and _2 := [ _2^(1); ⋮; _2^(m) ], with _2^(j) := a_j η/ 2γ·_*, j=1,… ,m. we have: _T-^2/2 m η T+1/T∑_k=0^T-1 L (^(k) ) ≤1+8log ^2(γ^2 η T)/α ^2+ 8κ^2/α ^2+η^2 /γ^2 η T + _0^2/ mη T, for all T. By <Ref> and <Ref>, we have _t+1-^2 =_t-^2+ 2mη I_1(_t) +η m I_2(_t) ≤_t-^2+2 mη I_1(_t) ≤_t-^2+ 2 mη ( 1/γ^2 η T - L(_t) ). Telescoping the sum, we get _T-^2/2 mη+∑_t=0^T-1 L(_t) ≤ 1 +_0-^2/2 m η. By <Ref>and <Ref>, we have _0 - ^2 ≤ 2 _0^2 + 2^2 ≤2 _0_2^2 +4_1^2 + 4_2^2 = 2_0_2^2 + 8mlog (γ^2 η T)^2 + 8mκ^2/α ^2 γ ^2 + mη^2 /γ^2, which implies that _T-^2/2 m η t+1/T∑_k=0^T-1 L (^(k) ) ≤1+8log ^2(γ^2 η T)/α ^2+ 8κ^2/α ^2+η^2 /γ^2 η T + _0^2/ mη T. We complete the proof of <Ref>. §.§ Proof of Theorem <ref> By <Ref>, we have 1/T∑_k=0^T-1 L (^(k) ) ≤1+8log ^2(γ^2 η T)/α ^2+ 8κ^2/α ^2+η^2 /γ^2 η T + _0^2/ mη T. This completes the proof. § PHASE TRANSITION ANALYSIS In this section, we will analyze the phase transition. In detail, we follow the idea of <cit.> and apply the perceptron argument <cit.> to locate the phase transition time. Compare to the previous EoS phase analysis, we need an extra assumption on the smoothness of the activation function, which is the <Ref>. To proceed, let us define the following quantities for the GD process: G(): =1/n∑_i=1^n 1/1+exp( y_i f(; _i) ), F() :=1/n∑_i=1^n exp(-y_i f(; _i)). Due to the self-boundedness of the logistic function, we can show that G(), L(), F() are equivalent in the following sense. 1. G() ≤ L() ≤ F(). 2. αγ G()≤√(m)∇ L()≤ G(). 3. If G() ≤1/2n, then F() ≤ 2G(). The first claim is by the property of the logistic loss. For the second one, ∇ L()^2 = ∑_j=1^m 1/n∑_i=1^n ℓ'(y_i f(; _i)) · y_i · a_j m^-1ϕ(_i^⊤^(j)) _i _2^2 ≤∑_j=1^m ( 1/n∑_i=1^n ℓ'(y_i f(; _i)) · m^-1 )^2 since y_i a_j ϕ(_i^⊤^(j)) _i ≤ 1 = 1/m G^2(). Besides, we have √(m)∇ L() ≥⟨ -∇ L (), _* ⟩ since _*≤√(m) = -1/nm∑_i=1^n ∑_j=1^m ℓ'(y_if(; _i)) y_i ϕ'(_i^⊤_*) _i^⊤_* ≥αγ1/n∑_i=1^n 1/1+exp(y_i f(; _i)) since ϕ^'≥α and y_i _i^⊤^* ≥γ = αγ G(). For the third claim, by the assumption, we have 1/n·1/1+ exp (y_i f(; _i) )≤ G() ≤1/2n, which implies that y_i f(; _i) ≥ 0, ∀ i ∈ [n]. Therefore, G() = 1/n∑_i=1^n 1/1+exp ( y_i f(; _i) )≥1/n∑_i=1^n 1/2exp ( y_i f(; _i) ) = 1/2 F(). We complete the proof of <Ref>. The key ingredient of the phase transition analysis is the following lemma. The main idea is to consider the gradient potential G() instead of the loss function L() in EoS phase. And this will decrease the order of the bound of phase transition time from Õ(η^2) to Õ(η). 
For every η, we have _t≤√(m)·2+8log (γ^2 η t)/α+ 8κ/α+4η/γ + 2_0. By <Ref>, we have _t-^2/2 m η t≤_t-^2/2 m η t+1/t∑_k=0^t-1 L (^(k) ) ≤1+8log ^2(γ^2 η t)/α ^2+ 8κ^2/α ^2+η^2 /γ^2 η t + _0^2/ mη t. Besides, we know that ^2 ≤ 2_1^2 + 2_2^2= 4mlog (γ^2 η t)^2 + 4mκ^2/α ^2 γ ^2 + mη^2 /2γ^2. Combining them, we have _t^2 ≤ 2_t - ^2 + 2^2 ≤ m·2+24log ^2(γ^2 η t)/α ^2+ 24κ^2/α ^2+3η^2 /γ^2 + 2_0^2. Hence, we can get a bound for _t. _t≤√(m)·2+8log (γ^2 η t)/α+ 8κ/α+4η/γ + 2_0. Now we have completed the proof of <Ref>. For every η, we have 1/t∑_k=0^t-1 G(^(k)) ≤⟨_t, _* ⟩ - ⟨_0, _* ⟩/m αγη t≤√(m)_t- ⟨_0, _* ⟩/m αγη t, t≥ 1. Additionally, we have 1/t∑_k=0^t-1 G(^(k)) ≤2+8 log (γ^2 η t)/α + 8κ/ α +4η/αγ^2 η t + 3 _0/αγη t, t ≥ 1. This is from the perceptron argument <cit.>. Specifically, ⟨_t+1, _* ⟩ = ⟨_t, _* ⟩ - mη⟨∇ L(_t), _* ⟩ = ⟨_t, _* ⟩ - η∑_i=1^n∑_k=1^m a_k^2ℓ'(y_i f(_t;_i))y_i ϕ(_i^⊤_t^(k)) ⟨_i, _* ⟩ ≥⟨_t, _* ⟩ - η∑_i=1^n∑_k=1^m a_k^2ℓ'(y_i f(_t;_i))αγ ≥⟨_t, _* ⟩ + m αγη G(_t). Telescoping the sum, we have 1/t∑_k=0^t-1 G(^(k)) ≤⟨_t, _* ⟩ - ⟨_0,_* ⟩/mαγη t ≤√(m)_t - ⟨_0, _* ⟩/mαγη t ≤2+8 log (γ^2 η t)/α + 8κ/ α +4η/αγ^2 η t + 3 _0/√(m)αγη t. by <Ref> We have completed the proof of <Ref>. Besides, we can make use of the equivalence between G and L to get a bound for the loss function which is independent of the initial margin at s. Suppose that there exists a time s such that L(_s) ≤min{1/η(4+2β̃), 1/2 e^κ+2n}. Then for every t≥ s+1, we have L(_t) ≤2/(t-s)α ^2 γ ^2. By <Ref> and f(x) is 1/√(m) Lipschitz and β̃/m smooth, we have L_k+1≤ L_k - mη(1 - (2 + β̃) η L(_k) ) ∇ L_t^2. By <Ref> and L_t ≤1/η(4+2β̃), we have L_k+1≤ L_k -α^2 γ ^2 /2 L_k^2. Multiplying 1/L_k^2 in both sides, we have α ^2 γ ^2 /2≤L_t - L_k+1/L_k^2 ≤1/L_k+1 - 1/L_k. Taking summation for k=s,… ,t-1, we have 1/L_t > 1/L_t - 1/L_s≥(t-s) α ^2 γ ^2/2 L_t ≤2/(t-s) α ^2 γ ^2. This completes the proof of <Ref>. At last, we will use the bound for the gradient potential to get an upper bound for the phase transition time. §.§ Proof of Theorem <ref> Applying <Ref>, we have 1/τ∑_k=0^τ-1 G(^(k)) ≤2+8 log (γ^2 ητ)/α + 8κ/ α +4η/αγ^2 ητ + 3 _0/√(m)αγητ ≤2+8κ/α +8 log (γ^2 τ )/α +(4+8/α )η/αγ^2 ητ + 3 _0/√(m)αγητ since log(η) ≤η. Let c_1 = 4e^κ+2, c_2 = (8+4β̃). Note that we have 2+8κ/α/αγ^2ητ≤1/4(c_1n + c_2η) if γ^2 τ≥ 4(2+8κ) c_2η + c_1n/ηα ^2 8 log (γ^2 τ )/α/αγ^2 ητ≤1/4(c_1n + c_2η) if γ^2 τ≥ 128 c_2η +c_1 n/ηα ^2logc_2η+c_1n/η, since <Ref> (4+8/α)η/αγ ^2 ητ≤1/4(c_1n + c_2η) if γ^2 τ≥48/α ^2 (c_2η + c_1n), 3 _0/mαγητ≤1/4(c_1n + c_2η) if γτ≥12/α(c_2η+ c_1n)/η·_0/√(m) and that the two conditions are satisfied because γ^2 τ 128(1+4κ)/α ^2max{ c_2η, c_1n, e, c_2η+c_1n/ηlogc_2η+c_1n/η, (c_2η+c_1n)_0/η√(m)} ≥max{ 4(2+8κ) c_2η + c_1n/ηα ^2, 128 c_2η +c_1 n/ηα ^2logc_2η+c_1n/η ,48/α ^2 (c_2η + c_1n), 12/α(c_2η+ c_1n)/η·_0/√(m)}. So there exits s≤τ such that G(_s) ≤min{1/e^κ+2 4 n, 1/η(8+4β̃)} Then we have L(_s) ≤ F(_s) ≤ 2 G(_s) ≤{1/e^κ+2 2 n, 1/η(4+2β̃)}. We complete the proof of Theorem <ref>. §.§ Proof of Corollary <ref> The main idea is to show that τ≤T/2. Note that by <Ref>, we have τ =128(1+4κ)/α ^2max{ c_2η, c_1n, e, c_2η+c_1n/ηlogc_2η+c_1n/η, (c_2η+c_1n)/η·_0/√(m)}, in which expression c_1 = 4e^κ+2 and c_2 = (8+4β̃). We can verify that, 128(1+4κ)/α ^2c_2η = 128(1+4κ)/α ^2c_2 ·α ^2 γ ^2/256(1+4κ)c_2 T = T/2, 128(1+4κ) c_1 n/α ^2≤T/2. Furthermore, we have n≤α ^2 γ ^2 T/256(1+4κ) c_1. Hence, c_2 η + c_1 n/η = α ^2 γ ^2 T/256(1+4κ) + c_1n/α ^2 γ ^2 T/256(1+4κ)c_2≤2 ·α ^2 γ ^2 T/256(1+4κ)/α ^2 γ ^2 T/256(1+4κ)c_2≤ 2c_2. 
We get that: 128(1+4κ)/α ^2·c_2η+c_1n/ηlogc_2η+c_1n/η ≤ 2 128(1+4κ)/α ^2 c_2 ln (2c_2) ≤128(1+4κ)/α ^2 4c_2^2≤T/2, 128(1+4κ)/α ^2·(c_2η+c_1n)/η·_0/√(m) ≤128(1+4κ)/α ^2· 2c_2 _0/√(m)≤T/2. Hence, we have τ≤T/2. Applying <Ref>, we have L(_T) ≤2/α ^2 γ ^2 η(T-τ)≤4/α ^2 γ ^2 η T≤2048(1+4κ) c_2/α ^4 γ ^4T^2 = (1/T^2). We have completed the proof of Corollary <ref>. §.§ Proof of Theorem <ref> The main idea is to construct an upper bound of η and apply the analysis in <Ref>. Note that give _0=0, we have f(_0; _i) = 1/m∑_k=1^m a_k ϕ(_i^⊤^(k)_0) = s_a ϕ(0), where s_a = ∑_k=1^m a_k/m. Therefore, [∇ L(_0)]^(k) = 1/2ℓ^' (s_a ϕ(0)) ·a_k/mϕ'(0) _1 + 1/2ℓ^' (s_a ϕ(0)) ·a_k/mϕ'(0) _2 = a_k/mℓ^' (s_a ϕ(0)) ϕ'(0) _1 + _2/2 = a_k/mℓ^' (s_a ϕ(0)) ϕ'(0) (γ, √(1-γ ^2)/4). Let 1/mℓ^' (s_a ϕ(0)) ϕ'(0) (γ, √(1-γ ^2)/4), we have _1^(k) = 0- η∇ [L(_0)]^(k)= - η a_k . Therefore, f(_1;_i) = 1/m∑_k=1^m a_k ϕ(-_i^⊤ (η a_k )). We can notice that -_1 ^⊤<0 and -_2^⊤>0, when γ≤ 0.1. Furthermore, we have f(_1; _1) = 1/m∑_a_k=1ϕ( -_1^⊤ (η)) + 1/m∑_a_k=-1 -ϕ( _1^⊤ (η)) = 1/m∑_a_k=1 [ϕ(0) -_1^⊤ (η) ϕ^' (ϵ_1) ] + 1/m∑_a_k=-1 [-ϕ(0) - _1^⊤ (η) ϕ^' (ϵ_2) ] = s_a ϕ(0) - η_1^⊤1/m [∑_a_k=1ϕ^' (ϵ_1)+ ∑_a_k=-1ϕ^' (ϵ_2) ] ≤ s_a ϕ(0) - η_1^⊤α ϕ^' (ϵ _i) ≥α. Note that 1/2ℓ(s_aϕ(0) - η_1^⊤α) ≤1/2ℓ(f(_1;_1))≤ L(_1) ≤ L(_0) = ℓ(s_aϕ(0)). We apply <Ref> to get η≤|s_a ϕ(0)| + ln 3/_1^⊤α. We use c_3|s_a ϕ(0)| + ln 3/_1^⊤α. Now we know η≤ c_3. Furthermore, notice that ∇ L(_t)≤ L_t ≤ L_0. We get that _t+1 - _t≤η L_0 ≤ c_3 L_0. Hence, |f(_t+1; _i) - f(_t; _i)| ≤ c_3 L_0. Assume that l_b = min{1/e^κ+2 4 n, 1/η(8+4 β̃)} and L_s-1≥ l_b, L_s ≤ l_b. We know that l_b ≥min{1/e^κ+24n, 1/c_3(8+4β̃)} l_c. We want to show that there is an lower bound for L_s. Now that L_s = 1/2[ ℓ(f(_s;_1)) + ℓ(f(_s;_2))]. Applying <Ref>, we can get that L_s ≥exp(-c_3L_0) L_s-1≥exp(-c_3 L_0) l_b. Recall that by <Ref>, we have L_t ≥1/1/ L_s + 3η̃ρ^2(t-s), t ≥ s. Combine this with ρ=1/√(m), η̃= η m and L_s ≥exp(-c_3 L_0) l_b and we get L_t ≥1/exp(c_3 L_0)/l_b + 3η (t-s), t≥ s. Note that when t≤ s, L_t ≥ l_b. We can get a lower bound for L_t by L_t ≥1/exp(c_3 L_0)/l_bt + 3η t≥1/exp(c_3 L_0)/l_bt + 3c_3t≥1/exp(c_3 L_0)/l_ct + 3c_3t= c_4/t, where c_4 = 1/exp(c_3 L_0)/l_c + 3c_3 depends only on {a_j}_j=1^m, ϕ(0), κ, β̃ and n. Now we have completed the proof of Theorem <ref>. § SCALING AND HOMOGENOUS ERROR In this section, we consider different scaling of two-layer networks. We add a scaling factor b into the model, i.e., f(;) = b/m∑_j=1^m a_j ϕ( ^⊤^(j)). We will show that given a limited computation budget T (total iterations), larger b and a corresponding best η̃=η· m will achieve the same best rate as b=1, i.e., O(1/T^2). While for smaller b, the rate is O(b^-3/T^2). Before we present the analysis, here are the bounds with b and η̃= m ·η following the process of <Ref>: 1/t∑_k=0^t-1 L (_k ) ≤1+8log ^2(γ^2 η t)/(α ^2 b^2)+ 8κ^2/α ^2+η^2 b^2 /γ^2 η t + _0^2/ mη t, 1/t∑_k=0^t-1 G (_k ) ≤2+8log (γ^2 η t)/(α b)+ 8κ /α+2η b /αγ^2 b η t + 3_0/√(m)η b t. Case when b≥ 1.Given the previous bounds, we have the following results following the idea in <Ref>: * Gradient potential bound: G(_t) ≤C/t for all t≥ 0, * Phase transition threshold: G(_s) ≤min{ 1/4e^κ+2n, 1/η(8ρ^2 b^2 + 4β̃b)}, * Stable phase bound: L(_t) ≤2/C b^2 η(t-s), where C depends on α ,γ. Combine the first two arguments and assume η(8ρ^2b^2 + 4β̃b) ≥ 4e^κ+2n. We get s ≤ C η (8ρ^2b^2 + 4β̃b). Plug this into the third bound. We have L(_T) ≤2/Cb^2 η(T-Cη (8ρ^2 b^2 + 4β̃b)). It's obvious that the best η = T/16ρ^2 b^2C + 8β̃bC. 
Hence, L(_T) ≤8(8ρ^2 b^2C + 4β̃bC)/Cb^2T^2 = (1/T^2). Then, the rate is still O(1/T^2). Case when b<1. Similarly, we can get the following bounds: * Gradient potential bound: G(_t) ≤Cb^-2/t for all t≥ 0, * Phase transition threshold: G(_s) ≤min{ 1/4e^κ+2n, 1/η(8ρ^2 b^2 + 4β̃b)}, * Stable phase bound: L(_t) ≤2/C b^2 η(t-s), where C depends on α ,γ. Without loss of generality, we can assume η(8ρ^2b^2 + 4β̃b) ≥ 4e^κ+2n, since η can be small enough. Then, we have s≤ Cη (8ρ^2 + 4β̃b^-1). Then, we can get L(_T) ≤2/C b^2η(T-Cη (8ρ^2 + 4β̃b^-1)). It's obvious that the best η = T/16ρ^2 C + 8β̃b^-1C. Hence, L(_T) ≤8(8ρ^2 Cb^-2 + 4β̃b^-3C)/CT^2 = (b^-3/T^2). Combining the analysis for two cases, we observe that when b≥ 1, the fast loss rate is (1/T^2) given finite budget T. While b<1, the rate is (b^-3/T^2). In our main results, we set b=1 for the mean-field scaling. Under the mean-field regime, all bounds are independent of the number of neurons since we consider the dynamics of the distributions of neurons. Alternatively, if we set b=√(m), then the model becomes: f(;) = 1/√(m)∑_j=1^m a_j ϕ( ^⊤^(j)). The model falls into the NTK regime. The loss threshold will be related to m, but the loss rate is the same as that of the mean-field scaling. § ADDITIONAL PROOFS §.§ Proof of Example <ref> Recall that the two-layer neural network is defined as: f(;) = 1/m∑_j=1^m a_j ϕ(^T ^(j)). We can verify that if ϕ(x) is β-smooth and ρ-Lipschitz with respect to x, then f(;) is β/m-smooth and ρ/√(m)-Lipschitz with respect to . This is because: L() = ℓ'(y f(; )) y f(; ), ^2 L() = ℓ”(y f(; )) f(; )^⊗ 2 + ℓ'(y f(; )) y^2 f(; ), and that f(;) = [ ⋮; 1/m a_j ϕ'(^⊤^(j) ); ⋮ ], ^2 f(;) = [ ⋱ 0 0; 0 1/m a_j ϕ”(^⊤^(j) ) ^⊤ 0; 0 0 ⋱ ]. Now, we will focus on the parameters of each activation function. * GELU. ϕ(x) = x ·(1+(x/√(2)))/2 = x· F(x). ϕ^'(x) = F(x) + x· f(x), ϕ^''(x) = 2f(x) + x · f^'(x), where F(x), f(x) are the CDF and PDF of standard normal distribution. Note that xf(x) = x/√(2π) e^-x^2/2 and (xf(x))^' = 1/√(2π) (1 - x^2) e^-x^2/2. We can find the maximum of xf(x) is 1/√(2π) e^-1/2. Besides, we know that F(x), f(x) ≤ 1 and x · f^'(x)≤0. Combining them, we have ρ = 1+ e^-1/2 /√(2π) and β =2. For κ, ϕ - ϕ^'(x) x = -x· f(x). So the bound of κ is e^-1/2/√(2π). * Softplus. ϕ(x) = log(1+e^x). Therefore, ϕ^'(x) = e^x/1+e^x≤ 1, ϕ^''(x) = e^x/(1+e^x)^2≤ 1. Besides, (ϕ(x) - ϕ^'(x) x)^' = ( log (1+e^x) - e^x x/1+e^x)^' = - e^x x/(1+e^x)^2. So the maximum is ϕ(0) - ϕ^'(0)0 = log 2. Besides, when x>1, ϕ(x) ≥ x ≥e^x x/1+e^x. When x→ -∞, ϕ(x) - ϕ'(x)x → 0. Therefore, κ = log 2. * Sigmoid. ϕ(x) = 1/(1+e^-x). Hence, ϕ^'(x) = e^-x/(1+e^-x)^2≤ 1, ϕ^''(x) = e^-2x-e^-x/(1+e^-x)^3≤ 1. As for κ, we know that |ϕ(x) - ϕ^'(x) x| = | 1+e^-x - x e^-x/(1+e^-x)^2 | ≤1+e^-x +|x| e^-x/(1+e^-x)^2. Note that |x|e^-x≤ e^-2x + 1. We have |ϕ(x) ≤1+e^-x +e^-2x + 1/(1+e^-x)^2≤ 2. * Tanh. ϕ(x) = e^x- e^-x/e^x + e^-x≤ 1. Note that ϕ^'(x) = 1- ϕ(x)^2≤ 1 ϕ^''(x) = 2ϕ(x)^3 - ϕ(x) ≤ 2. Besides, we know that |x ϕ^'(x)| = 4|x|/(e^x + e^-x)^2≤ 4. Hence |ϕ(x) - ϕ^'(x)| ≤ |ϕ(x)| + |x ϕ^'(x)| ≤ 5. * SiLU. Note that ϕ^'(x) = 1+e^-x + xe^-x/(1+e^-x)^2, ϕ^''(x) = (2-x)e^-x/(1+e^-x)^2 + xe^-2x/(1+e^-x)^3. Because |x|e^-x≤ e^-2x+1 and |x|e^-2x≤ e^-3x + 1. We get |ϕ^'(x)| ≤ 2 and ϕ^''(x) | ≤ 4. At last, |ϕ(x) - x ϕ^'(x)| = |x|e^-x/(1+e^-x)^2≤ 1. * Huberized ReLU. It's obvious that ϕ^'(x)≤ 1 and β = 1/h. Note that ϕ is not second-order differentiable. At last, |ϕ(x) - x ϕ^'(x) | = 0 x<0, x^2/2h 0≤ x≤ h, h/2 x >h. Hence, it's upper bounded by h/2. 
Now we have completed the proof of Example <ref>. §.§ Proof of Example <ref> Because for activation functions in <Ref>, β≤ 4 and ρ≤ 2. Hence, for ϕ̃(x) = cx + (1-c)ϕ(x)/4, β̃=1 and ρ=1. Besides, since 0.5<c<1, we must have (ϕ̃(x))^'≥ 0.25. § ADDITIONAL LEMMAS If 1/2≥ L_1 ≥1/c and L_2 ≥ L_1 - L_1^2, we have L_2 ≥1/c+2. For function g(x) = x - x^2, g'(x) = 1-2x. If x ≤1/2, then g(x) is increasing. Then g(L_1) ≥ g( 1/c) = c-1/c^2 = c^2 + c-2/c^2 (c+2)≥1/c+2. Now we have completed the proof of <Ref>. Given a continuous function f(x) s.t. |f(x) - ⟨∇ f(x), x ⟩ | ≤κ, then for a fixed constant r>0 there exists C_r,κ and C_r s.t. for any x≥ r, |f(x)| ≤ C_r,κx, and for any x, |f(x)| ≤ C_r,κx + C_r. Since f is continuous, let C_r = max_x=r |f(x)| /r. Now for any x >r, let y = rx/x and consider g(s) = f(sy)/s. Then we have g^' (s) = ⟨∇ f(sy), sy ⟩ - f(sy) /s^2. Therefore, - κ/s^2≤ g^' (s) ≤κ/s^2. Let s = x/r, f(x)r/x = g(s) = g(1) + ∫_1^s g^' (t) dt ≤ g(1) + ∫_1^s κ/t^2 dt ≤ g(1) + κ ≤ rC_r + κ. Therefore, f(x) ≤ (C_r + κ/r )·x. Similarly, we can show that -f(x) ≤ (C_r + κ/r )·x. Therefore, for any x≥ r, |f(x)| ≤ (C_r + κ/r )·x. Let D=max_x≤ r |f(x)|, we have for any x, |f(x)| ≤ (C_r + κ/r)·x + D. We have completed the proof of <Ref>. Fixing c>1, then for every 0<x ≤1/c, we have x ≤-1/log (cx). This is equivalent to show that x log (cx) ≥ -1. Let s(x) = x log(cx), then s^' (x) = 1 + log(cx). Hence s(x) is decreasing when 0<x<1/ce and is increasing when x ≥1/ce. The minimum of s(x) is achieved at x = 1/ce, which is s( 1/(ce)) = -1/ce≥ -1. This completes the proof of <Ref>. Given 0<b≤1/2 and a>0, we have 1+a/1-b≤ (1+2a+2b). This is equivalent to show that (1+a) ≤ (1+2a+2b)(1-b) = 1+ 2a +b - 2ab - 2b^2. This is equivalent to 2b(a+b) ≤ (a+b). Since a+b>0 and b≤1/2, this is true. Now we have completed the proof of <Ref>. Given c>e, we have for any x>2clog c, log x/x≤1/c. It's equivalent to show that x - clog x ≥ 0. Let g(x) = x - clog x. g'(x) = 1-c/x. When x>2clog c >2c, g'(x)<0. Hence, the minimal is g(2clog c). Note that g(2clog c) = 2c log c - clog c - clog 2 - cloglog c = c log c - clog 2 - c loglog c = clogc/2log c. Now we want to show that c>2log c. Let h(y) = y - 2log y. h'(y) = 1 - 2/y>0 when y>e. h(e) = e - 2>0. Hence h(c)>h(e)>0 and g(2clog c) >0. This leads to g(x) >0. Then, we complete the proof of <Ref>. Given ℓ (x) = log(1+e^-x) and c>0, we have for any x, ℓ(x+c) ≥exp(-c) ℓ (x). Let g(x) = ℓ(x+c) - exp (-c) ℓ(x). Then, we have g^' (x) = -1/1+exp(x+c) + 1/exp(c) + exp(x+c) <0. Therefore, g(x) is monotonically decreasing. When x→∞, we have lim_x→∞ g(x) = lim_x→∞ [ℓ(x+c) - exp (-c) ℓ(x)] = exp(-x-c) -exp(-c) exp(-x) = 0. Therefore, g(x) ≥ 0 for any x. Now, we complete the proof of <Ref>. Assume ℓ(x)= log(1+e^-x). If ℓ(x+c) ≤ 2 ℓ(x), we have c ≤ln 3 + |x|. Note that ℓ(x+c) - 2 ℓ(x) = log1+e^x+c/1+2e^x + e^2x≤ 0. Then, 1+e^x+c/1+2e^x + e^2x≤ 1 e^c ≤ 2 + e^x ≤ 2+ e^|x|≤ 3 e^|x|. Therefore, c≤ln3 + |x|. This completes the proof of <Ref>.
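Before moving on, here is a short numerical sanity check of two of the elementary lemmas above. This is an added illustration, not part of the original proofs; it assumes NumPy and only probes the inequalities on random grids: the bound ℓ(x+c) ≥ exp(-c) ℓ(x) for the logistic loss, and the claim that log x / x ≤ 1/c whenever x > 2c log c for c > e.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lemma: for l(x) = log(1 + e^{-x}) and c > 0, l(x + c) >= exp(-c) * l(x).
def ell(x):
    return np.logaddexp(0.0, -x)  # numerically stable log(1 + e^{-x})

x = rng.uniform(-30.0, 30.0, size=200_000)
c = rng.uniform(1e-6, 20.0, size=200_000)
assert np.all(ell(x + c) >= np.exp(-c) * ell(x) - 1e-9)

# Lemma: for c > e and any x > 2 c log c, it holds that log(x) / x <= 1 / c.
cs = rng.uniform(np.e + 0.1, 100.0, size=200_000)
xs = 2.0 * cs * np.log(cs) * (1.0 + rng.uniform(0.0, 50.0, size=200_000))
assert np.all(np.log(xs) / xs <= 1.0 / cs + 1e-12)

print("both inequalities verified on random samples")
```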
http://arxiv.org/abs/2406.08748v1
20240613021218
Learning in Feature Spaces via Coupled Covariances: Asymmetric Kernel SVD and Nyström method
[ "Qinghua Tao", "Francesco Tonin", "Alex Lambert", "Yingyi Chen", "Panagiotis Patrinos", "Johan A. K. Suykens" ]
cs.LG
[ "cs.LG", "cs.AI", "stat.ML" ]
Learning in Feature Spaces via Coupled Covariances: Asymmetric Kernel SVD and Nyström Method. Qinghua Tao (ESAT-STADIUS, KU Leuven, Belgium), Francesco Tonin (LIONS, EPFL, Switzerland; most of the work was done at ESAT-STADIUS, KU Leuven), Alex Lambert, Yingyi Chen, Panagiotis Patrinos, Johan A.K. Suykens (ESAT-STADIUS, KU Leuven, Belgium). Equal contribution: Qinghua Tao and Francesco Tonin (qinghua.tao@esat.kuleuven.be, francesco.tonin@epfl.ch). Keywords: kernel SVD, feature maps, asymmetry, covariances. § ABSTRACT In contrast with Mercer kernel-based approaches as used e.g. in Kernel Principal Component Analysis (KPCA), it was previously shown that Singular Value Decomposition (SVD) inherently relates to asymmetric kernels, and asymmetric Kernel Singular Value Decomposition (KSVD) has been proposed. However, the existing formulation of KSVD cannot work with infinite-dimensional feature mappings, its variational objective can be unbounded, and it still requires numerical evaluation and exploration towards machine learning. In this work, i) we introduce a new asymmetric learning paradigm based on the coupled covariance eigenproblem (CCE) through covariance operators, allowing infinite-dimensional feature maps. The solution to CCE is ultimately obtained from the SVD of the induced asymmetric kernel matrix, providing links to KSVD. ii) Starting from the integral equations corresponding to a pair of coupled adjoint eigenfunctions, we formalize the asymmetric Nyström method through a finite sample approximation to speed up training. iii) We provide the first empirical evaluations verifying the practical utility and benefits of KSVD and compare with methods resorting to symmetrization or linear SVD across multiple tasks.[This work has been accepted at the 41st International Conference on Machine Learning (ICML), 2024. The previous preprint version can be found at <https://arxiv.org/abs/2306.07040> and contains useful discussions and insights on KSVD.] § INTRODUCTION Feature mappings can transport the data into a Hilbert space of typically higher dimension. They are intimately linked through inner products with reproducing kernels <cit.> and thus often associated with symmetric learning. One can for example think of kernel principal component analysis (KPCA, <cit.>), where one tries to find orthonormal directions in the feature space that maximize the variance associated with a symmetric Gram matrix, or kernel canonical correlation analysis (KCCA, <cit.>), where the maximization of a correlation based on two different views of the data leads to an optimization problem governed by two symmetric Gram matrices. In many real-world applications, however, there is an inherent degree of asymmetry. Among others, directed graphs of citation networks <cit.>, biclustering <cit.>, and attention in Transformers <cit.> typically involve an asymmetry that cannot be captured when working with reproducing kernels. Often the asymmetric matrices are first symmetrized before applying some matrix decomposition such as singular value decomposition (SVD, <cit.>), so that only one set of eigenvectors is obtained. As a fundamental linear algebra tool, SVD can process arbitrary non-symmetric matrices and jointly learns both left and right singular vectors, e.g., embeddings of source and target nodes <cit.>. However, SVD alone lacks flexibility for nonlinear feature learning.
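As a concrete illustration of this last point, here is a minimal sketch (added for this rewrite, not taken from the paper; the toy graph and the embedding dimension r are invented for the example) of how a plain SVD of a directed adjacency matrix already yields two distinct sets of embeddings, one for the source role and one for the target role of each node, while the map from data to embeddings stays linear:

```python
import numpy as np

# Toy directed adjacency matrix: A[i, j] = 1 if node i points to node j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

r = 2  # embedding dimension (illustrative choice)
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Left singular vectors embed each node in its "source" role,
# right singular vectors embed it in its "target" role.
source_emb = U[:, :r] * np.sqrt(S[:r])
target_emb = Vt[:r, :].T * np.sqrt(S[:r])

print(source_emb.shape, target_emb.shape)  # (4, 2) (4, 2)
```

Replacing the raw adjacency matrix A with a nonlinear, asymmetric similarity matrix is precisely what the kernel treatment developed below enables.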
<cit.> propose asymmetric kernel SVD (), a variational principle based on least-square support vector machines (LSSVMs) that leads to the matrix SVD and mentions that nonlinear extensions can be obtained when the SVD is applied to an asymmetric kernel matrix rather than the given data matrix. However, their formulation only allows finite-dimensional feature mappings to induce the kernel and its variational objective is unbounded unless the regularization hyperparameters are properly selected. Yet, <cit.> does not provide numerical evaluations on the practical utility and applications of , leaving this topic largely unexplored. While infinite-dimensional feature maps are common in all kernel methods, including the asymmetric ones, e.g., <cit.> focus on the understandings of the asymmetric dot-product attention kernel resulting from the queries and keys through a pair of Banach spaces in the supervised setting, little literature addresses learning with generic asymmetric kernel machines with infinite-dimensional maps. Differently, our work provides a new asymmetric learning paradigm for unsupervised feature learning based on the CCE allowing two generic datasets. Kernel methods additionally suffer from efficiency, as they require processing a kernel matrix that is quadratic in the sample size. Many approaches have been proposed to improve the efficiency, among which the Nyström method has been widely applied <cit.>. The Nyström method of subsampling arises from the approximate eigendecomposition of an integral operator associated with a symmetric kernel <cit.>, which restricts the existing method to only Mercer kernels. In <cit.>, -like methods for matrix compression or approximation are discussed by directly applying the symmetric method to estimate left and right singular vectors, yet ignoring the asymmetry constraints. In <cit.>, though the asymmetric method is mentioned in the proposed nonparametric KCCA method, it still leverages the existing symmetric method in implementation for the eigenvectors of two symmetric positive definite kernels and can only deal with square matrices. Hence, the analytical framework of the method to asymmetric kernel machines remains to be formalized and is of particular interest for the efficient computation of . The research question that we tackle in this paper is "How can we learn directions in the feature space in an asymmetric way while controlling the computational complexity of our method ?" The technical contributions of this work are summarized as: * We first present a new asymmetric learning paradigm based on coupled covariances eigenproblem (CCE) allowing infinite-dimensional feature maps. We show that its solution leads to the problem associated with a specific asymmetric similarity matrix that blends in two feature maps. * We leverage the integral equations involving the pair of adjoint eigenfunctions related to the continuous analog of SVD and derive an extension to the Nyström method able to handle asymmetric kernels, which can be used to speed up training without significant decrease in accuracy of the solution. * We conduct extensive experiments to demonstrate the performance of the CCE asymmetric learning scheme in unsupervised feature extraction and different downstream tasks with real-world datasets. The efficacy of the proposed Nyström method is also verified to efficiently compute the . Note that we do not claim to propose the algorithm, as it was already sketched in the letter by <cit.>. 
Rather, we give a novel asymmetric learning problem based on two covariance operators in the feature space, whose solution coincides with a with infinite-dimensional feature maps, a case that was not previously possible. § LEARNING IN FEATURE SPACES WITH ASYMMETRY We begin this section by reviewing in Section <ref> the concept of asymmetric similarity that is critical to this work, before introducing in Section <ref> the Coupled Covariances Eigenproblem (CCE) that allows us to learn in feature spaces with asymmetry as the solution is ultimately obtained from the SVD of an asymmetric similarity matrix. We conclude in Section <ref> with some remarks about related work. §.§ Asymmetric Similarity Typically, a kernel κ̂𝒳×𝒳→ℝ is induced by a single feature map ϕ̂ on a single data set whose samples lie in a space 𝒳 and is symmetric. However, in practice, asymmetric similarities are widely used such as in directed graphs (where similarity is directional) as exemplified in Fig. <ref>. Each node acts as source and target and is associated with two feature vectors x_i,z_i, possibly from different spaces, for its source and target role, respectively. One can thus extract two sets of features for each node, one related to the nodes to which it points and one for the nodes that point to it. In general, an asymmetric kernel κ𝒳×𝒵→ℝ describes a similarity between elements from two different spaces 𝒳, 𝒵. Despite the utility of asymmetry, classical Mercer-kernel methods, KPCA, only deal with symmetric similarities induced by a single feature map, and thus one has to resort to symmetrizing an asymmetric similarity matrix K, which can be done by considering (K^⊤+K)/2, KK^⊤, or K^⊤K. Compared to the literature on Mercer kernels, asymmetric kernels are less studied. They have been mostly applied in supervised learning, e.g., regression <cit.> and classification <cit.>. Some works do not resort to symmetrization: <cit.> applies two feature mappings to the given samples and maintains an asymmetric kernel in the LSSVM classifier. <cit.> applies the variational objective from <cit.> as an auxiliary regularization loss to the model for low-rank self-attention in Transformers are built as the asymmetric similarity between queries and keys. Relaxations of the Mercer conditions have also been generalized to learning in reproducing kernel Banach spaces <cit.> and Kreĭn spaces <cit.>. Other related but orthogonal approaches include <cit.> for robust SVD estimation with Gaussian norm in the original space, and <cit.> for tensor data where SVD is applied to the symmetric kernel in each mode. §.§ Coupled Covariances Eigenproblem The goal of this section is to gradually define and solve the Coupled Covariances Eigenproblem (CCE). Our goal is to provide a new tool able to learn in (infinite-dimensional) feature spaces and take advantage of asymmetry. Notation. Given a bounded linear operator Γ between Hilbert spaces, its adjoint is referred to as Γ^*. The Frobenius norm of a matrix is denoted by ·_. The identity matrix of size r is I_r. Set the spaces 𝒳 = ℝ^m and 𝒵 = ℝ^n. We assume access to two sets of samples {x_i}_i=1^n ∈^n and {z_j}_j=1^m ∈^m. We consider two mappings ϕ𝒳→ℋ and ψ𝒵→ℋ whose outputs lie in a common feature space ℋ. We moreover assume that the feature maps are centered. 
In practice, given the training samples, one can realize the centering by the translated feature maps ϕ̃(x) = ϕ(x) - 1/n∑_i=1^n ϕ(x_i) and ψ̃(z) = ψ(z) - 1/m∑_j=1^m ψ(z_i), and then the similarity matrix of interest [G̃]_ij = ⟨ϕ̃(x_i) , ψ̃(z_j)⟩ can be computed straightforwardly, e.g., G̃ = (I_n-1/n1_n 1_n^⊤)G(I_m-1/m1_m 1_m^⊤). Construction of the Subspaces in . In CCE, the goal is to learn a pair of r directions in the feature space that solve a coupled eigenvalues problem. The sough-after directions are collected in vectors W_ϕ∈^r, W_ψ∈^r as follows: W_ϕ = [w^ϕ_1, …, w^ϕ_r], W_ψ = [w^ψ_1, …, w^ψ_r]. Denote by Σ_ϕ, Σ_ψ∈() the empirical covariance operators described by Σ_ϕ = 1/n∑_i=1^n ϕ(x_i) ϕ(x_i)^*, Σ_ψ = 1/m∑_j=1^m ψ(z_j) ψ(z_j)^*. While performing KPCA would result in solving two eigenvalue problems independently for both covariance operators and using the top r eigenvectors of each to compute interesting directions, we propose to intricate the learned directions in the feature space by solving the following CCE problem: Find W_ϕ∈^r, W_ψ∈^r such that Σ_ϕW_ψ = Λ W_ϕ, Σ_ψW_ϕ = Λ W_ψ, for some diagonal matrix Λ∈^r × r with positive values. Even if is infinite-dimensional, we can parameterize the directions W_ϕ, W_ϕ using matrices. Indeed, given that a solution exists, it holds that for any l ∈{1, …, r} Σ_ϕ w^ψ_l = 1/n∑_i=1^n ⟨ϕ(x_i), w^ψ_l⟩ϕ(x_i) = λ_l w^ϕ_l. Thus all directions {w^ϕ_l}_l=1^r lie in {ϕ(x_i)}_i=1^n. Consequently, we can parameterize the directions W_ϕ over the {ϕ(x_i)}_i=1^n by a matrix of coefficients B_ϕ∈^n × r. A similar argument holds for the directions W_ψ over the {ψ(z_j)}_j=1^m with coefficients B_ψ∈^m × r so that for all l ∈{1, …, r} w^ϕ_l = ∑_i=1^n b^ϕ_ilϕ(x_i), w^ψ_l = ∑_j=1^m b^ψ_jlψ(z_j). Projection Operators. Let Γ_ϕ^r →^n × r and Γ_ψ^r →^m × r be linear operators acting on some directions W ∈^r in the following way: [Γ_ϕ W]_il = 1/√(n)⟨ϕ(x_i), w_l ⟩, [Γ_ψ W]_jl = 1/√(m)⟨ψ(z_j), w_l ⟩. These operators compute the inner products between the chosen directions and the feature maps associated with the data. As Γ_ϕ and Γ_ψ are bounded linear operators they admit adjoint operators whose action can be made explicit: for any B ∈^n × r, Γ_ϕ^* B = 1/√(n)[∑_i=1^n b_ilϕ(x_i)]_l=1^r ∈^r and Γ_ψ^* can be treated similarly. This observation allows us to rewrite Equation <ref> under the form W_ϕ = Γ_ϕ^* B_ϕ, W_ψ = Γ_ψ^* B_ψ. We also remark that the covariance operators Σ_ϕ and Σ_ψ can be expressed using these projection operators, so that Equation <ref> can be reformulated using matrix variables B_ϕ, B_ψ as Γ_ϕ^* Γ_ϕΓ_ψ^* B_ψ = Γ_ϕ^* B_ϕΛ, Γ_ψ^* Γ_ψΓ_ϕ^* B_ϕ = Γ_ψ^* B_ψΛ. *Asymmetric Kernel Matrix. The operators Γ_ψΓ_ϕ^* and Γ_ϕΓ_ψ^* are of particular interest and their action can be described by related matrices as formalized in the following. Let G ∈^n × m such that g_ij = 1/√(nm)⟨ϕ(x_i), ψ(z_j) ⟩. For all B_ϕ∈^n × r and B_ψ∈^m × r, it holds that Γ_ψΓ_ϕ^* B_ϕ = G^⊤ B_ϕ, Γ_ϕΓ_ψ^* B_ψ = G B_ψ. This proposition resembles the celebrated kernel trick but induces an asymmetry in what is an equivalent of the Gram matrix associated with an asymmetric kernel κ(x, z) = ⟨ϕ(x), ψ(z) ⟩. This kernel permits to avoid the explicit computation of the feature mappings. Because most classical kernel functions require that the two inputs have compatible dimensions, there are a few challenges associated with the computation of κ(x,z) when and are different by nature. In this case, we can transform the two inputs x, z into the same dimension through a compatible linear transformation C ∈(, ). 
For Euclidean spaces we can find matrices C, such that C^⊤ x is compatible with z in dimensions, and then apply existing (symmetric) kernel functions thereafter. We consider different alternatives to attain the compatibility matrix C as follows: a_0) the pseudo-inverse of the tackled data matrix; however, it can be computationally unstable and expensive, thus we propose the following a_1-a_3. a_1) PCA projection on x_i; it finds the projection directions capturing the most variance of data samples <cit.>. a_2) randomizing the projection C; the random linear transformation has been shown to retain the main patterns of the data matrix <cit.>. a_3) learnable C w.r.t. the downstream tasks; it gives the optimal C by optimizing the downstream task objective, e.g., classification loss. a_2 is very computationally efficient while learning the optimal C in a_3 can take more computation, up to the task and its optimizer, e.g. SGD optimizer with backpropagated C as experimented in Section <ref>. Note that a_0-a_2 can be applied under general unsupervised setups for feature learning, while a_3 is commonly used when considering end-to-end training for the downstream tasks under supervised setups. *Solution to the CCE. Solving the CCE gives rise to a generalized shifted eigenvalue problem, as shown in the following proposition. Let G ∈ℝ^n × m be the asymmetric kernel matrix from Proposition <ref>. The directions (W_ϕ, W_ψ) ∈^r respectively parameterized by the matrices (B_ϕ, B_ψ) ∈^n × r×^m × r are solution to the CCE problem if and only if (B_ϕ, B_ψ) are solution to the generalized shifted eigenvalue problem G^⊤ G B_ψ = G^⊤ B_ϕΛ, G G^⊤ B_ϕ = G B_ψΛ, where Λ∈ℝ^r × r is a positive diagonal matrix. According to Lanczos’ decomposition theorem <cit.>, Problem <ref> can be solved by taking for B_ϕ, B_ψ the top-r left and right singular vectors of the matrix G. Let B_ϕ^svd (resp.  B_ψ^svd) be top-r left (resp. right) singular vectors of G. Then W_ϕ= Γ_ϕ^* B_ϕ^svd, W_ψ= Γ_ψ^* B_ψ^svd is a solution to the CCE. We have shown that solving the CCE reduces to an problem, with an asymmetric similarity matrix that involves both feature maps. Once the directions are learned, if we are given some new data x ∈ or z ∈ we can compute the projected feature scores [⟨ϕ(x) , w^ψ_l⟩]_l=1^r, [⟨ψ(z) , w^ϕ_l ⟩]_l=1^r, and use these in downstream tasks. CCE versus 2KPCA. The proposed CCE problem gives a new understanding of the set of directions of interest in the feature space, namely W_ϕ and W_ψ from Proposition <ref>, arising from the SVD of the asymmetric kernel matrix G and the feature maps ϕ, ψ. We note that these directions can also be interpreted as the principal directions associated to the covariance operators of two symmetrized kernels in two separate KPCA problems arising from feature maps x ↦Σ_ψ^1/2ϕ(x) and z ↦Σ_ϕ^1/2ψ(z), respectively. In the dual, this corresponds to taking the SVDs of G G^⊤ and G^⊤ G, which is equivalent to taking the SVD of G. We refer to this interpretation as 2KPCA. From a computational standpoint, performing 2KPCA or CCE yields the same singular vectors. However, they are significantly different in the modelling from the following perspectives. * In 2KPCA, one takes the principal components associated to kernels built via complicated entanglement of ϕ and ψ. In CCE, the empirical covariances associated to both feature maps appear free from any other factor. 
* The coupling between the two input variables within the feature maps of 2KPCA is realized through the square root of the other covariance, while in CCE the coupling of the input variables naturally arises by crossing the learned directions in <Ref>. * For principal component extraction, we need to compute the projections on the singular vectors W_ϕ and W_ψ in ℋ, which are essential in downstream tasks to extract the principal components of test points. This can be easily accomplished in CCE with explicit directions, while it is not as clear in 2KPCA. §.§ Related work We now discuss research areas that are tangent to our topic: asymmetric kernel SVD () and symmetric kernel approaches such as KPCA or KCCA. Asymmetric Kernel SVD. Given a data matrix A ∈ℝ^n × m, <cit.> regards it w.r.t. either the collection of rows {A[i,:] ≜ x_i ∈𝒳}_i=1^n or the collection of columns {A[:,j] ≜ z_j ∈𝒵}_j=1^m. In the example in Fig. <ref>, 𝒳 denotes the outgoing edges of source nodes, while 𝒵 denotes the incoming edges of the target. <cit.> proposes a variational principle for SVD with two linear mappings ϕ(x_i)=C_1^⊤ x_i, ψ(z_j)=C_2 z_j with compatibility transformations C_1, C_2 on the rows and columns of A. Provided that the compatibility condition AC_1C_2A=A holds, the stationary solutions correspond to the SVD of A. The two mappings can be extended to construct the n × m matrix G_ij=k(ϕ(x_i), ψ(z_j)), where k is a kernel function allowing to be nonlinear and asymmetric. Stationary solutions are then linked to the SVD of G when the regularization hyperparameters are fixed as the top singular values of G. The algorithm therefore finds singular vectors of features non-linearly related to the input variables through the SVD of the non-symmetric rectangular matrix K. Symmetric Kernel Approaches with Covariances. Our new construction makes it easier to compare with other common algorithms based on finding the best approximation of some covariance quantity, which instead work with symmetric kernels in contrast to our work. KPCA applies a nonlinear feature mapping ϕ to a set of data samples x_i and considers projections a_ϕ_1^⊤ϕ(x_i) for maximal variances w.r.t. a single covariance cov(Φ a_ϕ_1, Φ a_ϕ_1). KPCA can also be tackled through a symmetric PSD kernel k_ϕ:=ϕ^⊤(·) ϕ(·) <cit.>, while works with two covariances coupled by each other. We note that doing two KPCA with ϕ(x_i) and ψ(z_j) lead to two decoupled covariances and lead to two symmetric kernels ϕ^⊤(·) ϕ(·) and ψ^⊤(·) ψ(·) w.r.t. x_i and z_j, respectively. This is significantly different from , as shown in Figure <ref>, as is associated with two coupled covariances and essentially works with an asymmetric kernel ϕ^⊤(·) ψ(·). KCCA deals with samples from two data sources and only considers projections of each data source. seeks maximal variances of two sets of projections from a single matrix. Specifically, KCCA considers projections a_ϕ_1^⊤ϕ(x_i) and a_ψ_1^⊤ψ(z_i) and couples them in a single covariance cov(Φ a_ϕ_1, Ψ a_ψ_1). In our formulation, we consider a_ϕ_1^⊤ψ(z_j) and a_ψ_1^⊤ϕ(x_i) leading to two covariances cov(Ψ a_ϕ_1, Ψ a_ϕ_1), cov(Φ a_ψ_1, Φ a_ψ_1). KCCA leads to two separate symmetric PSD kernels k_ϕ:=ϕ^⊤(·) ϕ(·), k_ψ:=ψ^⊤(·) ψ(·), while couples two feature mappings inducing a single asymmetric kernel κ:=ϕ^⊤(·) ψ(·). Our construction is therefore key to allow for asymmetric kernels w.r.t. 
KCCA and contrasts with earlier constructions, where drawing parallels with related approaches such as KCCA was notably challenging due to the lack of a covariance and subspace interpretation. § NYSTRÖM METHOD FOR ASYMMETRIC KERNELS We adapt the celebrated method to asymmetric kernels, the goal being to speed up the computation of the left and right singular vectors of G from Section <ref>. The existing method approximates eigenfunctions of the integral operator associated with a symmetric kernel <cit.>. <cit.> discusses the treatment of the integral equations with an asymmetric kernel for the continuous analog of SVD <cit.>. In this section, we base our formulation upon the pair of adjoint eigenfunctions originally studied in <cit.>, namely singular functions, and start from the corresponding integral equations <cit.> to formally derive the asymmetric method in a similar spirit with the widely adopted symmetric method <cit.>. Adjoint Eigenfunctions With an asymmetric kernel κ( x, z), u_s( x) and v_s( z) satisfying [ λ_s u_s( x) = ∫_𝒟_zκ( x, z) v_s ( z) p_z( z) d z,; λ_s v_s( z) = ∫_𝒟_xκ( x, z)u_s ( x) p_x( x) d x ] are called a pair of adjoint eigenfunctions corresponding to the eigenvalue λ_s with λ_1 ≥λ_2 ≥…≥ 0, where p_x( x) and p_z( z) are the probability densities over 𝒟_x and 𝒟_z. Note that <cit.> works with the reciprocal of λ_s, which is called a singular value by differentiating from the eigenvalues of symmetric matrices <cit.>. The integral equations in (<ref>) do not specify the normalization of the adjoint eigenfunctions, which correspond to the left and right singular vectors with finite sample approximation, while generally in SVD the singular values are solved as orthonormal. Thus, to correspond the results of the adjoint eigenfunctions to the orthonormal singular vectors in SVD, the scalings determining the norms are implicitly included in (<ref>). For normalization, we incorporate three scalings l_λ_s, l_u_s, l_v_s for λ_s, u_s( x), v_s( z), respectively, into (<ref>), such that l_λ_sλ_s l_u_s u_s( x) = ∫_𝒟_zκ( x, z) l_v_s v_s ( z) p_z( z) d z and l_λ_sλ_s l_v_s v_s( z) = ∫_𝒟_xκ( x, z) l_u_s u_s ( x) p_x( x) d x. Nyström Approximation for the Adjoint Eigenfunctions Given the i.i.d. samples { x_1, …, x_n } and { z_1, …, z_m }, similar to <cit.>, from the probability densities p_x( x), p_z( z) over 𝒟_x, 𝒟_z, the two integral equations in (<ref>) over p_x( x) and p_z( z) are approximated by an empirical average: [ λ_s u_s( x) ≈l_v_sm l_λ_s l_u_s∑_j=1^m κ( x, z_j) v_s ( z_j),; λ_s v_s( z) ≈l_u_sn l_λ_s l_v_s∑_i=1^n κ( x_i, z) u_s ( x_i), ] where s=1, …, r, which corresponds to the rank-r compact SVD on a kernel through the Lanczos’ decomposition theorem <cit.>: [ G^(n, m) V^(n, m) = U^(n, m)Λ^(n, m),; (G^(n, m))^⊤U^(n, m) = V^(n, m)Λ^(n, m), ] where G^(n, m)∈ℝ^n× m is the asymmetric kernel matrix with entries G_ij = κ( x_i, z_j) and r ≤min{n, m}, V^(n, m) = [ v^(n, m)_1, …, v^(n, m)_r] ∈ℝ^m× r, U^(n, m) = [ u^(n, m)_1, …, u^(n, m)_r] ∈ℝ^n × r are column-wise orthonormal and contain the singular vectors, and Λ^(n, m) = diag{λ_1^(n, m), …,λ_r^(n, m)} denotes the positive singular values. To match (<ref>) against (<ref>), we first require the scalings on the right side of the two equations in (<ref>) to be consistent, i.e., l_v_s / (m l_λ_s l_u_s) ≜l_u_s / (n l_λ_s l_v_s), which yields l_v_s = (√(m) / √(n)) l_u_s and l_v_s / (m l_λ_s l_u_s) ≜l_u_s / (n l_λ_s l_v_s) = 1/ (√(mn)l_λ_s ). 
When running all samplings x_i and z_j in (<ref>) to match (<ref>), we arrive at: u_s( x_i) ≈√(√(mn)l_λ_s) U^(n, m)_is, v_s( z_j) ≈√(√(mn)l_λ_s) V^(n, m)_j s, λ_s ≈ (1 /(√(mn)l_λ_s))λ^(n, m)_s. The Nyström approximation to the s-th pair of adjoint eigenfunctions with an asymmetric kernel κ( x, z) is obtained for s=1, …, r: u_s^(n, m)( x) ≈ (√(√(mn)l_λ_s)/ λ_s^(n, m) ) ∑_j=1^m κ( x, z_j) V^(n, m)_j s, v^(n, m)_s( z) ≈ (√(√(mn)l_λ_s) /λ_s^(n, m)) ∑_i=1^n κ( x_i, z) U^(n, m)_i s, which are also called the out-of-sample extension to evaluate new samples, where the norms of u_s^(n, m), v_s^(n, m) are up to the scaling l_λ_s. In (<ref>), it explicitly formalizes the approximated adjoint functions (left and right singular vectors) with the asymmetric kernel κ (G). Nyström Approximation to Asymmetric Kernel Matrices With the asymmetric Nyström approximation derived in (<ref>), we can apply CCE to a subset of the data with sample size n <N and m< M to approximate the adjoint eigenfunctions at all samplings { x_i}_i=1^N and { z_j}_j=1^M. We assume the kernel matrix to approximate from is G ∈ℝ^N× M and denote λ̃_s^(N, M), ũ_s^(N, M), and ṽ_s^(N, M) as the Nyström approximation of the singular values, and left and right singular vectors of G, respectively. We then utilize the Nyström method to approximate the singular vectors of G through the out-of-sample extension (<ref>): ũ_s^(N, M) = (√(√(mn)l_λ_s)/ λ_s^(n, m) ) G_N,m v_s^(n, m), ṽ_s^(N, M) = (√(√(mn)l_λ_s)/ λ_s^(n, m) )G_n, M^⊤ u_s^(n, m), with λ̃_s^(N, M) = (1/√(mn)l_λ_s)λ^(n, m)_s for s=1, …, r, where u^(n, m)_s, v^(n, m)_s are the left and right singular vectors to the s-th nonzero singular value λ_s^(n, m) of an n× m sampled submatrix G_n,m, G_N,m∈ℝ^N× m is the submatrix by sampling m columns of G, and G_n, M∈ℝ^n× M is by sampling n rows of G. More remarks on the developed asymmetric method and comparisons to the existing symmetric one are provided in <Ref>. § NUMERICAL EXPERIMENTS This section aims to give a comprehensive empirical evaluation of SVD in feature spaces with asymmetric kernels in the formulation discussed above. In existing works, the potential benefits in applications remain largely unexplored w.r.t. advantages of asymmetric kernels. The following experiments do not claim that asymmetric kernels are always superior to symmetric ones as it can be problem-dependent. We consider a variety of tasks, including representation learning in directed graphs, biclustering, and downstream classification/regression on general data. A key aspect of our setup is that we can use the solutions B_ϕ, B_ψ to express the nonlinear embeddings without explicitly computing the feature mappings {ϕ(x_i)}_i=1^n, {ψ(z_j}_j=1^m, which in our derivation, and differently from previous work, are allowed to be infinite-dimensional. The effectiveness of our new asymmetric method is also evaluated. §.§ Directed Graphs Setups Unsupervised node representation learning extracts embeddings of nodes from graph topology alone. We consider five benchmark directed graphs <cit.>. is compared with its closely related baselines, i.e., PCA, SVD, and KPCA, and also with node embedding algorithms DeepWalk <cit.>, a well-known random walk-based approach, HOPE <cit.>, which preserves the asymmetric node roles with two embedding spaces using network centrality measures, and also Directed Graph Autoencoders (DiGAE) <cit.>. 
All compared methods are unsupervised and require only the adjacency matrix; note that this is different from the common setup of graph neural networks <cit.> that use additional node attributes on top of graph topology and operate in semi-supervised setups. We evaluate the downstream applications of node classification and graph reconstruction. With (K)PCA and DeepWalk, we only obtain one set of embeddings. With SVD, , HOPE, and DiGAE two sets of embeddings are obtained and then concatenated. As the adjacency matrix is square, there is no compatibility issue. We compute ℓ_1, ℓ_2 norms (lower is better (↓)) for graph reconstruction and Micro- and Macro-F1 scores (higher is better (↑)) for node classification using an LSSVM classifier averaged over 10 trials on the extracted 1000 components following <cit.>. KPCA employs the RBF kernel and employs the asymmetric kernel κ_SNE( x, z) = exp(-x- z_2^2/γ^2)/∑_z^'∈𝒵exp(-x- z^'_2^2/γ^2), also known as the SNE kernel <cit.>, which can be seen as an asymmetric extension of RBF , and conduct 10-fold cross-validation for the kernel parameter in the same range. Detailed experimental setups are provided in <Ref>. Results In <Ref> for node downstream classification, the results indicate consistent improvements over both SVD and KPCA, verifying the effectiveness of employing nonlinearity (to SVD) and asymmetric kernels (to KPCA). The graph reconstruction task reflects how well the extracted embeddings preserve the node connection structure. The adjacency matrix is reconstructed with the learned embeddings and then compared to the ground truth with ℓ_1, ℓ_2 norms. Asymmetric kernels greatly improve SVD, further illustrating the significance of using nonlinearity. KPCA achieves better performance than SVD, showing that considering the asymmetry alone, i.e., SVD, is not enough and nonlinearity is of great importance. Although DeepWalk, HOPE, and DiGAE are designed specifically for graphs, the simpler shows competitive performance, demonstrating great potential in representation learning for directed graphs. §.§ Biclustering Setups Biclustering simultaneously clusters samples and features of the data matrix, e.g., cluster documents and words. SVD has long been a common method by clustering rows and columns through right/left singular vectors. KPCA can be applied either to the rows or the columns at a time, due to its symmetry. We apply k-means to the extracted embeddings from SVD, KPCA, and . We also compare with the biclustering methods EBC <cit.>, based on ensemble, and the recently proposed BCOT <cit.>, based on optimal transport. In the considered benchmarks <cit.>, the rows relate to documents, where the NMI metric can be used. The columns relate to terms, where the Coherence index is used <cit.>. Other settings are as in <Ref> and we use a_1 for the compatibility matrix. Results In Table <ref>, outputs considerably better clustering compared to KPCA, which can only perform clustering on a single data view at a time. Despite the algorithm not being specialized for this task, it consistently achieves competitive or superior performance compared to BCOT and EBC, both specifically designed for biclustering. This experiment further emphasizes the significance of asymmetric feature learning and its potential to boost the performance of downstream tasks in applications. §.§ General Data Setups Since asymmetric kernels are more general than symmetric ones, the features learned with asymmetric kernels can help boost performance in generic feature extraction. 
We evaluate on general data from UCI <cit.>. First, we extract embeddings with kernel methods, and then apply a linear classifier/regressor and report results on test data (20% of the dataset). Besides SNE, we employ RBF and note that the resulting kernel matrix G in (<ref>) is still asymmetric, as the kernel is applied to two different sets 𝒳 and 𝒵, i.e. κ( x_i, z_j)≠κ( x_j, z_i). Data matrices are generally non-square, so we need the dimensionality compatibility C as in <Ref>. C is realized by A^† in previous work <cit.>; we denote this approach a_0. We compare a_0 with our proposed approaches a_1,a_2 in unsupervised settings, and with our a_3 with learnable C, optimized by SGD on the downstream task objective. Results In Table <ref>, maintains the best overall results with all alternatives a_0-a_3, showing promising potentials of applying asymmetric kernels on general data for downstream tasks. Under unsupervised setups, the alternatives a_1-a_2 for C all lead to comparable performance to the expensive pseudo-inverse a_0. For fair comparisons with learnable C, we also evaluate KPCA with optimized C, i.e., we use κ̂(C^⊤ x, C^⊤ x) in KPCA. With a_3, asymmetric kernels consistently outperform KPCA, while, for KPCA, a learnable C only provides marginally improved or comparable results. The matrix C can be viewed as a transformation for dimensionality compatibility providing additional degrees of freedom to learn enhanced embeddings. §.§ Asymmetric Nyström Method We evaluate the proposed asymmetric Nyström method against other standard solvers on problems of different sizes. We compare with three common SVD solvers: truncated SVD (TSVD) from the ARPACK library, the symmetric Nyström (Sym. Nys.) applied to GG^⊤ and G^⊤ G employing the Lanczos Method <cit.> for the SVD subproblems, and randomized SVD (RSVD) <cit.>. For all used solvers, we use the same stopping criterion based on achieving a target tolerance ε. The accuracy of a solution Ũ=[ũ_1,…,ũ_r], Ṽ=[ṽ_1,…,ṽ_r], is evaluated as the weighted average η = 1/r∑_i=1^r w_i ( 1 - | u_i^⊤ũ_i/ũ_i| ) + 1/r∑_i=1^r w_i ( 1 - | v_i^⊤ṽ_i/ṽ_i| ), with w_i=λ_i and U=[ u_1,…, u_r], V=[ v_1,…, v_r] the left and right singular vectors of G from its rank-r truncated SVD. The stopping criterion for all methods is thus η≤ε. This criterion is meaningful in feature learning tasks as the aim is to learn embeddings of the given data as scalar products with the singular vectors, rather than approximating the full kernel matrix. We use random subsampling for all Nyström methods and increase the number of subsamples m to achieve the target ε, where we use m=n as the kernel matrices are square; we employ the SNE kernel and set r=20. Table <ref> shows the algorithm running time at tolerance level ε=10^-1. We also show the speedup w.r.t. RSVD, i.e., t^(RSVD)/t^(Ours), where t^(RSVD), t^(Ours) denote the training time of RSVD and our asymmetric solver. Our solver shows to be the fastest and our improvement is more significant with larger problem sizes. In Appendix <ref>, we present the results at tolerance level ε=10^-2, also verifying our advantages. Further, we consider that a solver's performance may depend on the singular spectrum of the kernel. We vary the bandwidth γ of the SNE kernel on Cora to assess how the singular value decay of the kernel matrix affects performance, where an increased γ leads to spectra with faster decay, and vice versa. In Fig. <ref>, we vary γ and show the required subsamples m to achieve the given tolerance and the runtime speedup w.r.t. 
RSVD. Our method shows an overall speedup over RSVD, and our asymmetric Nyström method requires significantly fewer subsamples on matrices with faster singular spectrum decay, showing greater speedup in this scenario. In Fig. <ref>, the node classification F1 score (Macro) is reported for several values of subsamples m, where KSVD employs the asymmetric Nyström method and KPCA uses the symmetric Nyström on the same RBF kernel. The asymmetric method shows superior performance at all considered m without significant accuracy decrease due to the subsampling. Additional results are provided in <Ref>. § CONCLUSION This work presents a novel learning scheme for asymmetric learning in feature spaces. We establish that the solution to the coupled covariances eigenproblem (CCE) can be obtained by performing SVD on an asymmetric kernel matrix, providing a new perspective on KSVD grounded in covariance operators. In addition, the resulting computations can be sped up on large-scale problems, thanks to the formally derived asymmetric Nyström method. Numerical results show the potential of the retained asymmetry and nonlinearity realized in KSVD and the effectiveness of the developed asymmetric Nyström method. The insights and methodologies in this work pave the way for further exploration of asymmetric kernel methods in machine learning. § ACKNOWLEDGEMENTS This work is jointly supported by ERC Advanced Grant E-DUALITY (787960), iBOF project Tensor Tools for Taming the Curse (3E221427), KU Leuven Grant CoE PFV/10/002, and Grant FWO G0A4917N, EU H2020 ICT-48 Network TAILOR (Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization), and Leuven.AI Institute. This work was also supported by the Research Foundation Flanders (FWO) research projects G086518N, G086318N, and G0A0920N; Fonds de la Recherche Scientifique - FNRS and the Fonds Wetenschappelijk Onderzoek - Vlaanderen under EOS Project No. 30468160 (SeLMA). We thank the anonymous reviewers for constructive comments. § IMPACT STATEMENT This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. § FURTHER COMPARISONS WITH RELATED WORK §.§ KSVD and discussions with related work Our main interest in this work is to derive a new formulation for KSVD, to promote more insights into nonlinear feature learning with considerations of asymmetry. We start from a new asymmetric learning paradigm based on the coupled covariances eigenproblem (CCE) and show that the solution to CCE leads to the KSVD problem associated with a specific asymmetric similarity matrix that blends in two feature maps. Our formulation involves two covariance operators, allowing us to work with infinite-dimensional feature mappings with induced asymmetric kernels, aiming to provide a rigorous formalization equipped with interpretations w.r.t. both the covariance matrix and the kernel matrix. KSVD attains an asymmetric kernel matrix G that simultaneously couples two sets of mapping information, which is intrinsically different from KPCA. Through this work, we would also like to convey that although the solutions of PCA and KPCA can be computed numerically by the linear algebra tool of SVD, PCA is essentially different from SVD, and so is KPCA from KSVD. The solution of KSVD leads to (<ref>) in terms of an asymmetric kernel matrix G instead of the given data matrix A, and is therefore related to the compact SVD as a solution to (<ref>).
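To make this relation concrete, a small numerical sketch (an added illustration, not from the paper; the random data and kernel are placeholders) checks that the single SVD of the asymmetric matrix G solving KSVD/CCE recovers, up to sign, the same directions as the two separate eigendecompositions of the symmetrized matrices GG^⊤ and G^⊤G used in the 2KPCA view:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 6, 5, 3
X = rng.standard_normal((n, 4))   # samples in X (row view)
Z = rng.standard_normal((m, 4))   # samples in Z (column view), already dimension-compatible

# Asymmetric kernel matrix G_ij = kappa(x_i, z_j): an RBF form applied to two
# different sets, so G is rectangular and has no symmetry to exploit.
sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
G = np.exp(-sq / 2.0) / np.sqrt(n * m)

# KSVD / CCE route: one SVD of G gives both sets of directions at once.
U, S, Vt = np.linalg.svd(G, full_matrices=False)
B_phi, B_psi = U[:, :r], Vt[:r, :].T

# 2KPCA route: eigendecompose the two symmetrized kernels separately.
_, E1 = np.linalg.eigh(G @ G.T)
_, E2 = np.linalg.eigh(G.T @ G)
E1, E2 = E1[:, ::-1][:, :r], E2[:, ::-1][:, :r]

# Same subspaces up to sign: |cosine| of matched columns is ~1.
print(np.abs((B_phi * E1).sum(0)))  # ~[1., 1., 1.]
print(np.abs((B_psi * E2).sum(0)))  # ~[1., 1., 1.]
```

Although the numerics coincide, only the KSVD/CCE view exposes the two coupled covariance operators and the explicit directions needed for out-of-sample projections, as discussed above.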
<cit.> revisits the compact matrix SVD with a variational principle under the setups of least squares support vector machines (LSSVM), where the dual solution leads to a shifted eigenvalue problem regarding the given data matrix A. It focuses on the (linear) matrix SVD; although it mentions the possibility with nonlinearity by transforming A into some asymmetric kernel matrix of the same size, it cannot deal with infinite-dimensional feature spaces nor nor connect to the covariances, where it neither formalizes the derivations to the kernel trick, nor mentions possible applications with any experimental evaluations. In <cit.>, the asymmetric self-attention is remodelled for low-rank properties through the finite-dimensional feature mappings with neural networks.The queries and keys are regarded as two data sources and directly tackle the self-attention by applying the variational objective proposed in <cit.> as an auxiliary regularization loss into the optimization objective, which is iteratively minimized to approach zero and cannot provide the singular vectors nor singular values. In the early work of Schmidt <cit.>, the shifted eigenvalue problem is also discussed w.r.t. the integral equations regarding a pair of adjoint eigenfunctions in the continuous cases with function spaces. Hence, we can see that there can be multiple frameworks that can lead to a solution in the form resembling a shifted eigenvalue problem either on the given data matrix or an asymmetric kernel matrix as derived in , whereas different goals are pertained in the addressed scenarios and the methodologies are also varied with different optimization objectives and interpretations. Moreover, to get the terminology of clearer, we additionally discuss the differences to a few other existing works that share some similarities in naming the methodology. In <cit.>, it considers a new algorithm for SVD that incrementally estimates each set of robust singular values and vectors by replacing the Euclidean norm with the Gaussian norm in the objective. Different from kernel-based methods, <cit.> operates in the original space, not in the feature space, where the kernel is only used in the objective for the estimator and the data are not processed with any nonlinearity in the feature space. Despite the similarity in names, the tasks and methodologies in <cit.> and are intrinsically different. In <cit.>, it presents how to apply asymmetric kernels with LSSVMs for supervised classification with both input samples and their labels, and is derived with finite-dimensional feature spaces. In particular, unlike our construction with and , <cit.> can only consider a single data set under the context of its supervised task, exploring the supervised learning for the row data and possibly missing full exploitation of the asymmetry residing in the data. Accordingly, the asymmetry in <cit.> only comes from the choice of the asymmetric kernel function, while our asymmetry also comes from jointly handling two different sets. In <cit.>, KPCA is extended to tensor data to analyze the factors w.r.t. each mode of the tensor, where SVD is applied to solve the eigendecomposition of the KPCA problem in each mode and the left singular vectors (i.e., eigenvectors) are obtained as the nonlinear factor for each mode. <cit.> still only considers the symmetry in feature learning but extend it to higher-order tensors. 
Hence, the data processing, the kernel-based learning scheme, the optimization framework, and also the task are all different from the ones considered in the present work. §.§ Asymmetric method and related work §.§.§ Background The existing method starts from the numerical treatment of an integral equation with a symmetric kernel function κ̂(·, ·) such that λ u(x) = ∫_a^b κ̂( x, z)u ( x) d x, i.e., the continuous analogue to the eigenvalue problem, where the quadrature technique can be applied to formulate the discretized approximation <cit.>. Concerning the more general cases with multivariate inputs, the probability density function and the empirical average technique of finite sampling have been utilized to compute the approximated eigenfunctions that correspond to the eigenvectors <cit.>. To better illustrate the differences to the established asymmetric , we provide more details on the symmetric method for reference, based on the derivations from <cit.>. Given the i.i.d. samples { x_1, …, x_q } from the probability density p_x( x) over 𝒟_x, an empirical average is used to approximate the integral of the eigenfunction with a symmetrick kernel: λ_s u_s( x) = ∫_𝒟_xκ̂( x, z)u_s ( x)p_x( x) d x ≈1/q∑_i=1^q κ̂( x, x_i)u_s ( x_i), where u_s is said to be an eigenfunction of κ̂(·, ·) corresponding to the eigenvalues with λ_1≥λ_2 ≥…≥ 0. By running x in (<ref>) at { x_1, …, x_q }, an eigenvalue problem is motivated, such that G^(q)U^(q)=U^(q)Λ^(q), where G^(q)∈ℝ^q× q is the Gram matrix with G_ij^(q)=κ̂( x_i, x_j) for i,j =1, …, N, U^(q)=[ u^(q)_1, …, u^(q)_q] ∈ℝ^q× q is column orthonormal and the diagonal matrix Λ^(q)∈ℝ^q× q contains the eigenvalues such that λ_1^(q)≥…≥λ_q^(q)≥ 0. In this case, the approximation of eigenvalues and eigenfunction from the integral equation (<ref>) arrives at: λ_s ≈λ_s^(q)/q, u_s( x_i) ≈√(q) U^(q)_i,s, which can be plugged back to (<ref>), leading to the approximation to the i-th eigenfunction: u_s( x) ≈√(q)λ^(q)_s∑_i=1^q κ̂( x, x_i) U^(q)_i,s, with ∀ s: λ^(q)_s >0. With the technique in (<ref>), one can use different sampling sets to approximate the integral (<ref>). Thus, given a larger-scale Gram matrix G^(N)∈ℝ^N× N, for the first p eigenvalues and eigenfunctions, a subset of training data q≜ n< N can be utilized to attain their approximation at all N points for the kernel matrix G^(N) with (<ref>): λ̃^(N)_s ≜Nnλ_s^(n), ũ_s^(N)≜√(nN)1λ_s^(n) G_N,n u^(n)_s, where λ̃^(N)_s and ũ_s^(N) are the approximation of the eigenvalues and eigenvectos of G^(N). Here u^(n)_s are eigenvectors corresponding to the s-th eigenvalues λ_s^(n) of an n× n submatrix G_n,n and G_N,n is the submatrix by sampling n columns of G^(N). §.§.§ Discussions We provide the following remarks elaborating on the existing method w.r.t. the eigenvalue problem for Mercer kernels and our extended method w.r.t. the SVD problem for asymmetric kernels. * Integral equations. As shown in Section <ref> above, the existing method starts from a single integral equation with a symmetric kernel κ̂(·, ·), corresponding to an eigenvalue problem in the discretized scenarios <cit.>. Thus, the existing method is derived only for Mercer kernels with symmetry constraints on the tackled matrix. Differently, the proposed asymmetric method deals with an asymmetric kernel κ(·, ·) and starts from a pair of adjoint eigenfunctions, which jointly determine an SVD problem in the discretized scenarios <cit.>. In <cit.>, the matrix compression is discussed with -like methods to general matrices. 
However, the method in <cit.> is formulated to approximate subparts of the left and right singular vectors, and still applies the symmetric method to heuristically approximate the asymmetric submatrix twice for the corresponding subparts; <cit.> directly applies the symmetric method and resembles its formulas to approximate left and right singular vectors of general matrices, ignoring the asymmetry constraints. Rather than working with singular vectors, <cit.> utilizes the technique in <cit.> to the submatrix blocks to approximate a surrogate attention matrix in Transformers for computation efficiency. Hence, the analytical framework of the asymmetric method has not been formally formulated yet. In our paper, the explicit rationale of leveraging the technique is provided for the asymmetric matrices through the finite sample approximation to the pair of adjoint eigenfunction, which incorporates the asymmetry constraints on the tackled matrix, so that from analytical and practical aspects it becomes viable to directly apply the asymmetric method to the cases that pertain the asymmetric nature. * Special case with symmetry. In the derivations on the finite sample approximation, three scalings l_λ_s, l_u_s, and l_v_s are introduced to the singular values λ_s, right singular vectors u_s( x), and left singular vectors v_s( z) in Eq. (7) in Section 4 in the paper, for the considerations on their norms. Meanwhile, the constant coefficients in the two equations in Eq. (8) in the paper are required to be the same in scalings to proceed the derivations that match the SVD problem. In the existing symmetric method, the scaling issue of the approximated eigenfunction does not appear with l_λ_sλ l_u_s u( x) = ∫κ̂ ( x, z) l_u_s u( x)p_x( x) d x, as the scaling l_u_s is cancelled out in the two sides of this equation, i.e., the Eq. (<ref>) above. Thus, in (<ref>) it implicitly sets the scaling of the eigenvalue as l_λ_s=1 <cit.>, while in (<ref>) l_λ_s is set as 1/ N in the application of the method to speedup the eigenvalue problem on a larger Gram matrix G^(N). Note that, for feature learning, we only need to find the singular vectors in Eq. (10) or (11) in the paper, which are taken as embeddings of the given data for downstream tasks. The computation of the singular values can be omitted, so that we can simply implement the scaling through normalization in practice. The numerical computation of the approximated kernel matrix is also not necessary for the considered feature learning tasks. When considering the special case where the kernel matrix G in is square (N=M) and symmetric (G = G^⊤), the numbers of samplings to the rows and column are the same (n=m), and the scaling l_λ_s is set the same, the asymmetric method boils down to the existing method. * Another alternative derivation. We consider an asymmetric kernel function κ(x,y), and define the induced kernel operator and its adjoint by (Gg)(x) =𝔼_p_x(x)[ κ(x,Y)g(Y)], (G^*f)(y) =𝔼_p_y(y)[ κ(X,y)f(X)], for L^2-integrable functions f and g, where we denote the two datasets in the matrix form by arranging the samples row-wisely in X and Y, respectively. Then, the left and right s-th singular functions u_s(·) and v_s(·) of the kernel operator κ(x,y) satisfy (G^*u_s)(y) =λ_s v_s(y), (Gv_s)(x) =λ_s u_s(x). Given n samples x_1,…,x_n drawn from p_x(x) and m samples y_1,…,y_m drawn from p_y(y), the relations can be approximated as v_s(y) = 1/σ_s (G^*u_s)(y) ≈1/nλ_s∑_i=1^n κ(x_i,y)u_s(x_i), u_s(x) = 1/σ_s (Gv_s)(x) ≈1/mλ_s∑_j=1^m κ(x,y_j) v_s(y_j). 
As G= [κ(x_i, y_j)] =UΛ V^T ∈ℝ^n× m, we then scale the kernel matrix by 1/√(mn), and the left and right singular vectors by 1/√(n) and 1/√(m), respectively, yielding the approximated estimates of the pair of adjoint eigenfunctions: v^(n,m)_s(y) ≈1/nλ_s∑_i=1^n κ(x_i,y)U_is^(n,m), u_s^(n,m)(x) ≈1/mλ_s∑_j=1^m κ(x,y_j)V_js^(n,m), such that v_s(y) ≈1/nλ_s∑_i=1^n κ(x_i,y)u_s(x_i)⇒√(mn)/Nλ_s∑_i=1^n κ(x_i,y_j)√(n) U_is≈√(m) V_js⇒1/λ_s∑_i=1^n κ(x_i,y_j) U_is≈ V_js, u_s(x) ≈1/mλ_s∑_j=1^m κ(x,y_j)v_s(y_j) ⇒√(mn)/mλ_s∑_j=1^m κ(x_i,y_j)√(m) V_js≈√(N) U_is⇒1/λ_s∑_j=1^m κ(x_i,y_j) V_js≈ U_is, which indeed correspond to matrix SVD in (<ref>). Note that while this alternative above can also derive the asymmetric method, it is different from the techniques presented in Section <ref>. In contrast, the derivation in (<ref>) starts from the integral equations of the pair of adjoint eigenfunctions with asymmetric kernels. One of our goals is to align and compare w.r.t. the symmetric in <cit.>, which is widely adopted in machine learning, which views the approximation <cit.> originally from the integral equations with symmetric kernels, as presented in Section <ref> in the Appendix, where thorough comparisons on the connections and differences are discussed. § ADDITIONAL NUMERICAL RESULTS §.§ Additional ablations on To further study the effect of simultaneous nonlinearity and asymmetry in , we design the following experiment. We first make some non-linear encoding in a preprocessing step to the samples x_i (i.e., rows of the given data matrix A) and then compute SVD, and compare the downstream classification/regression results with SVD on the asymmetric kernel matrix. Specifically, we consider polynomial features with degree 2 of the samples x_i as φ(x_i) and then apply SVD to φ(A)=[φ(x_1), …, φ(x_N)]^⊤ as φ(A) = U_A Σ_A V_A^⊤ and use U_A as the learned embeddings. Correspondingly, employs the polynomial kernel of degree 2 k_poly(x,z)=(x^⊤ z+1)^2 and applies SVD to the asymmetric kernel matrix G_ij=k(x_i, z_j) and we use the singular vectors B_ϕ as the learned embeddings for fair comparisons. The embeddings are then fed to a linear classifier/regressor for the downstream classification/regression tasks as in <Ref> in the main paper. This experiment shows the additional benefit brought by the construction on row space and column space, as 𝒳, 𝒵 in our derivations, and with the asymmetric kernel trick, instead of simply applying SVD to a matrix which is attained by applying some nonlinear transformation to the rows of the data matrix A. In fact, our experiments show that is an effective tool to learn more informative embeddings when the given data physically present asymmetric similarities as in <Ref> in the main paper, and it also shows better performance for general datasets as experimented in <Ref> in the main paper. §.§ Additional results on the asymmetric method In <Ref> in the main paper, the node classification F1 score is reported for multiple number of subsamplings m, where (green line) employs the asymmetric method and KPCA (blue line) uses the symmetric , both employing the RBF kernel. Note that, as explained in the main paper, the resulting kernel matrix G in maintains the asymmetry even with the (symmetric) RBF function, as the kernel is applied to two different inputs, i.e., 𝒳 and 𝒵. Note that the data matrix is square, so we can set m=n for the subsamplings of the asymmetric . In addition, we provide the corresponding Micro F1 scores on Cora and also add the evaluations on Citeseer and Pubmed. 
The asymmetric KSVD-based kernel method shows superior performance at all considered m compared to KPCA, without a significant decrease in the accuracy of the solution due to the subsampling. In Table <ref>, we provide extended results of Table <ref> for the tolerance levels ε=10^-1 and 10^-2, showing the training time and the speedup w.r.t. RSVD, i.e. t^(RSVD)/t^(Ours), where t^(RSVD), t^(Ours) are the training times of RSVD and our asymmetric solver, respectively. Our solver remains the fastest among the compared solvers, and our improvement is more significant with larger problem sizes. We further experiment on large-scale datasets with millions of samples and features in Table <ref> below, showing the classification performance (AUROC) of KPCA/KSVD with RBF with subsampling m=1000, where N is the number of samples and M is the number of variables. We employ alternative a_2 for the compatibility matrix C. In Table <ref>, KSVD achieves the best performance also on real-world large datasets, further verifying its effectiveness and scalability. § EXPERIMENTAL DETAILS Details of the experimental setups are provided below. Experiments in <Ref> are implemented in MATLAB 2023b, and Python 3.7 is used in <Ref>. Experiments are run on a PC with an Intel i7-8700K and 64GB RAM, and experiments in <Ref> use a single NVIDIA GeForce RTX 2070 SUPER GPU. §.§ Feature learning experiments In the experiments, we conduct 10-fold cross validation for determining kernel hyperparameters with grid searches in the same range for fair comparisons. The employed nonlinear kernels in the experiments are κ̂_RBF( x, z) = exp(-‖ x - z‖_2^2/γ^2) and κ_SNE( x, z) = exp(-‖ x - z‖_2^2/γ^2)/∑_z'∈𝒵exp(-‖ x - z'‖_2^2/γ^2) with hyperparameter γ. In the node classification experiments, we denote A =: X=[ x_1, …, x_N]^⊤ as the asymmetric adjacency matrix with X_ij as the directed similarity between node i and node j. KPCA is conducted for feature extraction in the following way: we compute the symmetric kernel matrix Ĝ s.t. Ĝ_ij=k̂( x_i, x_j), with (symmetric) RBF kernel k̂, and its top 1000 eigenvectors are taken as the extracted features and fed to the LSSVM classifier, following <cit.>. PCA is conducted similarly by taking the linear kernel k̂( x_i, x_j)= x_i^⊤ x_j. For all methods, we employ an LSSVM classifier with the regularization parameter set to 1 and we utilize the one-vs-rest scheme. We use the original implementations of the authors for all baselines and the best parameters reported in their papers. Graph reconstruction is a typical task in node representation learning and is helpful to evaluate how well the learned representations preserve neighborhood information in embedding space. Graph reconstruction recovers all existing edges by rebuilding the full adjacency matrix from the embedding space. In this task, with the feature embeddings extracted by all tested methods, we recover the matrix that reflects the edges between nodes and then the connections between each node.
For a given node v with out-degree k_v, the k_v nodes closest to v in feature space are searched to reconstruct the adjacency matrix. The ℓ_1 and ℓ_2 norms between X and its reconstruction are evaluated. In biclustering tasks, the closely related baseline methods, i.e., SVD and KPCA, are compared with the asymmetric method, where the kernel setup is the same as above. Specifically, we apply SVD and the asymmetric method on the data matrix to attain the left and right singular vectors, and then k-means is adopted for performing the biclustering task with the extracted features, where we use scikit-learn in Python to implement k-means. Note that KPCA only works with symmetric kernels, so it is applied twice. We also compare with the ensemble-based biclustering method EBC <cit.> and the recently proposed optimal-transport-based BCOT method <cit.>. We follow the data setups and evaluations in <cit.> with official sources in <https://github.com/chakib401/BCOT>: the rows relate to the clustering of documents, where the ground truth can be compared through the popular clustering metric NMI; the columns relate to the clustering of terms, where the Coherence index is used <cit.>. For the compatibility matrix C, we use alternative a_1. The results of BCOT are taken from its original paper <cit.>, and EBC is run with its official code provided in <https://github.com/blpercha/ebc> with threshold 10^-4. Descriptions of the tested datasets are provided in Table <ref> and Table <ref>. §.§ experiments In this part, we evaluate the efficiency of the proposed asymmetric method with comparisons to other standard solvers. The accuracy of a solution Ũ=[ũ_1,…,ũ_r], Ṽ=[ṽ_1,…,ṽ_r], is evaluated as the weighted average η = 1/r∑_i=1^r w_i ( 1 - | u_i^⊤ũ_i|/‖ũ_i‖ ) + 1/r∑_i=1^r w_i ( 1 - | v_i^⊤ṽ_i|/‖ṽ_i‖ ), with w_i=λ_i, where r is the rank of the low-rank approximation, and U=[ u_1,…, u_r], V=[ v_1,…, v_r] are the left and right singular vectors of G from its rank-r compact SVD with singular values λ_1 ≥…≥λ_r. We compare our method with three common SVD solvers: truncated SVD (SVD) from the ARPACK library, the symmetric method <cit.> applied to GG^⊤ and G^⊤ G, and randomized SVD (RSVD) <cit.>. We employ the Lanczos method at rank r <cit.> for the SVD subproblem of the symmetric method, and we employ RSVD at rank r for the SVD subproblem of the asymmetric method. Truncated SVD is run to machine precision for comparison. For a given tolerance ε, we stop training when η < ε, with η being the accuracy of a solution. In <Ref> in the paper, we evaluate multiple tolerances, i.e., ε=10^-1, 10^-2. In particular, for RSVD, we increase the number of oversamples until the target tolerance is reached. For the subsampling-based methods, we increase the number of subsamples m until the target tolerance is reached. We use random subsampling for all methods. The tolerance used in <Ref> is ε=10^-2. In <Ref>, the SNE kernel bandwidth is set as γ=k√(Mγ_x), with γ_x the variance of the training data and a data-dependent k (k=1 for Cora and Citeseer, k=0.5 for Pubmed); e.g., for Cora γ_x=0.0002 and γ=k√(Mγ_x)≈ 0.74. This gives an indication of the scaling w.r.t. γ in <Ref>. In <Ref>, we consider that a solver's performance may depend on the singular spectrum of the kernel matrix, so we vary γ as shown on the horizontal axis of <Ref>, where an increased γ leads to spectra with faster decay, and assess the training time. 
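As a concrete reference for the stopping criterion used in these timing comparisons, below is a minimal sketch of the accuracy measure η; the unit-norm reference singular vectors and all variable names are our own assumptions.

```python
import numpy as np

def solution_accuracy(U, V, lam, U_tilde, V_tilde):
    """eta: weighted average misalignment between an approximate solution
    (U_tilde, V_tilde) and the reference rank-r singular vectors (U, V)."""
    r = U.shape[1]
    w = lam[:r]                                   # weights w_i = lambda_i
    left = 1.0 - np.abs((U * U_tilde).sum(0)) / np.linalg.norm(U_tilde, axis=0)
    right = 1.0 - np.abs((V * V_tilde).sum(0)) / np.linalg.norm(V_tilde, axis=0)
    return (w * left).sum() / r + (w * right).sum() / r

# Usage: keep increasing a solver's oversamples/subsamples and stop once
# solution_accuracy(...) drops below the target tolerance epsilon.
rng = np.random.default_rng(0)
G = rng.normal(size=(300, 200))
U, lam, Vt = np.linalg.svd(G, full_matrices=False)
r = 10
eta = solution_accuracy(U[:, :r], Vt.T[:, :r], lam,
                        U[:, :r] + 1e-3 * rng.normal(size=(300, r)),
                        Vt.T[:, :r] + 1e-3 * rng.normal(size=(200, r)))
print(eta < 1e-1)   # True: this perturbed solution meets the 1e-1 tolerance
```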
Our approach shows overall speedup compared to RSVD, and our asymmetric requires significantly fewer subsamples on the matrices with faster decay of the singular spectrum, showing greater speedup w.r.t. RSVD in this scenario. In the experiments of Fig. <ref> in this Appendix and of <Ref> in the main body, we compare the node classification performance of KPCA using symmetric against using our proposed asymmetric . We use the RBF kernel for both KPCA and , with γ tuned via 10-fold cross validation. Note that achieves higher performance at all considered subsamplings m, even if both methods use the RBF kernel. Similarly, even when symmetric kernel functions are chosen, the resulting G matrix in the solution w.r.t. (<ref>) in the paper still maintains the asymmetry, as the two inputs of the kernel is applied to 𝒳 and 𝒵, respectively. §.§ General data experiments In the experiments on general datasets, we consider three common classification datasets, including Diabetes of size 768, Ionosphere of size 351, Liver of size 583, and three commonly used regssion datasets, including Cholesterol of size 303, Yacht of size 308, and Physicochemical-protein of size 45730. Note that, though only the embeddings for samples are needed in prediction, i.e., the right singular vectors in and the eigenvectors in KPCA, the embeddings by are learned on an asymmetric kernel with two feature maps, while in KPCA they are learned with a symmetric kernel relating to a single feature map. To implement a learnable C matrix in , i.e., the alternative a_3 in Remark 3.2 in the paper, we utilize the backpropagation learning scheme with stochastic gradient descent (SGD) based optimizers for minimizing the loss in the downstream tasks. Correspondingly, we set C matrix as learnable parameters that can be backpropagated and optimized by SGD-based optimizer in an end-to-end manner. To make C learnable, we set GV as the learned features on the data samples to the downstream classifier/regressor, where V is chosen as the top-4 right singular vectors of G. In this manner, gradient can be backpropagated, where V is alternatively updated through the SVD on G. To be specific, we adopt an iterative training scheme for conducting SVD on the asymmetric kernel matrix G and updating other parameters: i) for input X and Z, which is given as Z:=X^⊤ C in this case, we compute the asymmetric kernel matrix G:=[κ( x, z)], x∈ X, z∈ Z, and then conduct SVD on G to obtain V s.t. GV=UΛ. ii) As C can only be backpropagated through G, we detach the gradient of V computed in previous step and fix it, we then forward X, Z to update G and send the projected features of samples from , i.e., GV, to the classification or regression head with the computed loss (cross-entropy loss or the mean squared error loss), and update all the parameters except V. In other experiments using KPCA or fixed C, i.e., a_0, a_1, a_2, we also train these methods with SGD-based optimizers, which makes our comparable to the learnable C case in a_3 for fair and consistent evaluations. Here, the difference lies in that we only need to update the classification/regression head, as the projected features of all samples (GV) is fixed with the given input data. We adopt SGD as the optimizer for the linear classification or regression head, where the learning rate is set to 10^-3 for all experiments except Cholesterol (10^-1) and Physicochemical-protein (10^-4). We choose the first 4 right singular vectors, i.e., GV_[:,:4], to feed forward to the classification or regression head. 
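A minimal PyTorch-style sketch of this alternating scheme (SVD for V with gradients detached, then an SGD step on C and the prediction head through GV) is given below; the RBF kernel, the shapes chosen for C and Z, and the toy labels are our own assumptions and may differ from the exact implementation.

```python
import torch

def rbf_kernel(X, Z, gamma):
    # Asymmetric kernel matrix G_ij = kappa(x_i, z_j); kernel choice is assumed.
    return torch.exp(-torch.cdist(X, Z).pow(2) / gamma**2)

N, M, r, gamma = 256, 64, 4, 8.0
X = torch.randn(N, M)                                 # data samples as rows
C = torch.nn.Parameter(0.01 * torch.randn(N, M))      # learnable compatibility matrix (a_3)
head = torch.nn.Linear(r, 2)                          # downstream classification head
opt = torch.optim.SGD([C, *head.parameters()], lr=1e-3)
y = torch.randint(0, 2, (N,))

for step in range(100):
    # i) form Z := X^T C, build G, and take its top-r right singular vectors V.
    Z = X.T @ C                                       # one z_j per column (feature) of the data matrix
    G = rbf_kernel(X, Z, gamma)                       # N x M, differentiable w.r.t. C
    with torch.no_grad():                             # V is computed by SVD and kept fixed
        V = torch.linalg.svd(G, full_matrices=False).Vh[:r].T
    # ii) forward the projected features GV to the head; update C and the head, not V.
    loss = torch.nn.functional.cross_entropy(head(G @ V), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```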
When RBF kernel is used, γ^2 is selected as 1e7 in most cases except for Physicochemical-protein dataset, which is with 1e6. When SNE kernel is used, γ^2 is selected as 1e5 in most cases except for Ionosphere dataset with 1e6, Liver dataset with 1e4. Moreover, since Physicochemical-protein is a larger dataset, we utilize batch-training mode where we fix the batch size to be 500. All experiments are run for 2000 iterations. § ALGORITHM FOR C Algorithm <ref> details the realization of the compatibility matrix discussed in <Ref> in the main paper. Below, we consider the case M > N, where we construct the projection matrix C_x ∈ℝ^M × N such that XC_x ∈ℝ^N × N. If N > M, we rather construct C_z ∈ℝ^N × M such that ZC_z ∈ℝ^M × M. The construction of C_z mirrors the algorithm for C_x with the appropriate changes. In the case of square matrix with N=M, C=I_N, with I_N the identity matrix of size N × N. § PROOF OF PROPOSITION 2.2 Let B_ϕ∈^n × r. [Γ_ψΓ_ϕ^* B_ϕ]_jl = 1/√(m)⟨ψ(z_j) , 1/√(n)∑_i=1^n b^ϕ_ilϕ(x_i) ⟩ = ∑_i=1^n 1/√(nm)⟨ϕ(x_i), ψ(z_j) ⟩ b^ϕ_il = [G^⊤ B_ϕ]_jl The proof for Γ_ϕΓ_ψ^* is similar. § PROOF OF PROPOSITION 3.3 Apply on the left respectively Γ_ϕ and Γ_ψ to both equations from <Ref> combined with Proposition 3.2. § PROOF OF PROPOSITION 3.4 Perform the substitution of the proposed W_ϕ, W_ψ in the CCE problem with the knowledge that B^svd_ϕ, B^svd_ψ come from the SVD of G.
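Finally, as a quick numerical sanity check of Proposition 2.2 above and of the 1/√(nm) scaling of G, the following sketch verifies Γ_ψΓ_ϕ^* B_ϕ = G^⊤ B_ϕ for random finite-dimensional feature maps; the dimensions and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d, r = 7, 5, 4, 3
Phi = rng.normal(size=(n, d))          # rows are phi(x_i)
Psi = rng.normal(size=(m, d))          # rows are psi(z_j)
B_phi = rng.normal(size=(n, r))

# G_ij = (1 / sqrt(n m)) <phi(x_i), psi(z_j)>
G = Phi @ Psi.T / np.sqrt(n * m)

# Gamma_phi^* B_phi has columns (1/sqrt(n)) sum_i b_il phi(x_i);
# applying Gamma_psi then takes entries (1/sqrt(m)) <psi(z_j), .>.
lhs = Psi @ (Phi.T @ B_phi / np.sqrt(n)) / np.sqrt(m)
rhs = G.T @ B_phi

print(np.allclose(lhs, rhs))           # True: the identity holds numerically
```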
http://arxiv.org/abs/2406.08354v1
20240612160016
DocSynthv2: A Practical Autoregressive Modeling for Document Generation
[ "Sanket Biswas", "Rajiv Jain", "Vlad I. Morariu", "Jiuxiang Gu", "Puneet Mathur", "Curtis Wigington", "Tong Sun", "Josep Lladós" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
DocSynthv2: A Practical Autoregressive Modeling for Document Generation Sanket Biswas, Rajiv Jain, Vlad I. Morariu, Jiuxiang Gu, Puneet Mathur, Curtis Wigington, Tong Sun, Josep Lladós June 12, 2024 ===================================================================================================================================================================================================== § ABSTRACT While the generation of document layouts has been extensively explored, comprehensive document generation—encompassing both layout and content—presents a more complex challenge. This paper delves into this advanced domain, proposing a novel approach called DocSynthv2 through the development of a simple yet effective autoregressive structured model. Our model, distinct in its integration of both layout and textual cues, marks a step beyond existing layout-generation approaches. By focusing on the relationship between the structural elements and the textual content within documents, we aim to generate cohesive and contextually relevant documents without any reliance on visual components. Through experimental studies on our curated benchmark for the new task, we demonstrate that combining layout and textual information enhances the generation quality and relevance of documents, opening new pathways for research in document creation and automated design. Our findings emphasize the effectiveness of autoregressive models in handling complex document generation tasks. § INTRODUCTION Recent advancements in generative models <cit.> have made significant impacts on language, image, and multimodal content generation. There is an increasing focus on vector graphic document generation <cit.> within this realm, where these models support users in creating, modifying, publishing, and designing both business and artistic documents. Documents differ from standard natural images as they contain structured layers of text and media content. The field of document generation presents unique challenges in seamlessly integrating visual elements such as style, layout, and multimedia with textual content, posing new problems for the vision community. Document layout generation <cit.> has played a crucial role in numerous applications, ranging from automated report creation to dynamic webpage design, significantly impacting how information is perceived and interacted with by users. With large language models (LLMs) <cit.> becoming increasingly capable of compositional reasoning over visual concepts <cit.>, further avenues open for exploiting autoregressive approaches in the automatic end-to-end generation of both document content and layout structure. Moreover, synthetic document generation <cit.> has gained attention in recent times owing to the lack of multi-domain, large-scale layout-annotated datasets necessary for document pre-training <cit.>. However, end-to-end pixel-based approaches <cit.> suffer from low-resolution generated outputs from which the textual content can hardly be extracted. In this work, we introduce DocSynthv2 to seamlessly generate layout structure with integrated text, essential to convey specific information and context, completing the communication objective of the document. 
This work contributes to document generation research in three different folds: 1) We curate a large-scale extended benchmark called PubGenNet tailor-made for document generation and completion task. 2) We introduce a simple and flexible autoregressive approach for generating high-resolution document outputs, capable of handling sequences of arbitrary lengths. 3) We outline future challenges and opportunities in evaluating document generation, setting the stage for advancements in this evolving field. § RELATED WORK Document Layout Generation Recently, there has been a surge in research on layout generation. Foundational works like LayoutGAN <cit.> and LayoutVAE <cit.> have been influential in synthesizing layouts by modeling geometric relations of different 2D elements and then rendering them in the image space. Document layout generation has received extensive interest in recent years owing to its integration in tasks such as content generation <cit.> and graphic web designs <cit.>. While  <cit.> attempts to generate document layouts given user-conditioned prompts (eg. input reference image, keywords, and category of the document),  <cit.> proposed an approach to construct hierarchies of document layouts and later sample and generate them using a recursive VAE. The aforementioned method was further extended using graph autoencoder networks <cit.> with optional design constraints for further improvement. Gupta et. al. <cit.> proposed with self-attention <cit.> which is the most relevant to this work. They used a next element prediction objective (i.e. layout completion) using a transformer architecture in an autoregressive manner to produce layout tokens, including class labels and bounding boxes of document objects.  <cit.> tried to combine these generative transformers <cit.> with VAE's to learn better layout refinement and prediction. Synthetic Document Generation The Computer Vision community has also captured emerging interest to generate synthetic realistic scene images with plausible layouts from a user provided reference layout <cit.>, emphasizing particularly on high-resolution image outputs. The DocSynth framework <cit.> introduced the first image-to-image translation pipeline for creating synthetic document image datasets for augmenting real data during training for document layout analysis tasks <cit.>. In this work, we move a step forward towards generating synthetic data with content preservation. § METHOD In this section, we introduce our proposed approach for the document generation task. We first discuss our representation of document elements essential for model understanding. Next, we discuss the DocSynthv2 framework and show how we can leverage the knowledge of both layout elements and their corresponding content to model the probability distribution of an overall page structure. Lastly, we discuss the learning objectives we have used to train the whole network. §.§ Overview Document Representation The document layout of a page can comprise multiple sets of elements, where each element can be described by its category c, left and top coordinate x and y, as well as width w and height h. The continuous attributes x, y, w and h are often quantized, which has proven to be useful for graphic layout generation approaches <cit.>. Following the FlexDM approach <cit.>, we represent document 𝒟 as a vector consisting of a tuple of layout components (D_1, D_2, …, D_S), where S is the number of elements in 𝒟. 
Each element D_i={d_i^k | k ∈ℰ} can represent either element type , position, style attributes, or raw text content where k represents the indices of the attributes. Contrary to FlexDM <cit.>, we do not use any embeddings in the input sequence but rather use only the element's layout information or its content attributes. We concatenate the layout information along with the text attribute tokens for every element as shown in Equation <ref>. Here, N represents the total number of elements, while ⟨sos⟩ and ⟨ eos ⟩ are special tokens which denote the start and end of a sequence. Also, a special token appears when d_i^k is inevitably missing (e.g., font type for a non-text element), or padding variable-length sequences when training a mini-batch. 𝒟={⟨sos⟩ c_1 x_1 y_1 w_1 h_1 t_1 … c_N x_N y_N w_N h_N t_N⟨eos⟩} Representing with Discrete Variables Following LayoutTransformer <cit.>, we applied an 8-bit uniform quantization on every document element (image region or text) and modelled them using Categorical distribution. We note that while converting coordinates into discrete values leads to some loss of precision, this approach enables the modeling of multiple kinds of distributions, which is crucial for document layouts. Every document object (text or non-text) is projected to the same dimension such that we can concatenate every element (c_N, x_N, y_N, w_N, h_N, t_N) in a single linear sequence of their element values. The overall structure of a page can then be represented by a sequence of m latent vectors where m is decided by the total number of tokens encoded in the input sequence S. For conciseness, we use θ_j, j ∈{1, …, m} to represent any document element in the above sequence. We model this joint distribution as a product over a series of conditional distributions using the chain rule as shown in Equation <ref>. p(θ_1: m)=∏_j=1^m p(θ_j |θ_1: j-1) §.§ Model Architecture is a document generation transformer pre-trained on document datasets containing multiple elements with a combined set of layout and text attributes. The model learns neural representations of document data, capturing both physical and logical relationships of the document elements with the previously predicted element. Our overall architecture of DocSynthv2 is shown in Figure <ref>. Training: Given an initial set of T visible tokens as input containing attributes representing: 1) Layout Category (eg. Table, Table Cell, Paragraph, Title, Caption etc.) 2) Position 3) Font Style 4) Text Content, the model tries to predict the next element with an autoregressive GPT-2 Transformer decoder <cit.>. Each of these GPT blocks consists of a masked multi-head attention (MHA) and a feedforward network (FFN) as shown. The output at the final layer corresponds to the next parameter. Inference: During inference, both the position and text tokens are synthesized auto-regressively for the fixed category token (i.e. a reference layout you would like to generate). During both training and inference, the ground-truth sequences have been used to train the model more efficiently as done in  <cit.>. Losses: Since the model has both continuous and discrete sets of parameters as already discussed, we use a variational loss to minimize KL-Divergence between the softmax predictions for all discrete parameters as in  <cit.>. 
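To make the serialization above concrete, the sketch below flattens a toy document into one discrete sequence with 8-bit uniform quantization of the geometric attributes; the special tokens, attribute naming, and whitespace tokenizer are illustrative assumptions rather than the exact vocabulary used.

```python
SOS, EOS, PAD = "<sos>", "<eos>", "<pad>"

def quantize(value, num_bins=256):
    # 8-bit uniform quantization of a coordinate normalized to [0, 1].
    return min(int(value * num_bins), num_bins - 1)

def flatten_document(elements, tokenizer, max_text_tokens=16):
    """elements: list of dicts with 'category', 'bbox' = (x, y, w, h) in [0, 1],
    and optional 'text'; returns the flat sequence c, x, y, w, h, t per element."""
    seq = [SOS]
    for el in elements:
        seq.append(f"<cat:{el['category']}>")
        seq += [f"<{k}:{quantize(v)}>" for k, v in zip("xywh", el["bbox"])]
        text = el.get("text")
        # Non-text elements receive a placeholder token in the text slot.
        seq += tokenizer(text)[:max_text_tokens] if text else [PAD]
    seq.append(EOS)
    return seq

doc = [
    {"category": "title", "bbox": (0.10, 0.05, 0.80, 0.06), "text": "A Study of X"},
    {"category": "figure", "bbox": (0.10, 0.15, 0.80, 0.40)},
]
print(flatten_document(doc, tokenizer=str.split))
```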
§ EXPERIMENTS §.§ Datasets Our evaluation of DocSynthv2 primarily utilizes two vector graphic document datasets, Crello <cit.> and PubGenNet, our curated version of PubLayNet <cit.> streamlined for the task of document generation. Crello: Originating from an online design platform, this dataset encompasses a broad range of design templates, including but not limited to social media posts, banner ads, blog headers, and printed materials. We use an experimental setting similar to that of FlexDM <cit.>. The dataset released by the authors is partitioned into 18,738 training instances, 2,313 for validation, and 2,271 for testing. Detailed definitions of each attribute can be found in the original paper <cit.>. PubGenNet: For experimental validation, we generated a new benchmark called "PubGenNet," a large-scale extended dataset curated to advance the field of document generation. This dataset was assembled by extracting a diverse array of samples from the original PubLayNet dataset <cit.>, which itself is derived from an extensive collection of scientific publications available in PubMed Central. To ensure a comprehensive set of text attributes (e.g., font type) along with raw textual content, we utilized a PDF extraction procedure based on the PyMuPDF library, which enables us to align the extracted data with the original COCO annotations. In summary, the overall curation process involved extracting and processing layout and text data from a set of documents represented in the PubLayNet format. After obtaining the document-specific attributes, the processed data was compiled into a structured dataset suitable for training and evaluating document generation models. The resulting training and validation instances are similar to the dataset statistics of PubLayNet, with 335,703 document samples for training and 11,245 instances for validation. §.§ Tasks The primary motivations for our model are to address the key aspects of document design and generation. We have selected the evaluation tasks based on: (1) creating a new document or completing a partially finished one, focusing on maintaining coherence, appearance, and relevance to the intended content; and (2) testing the model's ability in layout design, specifically its understanding of spacing, alignment, and the interplay between text and other elements. Document Completion: This task requires the model to analyze the current layout elements and content within the document (e.g., text, title, tables, figures, etc.) and logically predict what elements should follow to maintain the coherence and plausible structure of a document. Single and Multiple Text Box Placement: Framed as next-element prediction, this task requires the model to identify optimal locations and sizes for text boxes within a document, based on the existing layout and design principles. It assesses the model's capability to seamlessly incorporate new text elements, ensuring they align with the document's structure and visual appeal. §.§ Quantitative Evaluation Table <ref> summarizes the performance comparison of DocSynthv2 against existing SOTA transformer decoder-only models. Our full model (with text attributes) gives a boost in performance over the layout-only model, demonstrating that utilizing the raw text can help guide models for layout generation when available. Although our model is a lightweight decoder-only architecture, it performs on par with LayoutFormer++ <cit.>, which is an encoder-decoder-based transformer. 
Our results with high Alignment and Overlap scores also suggest that layout generation and completion models gain substantial improvements when trained on sequences integrating textual content. In Table <ref>, we summarize the performance of Single and Multiple Text Box Placement on the Crello dataset. The results show that the model does worse for text placement in the Single Text Box condition, likely due to weaker multimodal features compared to <cit.>. However, it performs on par for IoU and outperforms for BDE in the Multiple condition, which may be due to the raw text in our model. §.§ Qualitative Evaluation Figure <ref> shows examples of our model applied to text synthesis and document completion on the Crello and PubGenNet datasets. In the Crello text prediction example, it can be seen that the text is aligned with the layout, showing a plausible flyer title for the heading section followed by an address and date in the sub-text fields. For the Document Completion task, we have the model generate the text within an existing table structure. The filled text maintains coherence across the two table columns, filling them with author names and reference information on the left and body text on the right. In this example, the text coherence could likely be further improved by LLMs. § FUTURE SCOPE AND CHALLENGES In conclusion, DocSynthv2 demonstrates that integrating text with layout sequences into an autoregressive framework enriches the data representation and provides additional context, leading to improved stability and performance in generating coherent and contextually appropriate document content, and it motivates future work. First, the integration of layout and text needs to advance beyond current capabilities to address the diversity of document styles and industry-specific standards. We believe future work may benefit from visual-language models <cit.> that can understand multimodal content or code generation models <cit.> that can learn complex structure from a wide array of document formats and content types. We also believe the evaluation of document generation systems remains a critical challenge. There is a pressing need for evaluation frameworks which can effectively measure the usefulness of generated documents in terms of both their visual layout and textual content. These frameworks must encompass metrics that evaluate coherence, relevance, readability, and visual appeal, reflecting the multi-functional nature of documents. § ACKNOWLEDGEMENT The resources and support from the Adobe Document Intelligence Lab (DIL) team were instrumental in the successful completion of this project. Special thanks to Ani Nenkova, Joe Barrow, Varun Manjunatha and Chris Tensmeyer, whose guidance and expertise were invaluable throughout the internship. Additionally, Sanket Biswas expresses his gratitude to Nora Graichen for her constant assistance and perceptive criticism, particularly during the last phases of submission.
http://arxiv.org/abs/2406.09179v1
20240613144100
Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning
[ "Qizhou Wang", "Bo Han", "Puning Yang", "Jianing Zhu", "Tongliang Liu", "Masashi Sugiyama" ]
cs.LG
[ "cs.LG" ]
Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning Qizhou Wang, Bo Han, Puning Yang, Jianing Zhu, Tongliang Liu, Masashi Sugiyama June 13, 2024 ======================================================================================================== § ABSTRACT The compelling goal of eradicating undesirable data behaviors, while preserving usual model functioning, underscores the significance of machine unlearning within the domain of large language models (LLMs). Recent research has begun to approach LLM unlearning via gradient ascent (GA)—increasing the prediction risk for those training strings targeted to be unlearned, thereby erasing their parameterized responses. Despite their simplicity and efficiency, we suggest that GA-based methods are prone to excessive unlearning, resulting in various undesirable model behaviors, such as catastrophic forgetting, that diminish their practical utility. In this paper, we suggest a set of metrics that can capture multiple facets of real-world utility and propose several controlling methods that can regulate the extent of excessive unlearning. Accordingly, we suggest a general framework to better reflect the practical efficacy of various unlearning methods—we begin by controlling the unlearning procedures/unlearned models such that no excessive unlearning occurs, and then evaluate the unlearning efficacy. Our experimental analysis on established benchmarks reveals that GA-based methods are far from perfect in practice, as strong unlearning comes at the high cost of hindering model utility. We conclude that there is still a long way to go towards practical and effective LLM unlearning, and more efforts are required in this field. § INTRODUCTION Large language models (LLMs), like Llama <cit.> and GPT <cit.>, have exhibited remarkable proficiency in general-purpose language generation and understanding <cit.>. These advancements are largely credited to the development of Transformer-based architectures <cit.> with billions of parameters and to the extensive pre-training on web-sourced corpora with trillions of tokens <cit.>. However, on the other side, scaling up models aggravates the risk of memorization effects <cit.>, and sourcing from the web makes LLMs inherit its inaccuracies and biases <cit.>. This raises pressing concerns about LLM privacy and fidelity, posing a long array of undesirable LLM behaviors sourced from training corpora <cit.>, including copyright <cit.>, hallucination <cit.>, fairness <cit.>, and toxicity <cit.>. How to Erase Undesirable Data Behaviors in LLMs? Machine unlearning <cit.> offers a general solution. As highlighted by <cit.>, machine unlearning is a cost-effective alternative to more demanding methods such as re-training from scratch and reinforcement learning from human feedback <cit.>. In the context of LLMs, the general goal of unlearning is to precisely remove the parameterized knowledge related to unlearning targets while maintaining model performance on non-targets <cit.>. The unlearning targets can range from individual data entries to broader knowledge such as harmful contents <cit.>. Regardless of varying scopes, the forgetting targets in LLMs are typically characterized by an unlearning set 𝒟_u={s_u=[x,y_u]}_n_u of size n_u <cit.>. Then, we need to develop methods upon 𝒟_u that meet the general goal of LLM unlearning. 
Currently, LLM unlearning has been recognized as particularly suitable for removing privacy and copyright contents <cit.>. It may also serve as a promising intermediate process for editing harmful behaviors <cit.>, potentially addressing their sensitivities to the changing prompts. Inheriting advances made in discriminative tasks, gradient ascent (GA) <cit.> has recently established as a foundational baseline for LLM unlearning. The modern usage of GA in LLMs <cit.> stems from the mathematical modeling regarding the goal of LLM unlearning—unlearning data targeted to be erased meanwhile maintaining the original responses for non-targeted ones—named gradient difference (GD). Its learning objective is given by -𝔼_s_u∼𝒟_uℓ(y_u|x;θ)_unlearning risk+𝔼_s∼𝒟_t\𝒟_uℓ(y|x;θ)_retaining risk, where ℓ(y|x;θ)=-log p(y|x;θ) is the prediction loss and 𝒟_t is the full training corpora. Eq. (<ref>) composes of two objectives, namely, the unlearning risk and the retaining risk. The unlearning risk increases the prediction losses for undesirable responses y_u of targeted data, aligning with gradient ascent when updating LLMs in the first order. However, employing the unlearning risk alone results in unintended changes of responses for non-targeted ones. To prevent this, the retaining risk is further adopted to maintain the original model functioning, ensuring that responses for non-targeted data remain unchanged with a small prediction risk. Generally, GD employs Eq. (<ref>) at the post-training phase, adapting LLM parameters to impose the unlearning of parameterized knowledge within 𝒟_u. Nevertheless, GD is susceptible to excessive unlearning, wherein the unlearning risk after GD will escalate exceedingly, largely surpassing the extent necessary for effective unlearning. Excessive unlearning undermines the practical utility, as it can induce catastrophic forgetting <cit.>, manifesting degraded model performance on non-targeted data, as well as extreme confabulation, indicating largely distorted responses on targeted data (cf. Table <ref>). Sadly, in the literature of LLMs, the scenario of excessive unlearning has not been fully discussed. This oversight may stem primarily from the inadequacy of existing unlearning evaluation metrics, which often fail to encompass diverse criteria (cf. Section <ref>) that should be simultaneously met after unlearning. Therefore, in Section <ref>, we suggest a set of metrics that can properly characterize LLMs. These metrics focus on parameterization strengths <cit.> from multiple facets, allowing us to quantify the drawbacks of current unlearning methods in a general and actionable manner. We further assert that the evaluations of LLM unlearning should be conducted under the prerequisite that no excessive unlearning occurs. Hence, to comprehend the real-world efficacy of existing unlearning methods, it is essential to first control/calibrate the extent of excessive unlearning such that the common model functioning is not critically affected. To this end, we propose several controlling methods to modulate the extent of unlearning in Section <ref>. With proper controlling, one can then fairly evaluate and compare the true power of various LLM unlearning methods. Based on such an evaluation framework that considers the real-world utility, we conducted experiments on established benchmarks of TOFU <cit.>, studying the elimination of privacy-sensitive profiles. 
Through our comprehensive evaluations, we showed that GA-based methods achieve unlearning by aggressively altering the parameterized knowledge, which affects both targeted and non-targeted data. It renders the resulting LLMs completely useless, as its strong unlearning comes at the extreme cost of compromising model functioning. Overall, we conclude that the current achievements for LLM unlearning are limited, highlighting the need for more advanced unlearning schemes in the future. § LLM LEARNING AND UNLEARNING To begin with, we discuss the necessary backgrounds for LLM learning as well as LLM unlearning. LLM Learning. We study the LLM parameterized by θ, typically incorporating layer-wise self-attention structures <cit.>. Upon receiving an input s, the LLM estimates the probability distribution, denoted by p(·|s;θ), over the next possible tokens. The LLM is trained on a substantial web-scale corpora, denoted by 𝒟_t={s=[x,y]}_n_t of size n_t. During training, we aim at minimizing the prediction loss ℓ(y|x;θ)=-log p(y|x;θ) over 𝒟_t, where p(y|x;θ) is given by ∏_i=1^| y|p(y^i|[x,y^<i];θ) with y^i the i-th token and y^<i the prefix up to the i-th token. The resulting LLM is capable of properly handling a wide range of language generation tasks. Moreover, we default to employ greedy decoding <cit.>, selecting the most probable token at each step of next word generation. Here, we denote the generated string via greedy decoding by f(s;θ). LLM Unlearning. However, employing training corpora sourced from the wild heavily raises the risk that our LLMs will learn from sensitive/private information <cit.>, thereby precipitating a host of legal and ethical concerns <cit.>. These issues further necessitate the need for a post-training mechanism that enables our LLMs to eradicate any associated parameterized knowledge that is undesirable. This requirement motivates the recent research on LLM unlearning <cit.>, formalizing the above goal by involving so-called the unlearning set 𝒟_u={s_u=[x,y_u]}_n_u (n_u≪ n_t, typically). Overall, unlearning tends to make LLMs directly adapt its parameters θ such that the related content characterized by 𝒟_u can be properly erased. To ease our discussions, we distinguish between two types of data: targeted data, which are targeted to be unlearned (i.e., within the unlearning set 𝒟_u), and non-targeted data, which are required to be maintained (i.e., all other data within 𝒟_t\𝒟_u). Unlearning Goals. For a practical LLM unlearning method, it should satisfy 3 criteria as follows: * Memorization. The direct objective of unlearning is to remove parameterization associated with specific targeted data, ensuring that, at least, the parameterized mapping between x and y_u is erased. Meanwhile, we also need to preserve the memorization for all those non-targeted data, aiming to maintain the original model functioning. * Extrapolation. We further aim for unlearned LLMs to function right on data not encountered during training/unlearning. It indicates the goal to eliminate specific semantics/contents within [x,y_u], rather than merely their mapping. Specifically, unlearned LLMs should refrain from generating contents that semantically mirror the responses of targeted data, meanwhile LLMs should extrapolate well for non-targeted data as in the original LLMs. * Coherency. For both targeted and non-targeted data, we aim for minimal changes to LLM behaviors after unlearning. 
This focus highlights another aspect of eliminating precise content within targeted data, whereby LLMs only exclude those tokens involving key content within the unlearning set 𝒟_u yet preserving the common syntax and grammars. Overall, memorization quantifies the strength of corpora parameterized by LLMs and extrapolation further explores their ability to generalize, both of which require our prioritized attention. Additionally, ensuring coherency further benefits the practical usages of LLM unlearning, making the unlearned LLMs general and reliable for its subsequent usages (cf. Section <ref> and Appendix <ref>). Gradient Difference (GD). For the original GA, it utilizes the optimization method of gradient ascent to increase the prediction loss ℓ(y_u|x;θ)=-log p(y_u|x;θ) associated with the unlearning set 𝒟_u. By measuring the parameterized content via the prediction loss, GA aims to reduce the capability of language models to reproduce these undesirable outputs. GD further enhances the conventional GA by regularizing LLMs to maintain original model behaviors for some non-targeted data drawn from 𝒟_t\𝒟_u, ensuring that the prediction risk remains low upon them. The learning objective of GD is encapsulated in Eq. (<ref>). Overall, GD has been recognized as an efficient and promising way to realize LLM unlearning, as highlighted by some recent studies <cit.>. § EVALUATION METRICS There is an ongoing question regarding how to properly evaluate the algorithmic effectiveness for LLM unlearning. Classical unlearning metrics for discriminative models <cit.> are not readily applicable for generative tasks as for LLMs. Noticeably, inappropriate and biased evaluations could misguide the research community and weaken the real-world significance. To rigorously quantify model behaviors for LLM unlearning, we suggest an array of metrics crafted for assessing unlearning algorithms from multiple facets. These metrics quantify the strength of parameterized knowledge within generative models <cit.>, which are general, actionable, and broadly applicable. We begin by introducing the formal definition of the parameterization strength (PS). Given the model f, the string s, and the metric between two strings as , we quantify the strength of parameterization of s within f by the shortest prefix s^<k that enables f to sufficiently recover the remaining suffix s^≥ k via greedy decoding, i.e., | s|-min_k {k|(f(s^<k;θ),s^≥ k)≤ϵ}, where ϵ is the allowed difference between the model responses f(s^<k;θ) and the suffix s^≥ k. With different choices of , one can quantify the strength, or extent, for specified knowledge characterized by the string s within the model f. Generally, a larger value in Eq. (<ref>) indicates stronger parameterization. Now, based on PS, we suggest a series of metrics to cover the criteria in Section <ref>. Memorization. The original purpose of PS is to measure the memorization of learned corpora <cit.>. Specifically, it involves measuring the shortest prefix s^<k that enables f to exactly predict the remaining suffix s^≥ k. This goal corresponds to the case where is the Hamming distance and ϵ is 0. Such a specific form of PS is referred to as PS-exact and can be rewritten as (𝒟)=𝔼_s∼𝒟(1-1/| s|min_k {k|f(s^<k;θ)=s^≥ k}), where we compute the minimal-required proportion of the prefix to accommodate variations in string lengths. PS-exact should be calculated for both the unlearning set 𝒟_u and the retaining set 𝒟_r, where we prefer a smaller value for (𝒟_u) and a relatively larger value for (𝒟_r). 
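For reference, a minimal sketch of how PS-exact could be computed for a single tokenized string with a Hugging Face-style causal LM is given below; the greedy generate call and the linear scan over prefix lengths are illustrative assumptions, and a practical implementation would batch and cache this search.

```python
import torch

def ps_exact(model, token_ids):
    """Parameterization strength of one string: 1 minus the smallest prefix
    fraction whose greedy continuation reproduces the suffix exactly."""
    L = len(token_ids)
    for k in range(1, L):                              # shortest sufficient prefix
        prefix = torch.tensor([token_ids[:k]])
        with torch.no_grad():
            out = model.generate(prefix, max_new_tokens=L - k, do_sample=False)
        if out[0, k:].tolist() == token_ids[k:]:       # exact suffix match
            return 1.0 - k / L
    return 0.0                                         # suffix never recovered

# PS-exact over a dataset averages ps_exact over its tokenized strings; repeating
# the search on paraphrased inputs, or relaxing the match to ROUGE-L >= 0.5,
# yields PS-perturb and PS-similar, respectively.
```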
Extrapolation. The measurement of extrapolation is resemble to that of the memorization. However, it requires changing original data to that not encountered during training/unlearning. Specifically, given the original string s=[x,y], we further consider its paraphrased version denoted by s'=[x',y], which maintains the same content yet exhibits a different syntax compared with the original input x. Then, we define PS-perturb with respect to the dataset 𝒟 following (𝒟)=𝔼_s∼𝒟(1-1/| s'|min_k {k|f(s'^<k;θ)=s'^≥ k}). Similar to PS-exact, PS-perturb should be calculated for both the unlearning set 𝒟_u and the retaining set 𝒟_r, where we prefer a smaller (𝒟_u) yet a higher (𝒟_r). Note that previous works <cit.> have suggested a simple strategy of paraphrasing by prompting GPT-4 <cit.>, which will be adopted in our generation of s'. Coherency. Recalling that, to achieve coherency, our objective is to induce smaller changes in model behaviors, even for targeted data. Accordingly, one can compute the minimally required prefix in a manner similar to Eq. (<ref>), yet with the focus on preserving most of tokens. Here, we assume the ROUGE-L score, measuring the longest co-occurring between strings, and ϵ=0.5, a reasonable extent for tolerating difference (cf. Appendix <ref>). The definition of PS-similar is given by (𝒟)=𝔼_s∼𝒟(1-1/| s|min_k {k|(f(s^<k;θ),s^≥ k)≥0.5}), where denotes the ROUGE-L score. Overall, PS-similar measures if the predicted string f(s^<k;θ) is sufficiently similar—not necessarily identical—to the true suffix s^≥ k. We aim for coherency with respect to both targeted and non-targeted data, indicating that both larger values of (𝒟_u) and (𝒟_r) are desirable. Please refer to Appendix <ref> for detailed discussions about our metrics as well as the comparisons with previous evaluation strategies. § EXCESSIVE UNLEARNING Based on the unlearning criteria in Section <ref> and the associated evaluation metrics in Section <ref>, we can review the effectiveness of GA-based unlearning methods comprehensively. As a case study, we specifically focus on the method of GD and study its scenarios of excessive unlearning, which are common, overseen, yet detrimental for the real-world utility of existing GA-based works. Excessive Unlearning is Common. GD facilitates LLM unlearning by directly increasing the unlearning risk, defined by the prediction loss on targeted data. However, the prediction loss is unbounded, resulting in the unlearning risk to become extremely large after GD. Such a scenario is common in practice, where we conducted experiments, as an illustration, refining the unlearning setups of TOFU fictitious unlearning (cf. Appendix <ref>). Figure <ref>(a) presents the curves of the unlearning risk throughout the GD procedure. Our observations highlighted an exponential increase of the unlearning risk, escalating from about 0.08 to about 88.8, a straight signal of excessive unlearning. Excessive Unlearning is not Preferred for GD. One may believe that excessive unlearning is not necessarily a bad thing, as an extremely large value of the unlearning risk may indicate that the associated parameterized knowledge has been completely removed. In fact, some previous works truly suggest excessive updating for GA-based works <cit.>. However, the unlearning risk alone cannot fully characterize the multiple facets for the general goal of unlearning (cf. Section <ref>), and thus allowing its extreme values may largely hinder the real-world utility for the resulting LLMs. 
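For concreteness, here is a minimal PyTorch-style sketch of one GD update following Eq. (<ref>), which also makes explicit why the unlearning risk is unbounded; the batch format and helper names are assumptions rather than the exact training code.

```python
def sequence_nll(model, input_ids, labels):
    # l(y | x; theta): average next-token negative log-likelihood of the response,
    # as returned by a Hugging Face-style causal LM.
    return model(input_ids=input_ids, labels=labels).loss

def gd_step(model, optimizer, forget_batch, retain_batch):
    """One gradient-difference update: ascend on targeted data, descend on
    non-targeted data.  The unlearning term has no upper bound, so its value
    can keep growing long after the targeted responses are erased."""
    unlearning_risk = sequence_nll(model, *forget_batch)
    retaining_risk = sequence_nll(model, *retain_batch)
    loss = -unlearning_risk + retaining_risk
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return unlearning_risk.item()     # the curve tracked in Figure <ref>(a)
```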
Table <ref> (step 250) presents several examples for the failure cases of LLM responses after GD, and we further summarize the observed consequences of excessive unlearning in the following. * Catastrophic Forgetting. Excessive unlearning may induce rapid updates to LLMs, adversely impacting the parameterized knowledge for non-targeted data, well-known as catastrophic forgetting <cit.>. Consequently, LLMs may produce erroneous and distorted responses for targeted data (cf. Table <ref>, Case 1, step 250), largely impairing the common model functioning after unlearning. It is worth noting that the retaining risk cannot fully resolve this issue. The primary reason is that, compared with the unlearning risk, values of the retaining risk are much smaller, accounting for less attentions in parameter updates, thus failing to preserve the original model behaviors. Other potential factors contributing to the deficiencies of the retaining risk may include insufficient traversal of non-targeted data during GD and dissenting gradient directions between the unlearning and retaining risks. * Extreme Confabulation. Excessive unlearning also has negative impacts on targeted data. A direct consequence is the degradation of their outputs into incoherent sequences of tokens with extremely high confidence (cf. Table <ref>, Case 2, step 250), where we refer to such scenario as extreme confabulation. Ideally, model responses should exhibit minimal changes and maintain proper behaviors for targeted data, allowing for the potential use of unlearning as an intermediate step for downstream applications <cit.> as well as further purposes <cit.>. The skewed behaviors may also lead to unintentional unlearning of syntax shortcuts <cit.> and the Streisand effect <cit.>. Please refer to Appendix <ref> with further discussions for the drawbacks of extreme confabulation. Quantifying Excessive Unlearning. The above-mentioned drawbacks can be further characterized using the evaluation metrics in Section <ref>: PS-exact and PS-perturb scores on non-targeted data for catastrophic forgetting, and PS-similar scores on targeted data for extreme confabulation. We employed the same unlearning setup as that for Figure <ref>(a) and traced PS scores throughout GD in Figure <ref>(b)-(c). Basically, we observed that GD is capable to erase the undesirable knowledge, which is evident from the clear decrease in PS scores for targeted data during unlearning. However, this approach also significantly affects the parameterized knowledge for non-targeted data, to the extent that even common syntax and grammar cannot be preserved. Therefore, we conclude that GA-based methods achieve unlearning by indiscriminately perturbing parameterized knowledge within original LLMs, affecting both targeted and non-targeted data. It largely degrades the performance of LLMs and often raises the scenarios of excessive unlearning, detrimental for their real-world utility. Please refer to Appendix <ref> for detailed analysis and more results of Figure <ref>. § AN LLM EVALUATION FRAMEWORK As highlighted in Section <ref>, excessive unlearning compromises the practical utility of LLMs, rendering subsequent evaluations of unlearning efficacy futile. Also, it is unfair to compare different unlearning methods without accounting for their varying impacts on the original model functioning. 
Therefore, to reflect the true powers of current unlearning methods and ensure a fair comparison among them, we suggest to first control the extent of unlearning such that its consequences, i.e., catastrophic forgetting and extreme confabulation, do not remarkably occur. Then, we evaluate the efficacy of unlearning for these calibrated models via PS scores on targeted data. Such an evaluation strategy facilitates a systematic framework, evaluating unlearning while ensuring their practical utility. The challenge, however, lies in how to control the extent of unlearning in a general manner. Here, we suggest a suite of simple strategies that can modulate the extent of excessive unlearning. Objective Capping (OC). Before unlearning, we can limit the extent of excessive unlearning by imposing an upper threshold on the learning objective, i.e., objective capping. Considering GD as an example, the capped objective can be expressed as - 𝔼_(x,y_u)∼𝒟_umin{ℓ(y_u|x;θ),δ}+𝔼_(x,y)∼𝒟_t\𝒟_uℓ(y|x;θ), where δ is the maximally allowed value for the unlearning risk. Generally, a larger δ indicates a weaker control, where excessive unlearning is more likely to occur. Note that our focus is on controlling the unlearning risk, as it primarily contributes to excessive unlearning. Early Stopping (ES). During unlearning, controlling can be achieved by implementing early stopping, of which the power is evidenced from Figure <ref>. Here, we introduce the controlling parameter t to represent the number of steps executed during unlearning. Generally speaking, the extent of excessive unlearning is less severe during the initial phases, i.e., a smaller t, than the later phases, i.e., a larger t, indicating an effective approach to modulate the unlearning procedure. Model Mixing (MM). After unlearning, the resulting LLMs can also be controlled due to the properties of parameter disentanglement <cit.>—mixing parameters from two models can make the resulting one inherits characteristics from both. Considering the original LLM parameters θ_o and the unlearned ones θ_u, we can mix their parameters via (1-α)θ_o+αθ_u with 0≤α≤1 the mixing factor. Here, a lower α emphasizes the parameterized knowledge of the original model, whereas a higher α highlights those of the unlearned one. Since θ_o will not induce excessive unlearning, a lower α will shrink the extent of excessive unlearning. The Evaluation Framework. Overall, we claim evaluations of LLM unlearning should be conducted under the prerequisite that the common model functioning is not notably impacted. Accordingly, we suggest a two-step evaluation framework: First, controlling unlearning methods/unlearned LLMs to prevent catastrophic forgetting and extreme confabulation; second, testing the efficacy in unlearning targeted data. We further discuss the details of our suggested evaluation framework as follows. * Calibrating Unlearning. The extent of excessive unlearning can be controlled by either OC, ES, or MM. With proper δ, t, or α, one can ensure the measurements of model utility—PS-exact/PS-perturb on the leave-out non-targeted data and the PS-similar on the targeted data—should be sufficiently close to those of the original LLMs (i.e., their difference is smaller than the threshold), preventing catastrophic forgetting and extreme confabulation. * Assessing Unlearning. For LLMs calibrated to avoid excessive unlearning, one can properly evaluate their ability to erase parameterized knowledge targeted to be unlearned. 
The efficacy of unlearning can be characterized by PS-exact and PS-similar on targeted data, both of which the smaller values indicates more effective unlearning. Moreover, in situations where multiple LLMs (with different controlling parameters) are achieved after calibration, we will select and report the LLM that exhibits the lowest PS-exact score on targeted data. Overall, we emphasize that the evaluation and the comparison of unlearning methods should be fair, under the condition that the original model functioning is sufficiently maintained. Moreover, we compare between different controlling methods in Section <ref>, showing that MM is more effective and general than the other two methods. Therefore, we sugget to default adopt MM in controlling the extent of excessive unlearning. Please refer to Appendix <ref> for more details about our framework. § EXPERIMENTS We conducted experiments to comprehend the real-world utility of existing LLM unlearning methods, revealing that excessive unlearning is prevalent for GA-based methods and the calibrated LLMs actually perform poorly in erasing parameterized knowledge targeted to be unlearned. Experimental Setups. Our evaluations were based on the well-established benchmarks of TOFU fictitious unlearning <cit.>, incorporating two popular LLMs, including Phi-1.5 <cit.> and Llamma-2-7B <cit.>. For the unlearning setups, original training data are separated into targeted and non-targeted parts, of which the adopted proportions are 1:99 (1% unlearning), 5:95 (5% unlearning), and 10:90 (10% unlearning). Moreover, we slightly changed the original setups suggested by <cit.>, separating 400 non-targeted data that are not involved in the unlearning procedure for proper evaluations, reflecting real-world situations where it is not feasible to go through all non-targeted data during the unlearning process. For baseline methods, we mainly considered a range of GA-based works suggested in <cit.>, named GA, GD, and KL (cf. Appendix <ref>), following their default hyper-parameters: the AdamW optimizer <cit.>, the learning rate 1e^-5, the batch size 4 for both the targeted and non-targeted data, the epoch number 5, and the linear warm-up for the first epoch. All our experiments were realized by PyTorch 2.01 with CUDA 12.1, using a series of servers equipped with NVIDIA-A100-80GB GPUs and Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz Processors. Results for Evaluation Metrics. We begin by evaluating three GA-based methods with our PS-based metrics suggested in Section <ref>. The results for various unlearning setups and LLMs are summarized in Table <ref>. These GA-based works demonstrated a complete erasure of memorization from the original LLMs on targeted data, with exactly 0 for both PS-exact and PS-perturb scores. However, such strong results did not necessarily signify the effectiveness of GA-based methods, due to their adverse consequences of catastrophic forgetting. Such a conclusion is evidenced by the notably low values of PS-exact and PS-perturb on non-targeted data. Additionally, GA-based methods also induced extreme confabulation, impacting syntactic structures for both targeted and non-targeted data with PS-similar scores close to 0. It generally suggests that GA-based methods fail to accurately identify the critical content within 𝒟_u that should be unlearned. Instead, they disrupt the mapping between x and y_u in brute force, detrimental for the real-wold utility of the resulting LLMs. Results for Controlling Methods. 
To reveal the effectiveness of our controlling methods, we reported on various PS-based metrics across a range of values for δ, t, and α, as detailed in Appendix <ref>. As an example, Figure <ref> presents the controlled results of GD under the 10% unlearning setups with Phi-1.5 and Llamma-2-7B. Therein, all our suggested controlling methods work well in calibrating the extent of unlearning. We also observed that maintaining utility is feasible when the controlling parameters δ, t, and α are set to smaller values, indicating that their scenarios of excessive unlearning can be quite severe. Please refer to Appendix <ref> for the more results with GA and other unlearning setups. In a comparison of our three controlling methods, MM emerged as the most effective one, offering smoother control and broader coverage over degrees of excessive unlearning. Moreover, MM is simple and efficient, which is independent on different training setups and learning objectives. Therefore, we will mainly use MM in calibrating the extent of unlearning for the subsequent experiments. Results for our Evaluation Framework. We further calibrated the extent of excessive unlearning to assess the real-world efficacy of GD, establishing a criterion that permits the maximum drop from the original performance by 10% for PS-exact and PS-perturb on non-targeted data, and 50% for PS-similar on targeted data (cf. Appendix <ref>). For all LLMs that satisfy the criterion, we report the results that exhibit the strongest unlearning, i.e., the smallest PS-exact on targeted data. The calibrated results are summarized in Table <ref>, detailing PS scores across various GA-based methods controlled by MM. Most notably, the practical efficacy of GA-based methods fell short of our expectations, especially in scenarios where a lot of data should be unlearned (e.g., 5% and 10%). Moreover, when comparing two LLMs with different sizes, we found that smaller LLMs tend to be more effective in unlearning, evidenced by larger drops of PS-exact and PS-perturb scores on targeted data for Phi-1.5 than that for Llamma-2-7B. Among different unlearning methods evaluated, GD proved to be the best one across all unlearning setups and LLM sizes, highlighting the contributions of the retaining risk in preserving original model functioning. Nonetheless, even for GD, there remained a substantial decrease in PS-exact and PS-perturb scores on non-targeted data, signifying a need for further exploration and development of more sophisticated unlearning strategies, which should not only enhance the power of data unlearning but also ensure the capability of the original performance. § CONCLUSION Pre-trained LLMs, despite their advanced capabilities, can inadvertently memorize sensitive and biased data, raising a bunch of ethical and privacy concerns. LLM unlearning, a promising technique to tackle these issues, explores effective solutions for removing undesirable data behaviors. Preliminary research typically implemented unlearning via GA and its variants. However, we argued that these methods broadly lead to excessive unlearning, rendering the resulting LLMs useless, regardless of how much undesirable knowledge is erased. Therefore, to better reflect the true efficacy of various unlearning methods, we proposed a comprehensive evaluation framework. This framework first modulates the unlearning process to prevent excessive unlearning, and follows by assessing the unlearning efficacy for the calibrated LLMs. 
This framework includes methods to control the extent of unlearning at various stages and employs PS-based metrics to reflect multiple facets of the parameterized content within LLMs. We conducted extensive experiments to assess the real-world efficacy of GA-based methods, showing that their strong unlearning capabilities came at the extreme cost of excessive unlearning, thereby their practical effectiveness is actually limited. There remains a long way to go in advancing LLM unlearning. First, there is an evident lack of a formal definition for the objective of LLM unlearning. The concepts of machine unlearning, established for discriminative models, do not seamlessly apply to generative LLM tasks, which will hinder subsequent theoretical achievements. The developments of improved metrics and the construction of realistic unlearning datasets are also crucial, facilitating the model assessment and ensuring the real-world reliably for unlearning processes and unlearned models. For unlearning paradigms, it is crucial to develop new methods that minimally impact the common model functioning, especially in real-world scenarios where continuous unlearning requests may occur throughout the lifecycle of LLMs. Potential methodologies may involve identifying specific tokens, neurons, and components primarily responsible for data targeted to be unlearned, then exerting unlearning adjustments upon them. Moreover, controlling the scope of data to be unlearned also holds practical significance: In some cases, unlearning may need to focus on specific content like a privacy-sensitive message, while in others, broader concepts like illegal responses must be addressed. Potential solutions may involve instructing LLMs on which specific tokens should be emphasized during unlearning and exploring innovative unlearning strategies like meta unlearning. These potential solutions will provide context-sensitive unlearning capabilities to meet diverse practical needs. plain § REAL-WORLD UTILITY OF LLM UNLEARNING Unlearning presents a versatile solution to edit trained LLMs. This mechanism can be broadly applied to serve various purposes, particularly enhancing the ethical integrity and the factual fidelity. Ethical Integrity. LLM unlearning can be employed to eradicate private, copyrighted, and sensitive content that has been inadvertently parameterized by LLMs. It makes LLM unlearning a promising tool for complying with privacy regulations, copyright laws, and ensuring models without harmful biases or illegal activities. Therein, private information includes, but is not limited to, phone numbers, addresses, passwords, and personal IDs. Copyrighted content includes protected literary excerpts, books, blogs, and chat records. Sensitive information pertains to responses with biases, discrimination, illegal, and violent advice. From this perspective, our primary focus should be on memorization and extrapolation: Unlearned LLMs must not only remove the related training data (i.e., targeted data) that has been parameterized but also ensure that undesirable content will not appear in response to related prompts. Moreover, original functionality of models, in terms of memorization and extrapolation for non-targeted data, should be sufficiently preserved. When having ensured proper behaviors of memorization and extrapolation, the maintenance of coherency can receive our secondary attention. 
If model responses to targeted data degrade into nonsensical tokens (i.e., not coherent), it may induce data security concerns related to the Streisand effect <cit.>—a phenomenon in which excessive efforts to conceal information can inadvertently make it more conspicuous. For example, if model responses to targeted data are very different from those to non-targeted data, adversaries could gather a broad array of corpora from the internet. Then, they could discern which data have been unlearned by simply filtering out those data that produce nonsensical outputs. Such exposure of targeted data will pose new data security concerns, violating the original intention of unlearning. Factual Fidelity. LLM unlearning can potentially be employed to update outdated information and rectify model errors raised by hallucination <cit.>, which may benefit future studies. For example, for real events that are happening and changing, we need to update the parameterized knowledge about current facts. Moreover, when LLMs produce responses that are coherent yet factually incorrect, i.e., hallucination, it is crucial to replace these misleading responses with correct ones. Existing works typically achieve these goals via model editing <cit.>, covering upon old parameterized knowledge by updating specific parameters or introducing new learnable modules. However, as demonstrated in <cit.>, varying prompts can make LLMs produce contents resemble to those before editing, where LLM unlearning may mitigate such drawbacks. Specifically, LLM unlearning can be utilized as an intermediate step to first eradicate previously undesired knowledge, potentially following the strategy of “first unlearning then editing”. In this scenario, memorization and extrapolation remain essential as for ethical integrity, since it is critical for unlearning methods to properly eradicate the specific knowledge for editing. Meanwhile, the requirement of coherency becomes more critical than in cases of ethical integrity, where minimal changes to model behaviors during unlearning should be preferred—only the important content is removed while maintaining syntax and grammar. Thereby, one can ease the difficulties for the subsequent editing, facilitating smooth adjustment. We leave the formal analysis and algorithmic design of such a “first unlearning then editing” scheme as a promising direction for the future study. § MORE DISCUSSIONS ABOUT THE UNLEARNING METRICS We further discuss about our PS-based metrics to enhance the understanding of our materials. §.§ Unlearning Criteria For the real-world utility of LLM unlearning, we suggest a set of criteria that should be simultaneously met. Specifically, the criteria of memorization and extrapolation are fundamental to the objectives of unlearning, characterizing the original parameterization and its generalization respectively. Moreover, we suggest another criterion of coherency to ensure that model responses remain reasonable after unlearning, where we expect that responses for targeted data should preserve some syntactic structures, rather than degenerating into a stacking of random tokens. However, one may raise the concern that maintaining coherency could induce hallucination, as LLMs might replace the removed key content with wrong, fabricate ones. However, we believe that hallucination is more acceptable than nonsensical outputs, as the later case deviates from the common model behaviors, thereby impacting the common use of LLMs. 
Moreover, largely destroying the original model functioning will also raise further challenges when unlearning is adopted as an intermediate step for model editing. §.§ Evaluation Metrics Our metrics cover 3 criteria—memorization, extrapolation, and coherency—to evaluate model efficacy after unlearning. These metrics are applicable to both targeted and non-targeted data. However, it requires to be further mentioned that the metrics should be applied to original targeted data adopted during unlearning, and to non-targeted data that are not involved during unlearning. Such a experimental setup slightly deviated from the original one suggested in <cit.>, where no separated evaluation data are involved for non-targeted data. Our adopted setup is more realistic, given that it generally impossible to traverse over all non-targeted data for the retaining risk (since n_u≪ n_t) during unlearning, otherwise the computational costs can be prohibitively high. Below, we present further discussions about our evaluation metrics. Memorization for LLMs. The strength of parameterization is directly linked to the memorization of models, especially for PS-exact scores. However, the memorization effect is generally viewed negatively within the machine learning community <cit.>, as it leads to sensitivity to outliers as well as poor generalization. However, for language generation tasks, proper memorization has its beneficial aspects. For example, it is often a desirable case for LLMs to accurately recall specific details for historical events and describe precise pipelines for scientific experiments. Considering the general usage of LLMs across diverse real-world tasks, we conjecture that preserving the original memorization for non-targeted data should at least be preferred. It leads to our evaluation framework in Section <ref>, assessing the real-world efficacy of various unlearning under the conditions that the original memorization should be maintained. Thresholds for PS-similar. The threshold ϵ in Eq. (<ref>) is empirically determined based on the ROUGE-L scores from a series of exemplary responses collected from our unlearned LLMs. The primary criterion for setting this threshold is to ensure that LLM responses should at least preserve some common grammar. Table <ref> presents exemplary responses that have completely lost coherence, all scoring below 0.30. Therefore, the threshold ϵ must be set higher than 0.30 to filter those responses that are completely disordered. Moreover, in Table <ref>, we list exemplary responses that behaved right with common grammar, while the degree in preserving syntactic structures gradually increased. We chose to adopt the Case 3 as a borderline case and selected ϵ=0.5 as the final threshold within PS-similar. One can also use larger values for a more conservative condition of coherency, while we found that doing so may limit the selection space for those properly calibrated LLMs. Connections between Metrics and Excessive Unlearning. We mentioned two consequences of excessive unlearning, i.e., catastrophic forgetting and extreme confabulation, in our main context. For catastrophic forgetting, our primary focus is on the memorization and the extrapolation within non-targeted data, suggesting the need to ensure their PS-exact and PS-similar to be sufficiently large. Moreover, for extreme confabulation, it can be characterized by the coherency of data, where we need to maintain high PS-perturb values especially for targeted data. Comparison with Previous Metrics. 
Previous literature <cit.> has suggested a variety of metrics to assess the efficacy of LLM unlearning, such as and . However, is an entangled metric that encompasses various factors and lacks precise practical meaning, making it hard to use for accurately identifying the drawbacks of existing unlearning methods. Furthermore, is a computationally demanding metric that necessitates training LLMs from scratch, excluding targeted data, to establish a gold standard. Although it can be calculated within the TOFU setups (cf. Appendix <ref>), it is not a viable and general metric that can benefit future research in the community, especially for scenarios where new unlearning datasets will be crafted from large-scale original training corpora. Limitations. Note that our suggested metrics are not without flaws. For example, PS-perturb might be affected by biases originating from specific paraphrasing techniques, and PS-similar serves only as a necessary condition—not a sufficient one—for assessing coherency. However, empirically in Section <ref>, the trends of our metrics align well with the model behaviors presented in Table <ref>. This indicates that our metrics are currently adequate, but further efforts are necessary to investigate their failure cases and to develop more advanced ones. § MORE DISCUSSIONS ABOUT THE EVALUATION FRAMEWORK We provide more details about our LLM evaluation framework that considers the real-world utility. Criterion in Assessing Excessive Unlearning. The central aim of our evaluation framework is to prevent the notable occurrence of excessive unlearning. Formalizing this goal requires a quantitative measurement to assess whether catastrophic forgetting and extreme confabulation happen. To this end, we establish a criterion whereby the PS scores (i.e., PS-exact scores on non-targeted data, PS-perturb scores on non-targeted data, and PS-similar scores on targeted data) should be quite similar to those before unlearning, where we set a threshold stipulating that the scores should not decline by more than 10% for PS-exact and PS-perturb on non-targeted data, and not decline by more than 50% for PS-similar on targeted data. For example, for a PS-exact of 0.668 on non-targeted data with the original Phi-1.5, the maximally allowed drop is 0.0668 and the minimally allowed PS-exact is 0.6012. Then, all controlled models whose PS-exact on non-targeted data is higher than 0.6012 are considered free of excessive unlearning. A similar procedure is adopted for PS-perturb scores on non-targeted data and PS-similar scores on targeted data. Selection of the Controlling Parameters. We conducted experiments using various controlling methods—OC, ES, and MM—each with different choices of their controlling parameters. We implemented a bi-level tuning strategy that includes both coarse-grained and fine-grained searching. In the coarse-grained phase, we evenly select 10 values from the original candidate parameter space and assess the unlearned performance for these candidate values. In the fine-grained phase, we select the region that satisfies the criterion for avoiding excessive unlearning while potentially having the lowest PS-exact scores on targeted data. From the selected region, we further evenly select 10 candidate values and report the controlled LLM with the lowest PS-exact score as the well-calibrated LLM. For example, for MM under the 10% unlearning setup with GD (cf.
Tables <ref>-<ref>), the candidate α during coarse-grained controlling should be {0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0}. Then, by the criterion in assessing excessive unlearning, we find that the candidate α during fine-grained controlling should be {0.10,0.12,0.14,0.16,0.18,0.20,0.22,0.24,0.26,0.28,0.30}. Then, we select the controlling parameter that leads to the lowest PS-exact value on targeted data, under the condition that the criterion in assessing excessive unlearning remains to be satisfied. Limitations. To modulate the extent of excessive unlearning, it is essential to evaluate various controlling parameters and identify the most suitable one for optimal calibration. This process demands additional computational resources, especially for controlling methods of OC and ES, which involve adjustments to the unlearning rules. Fortunately, in Section <ref>, we have shown that MM is more effective than both OC and ES. MM only requires manipulation of the fixed LLM parameters before and after unlearning, largely reducing the time required for various unlearning rules. This efficiency ensures that the time required remains manageable within our evaluation framework. § MORE ANALYSIS ABOUT EXCESSIVE UNLEARNING We empirically justified the common existence of excessive unlearning through our experimental results in Figure <ref>, where we reported the risk values during unlearning, as well as PS-exact, PS-perturb, and PS-similar. We present a more detailed analysis in the following. Risk. The standard scenario of excessive unlearning is characterized by extremely large values of the unlearning risk after GA, as illustrated in Figure <ref>(a). Here, the unlearning risk escalated dramatically from the initial value of 0.08 to a substantial 88.8. Another notable observation is a sudden stick out of the retaining risk at around the 60-th step, coinciding with the rapid rise in unlearning risk. It suggests that the increase of the unlearning risk hindered the original model functioning. Although the retaining risk quickly reverted, it does not mean catastrophic forgetting does not occur. The reason is that, in Figure <ref>, the retaining risk was calculated to those non-targeted data that were involved during GA. Actually, as shown in Figure <ref>(b)-(c), model behaviors for other non-targeted data that were not involved were far from the original degree. Memorization and Extrapolation. In Figure <ref>(b), we depicted the changes between PS-exact and PS-perturb, reflecting the memorization and extrapolation capabilities of LLMs. Notably, there was a clear positive correlation between PS-exact and PS-perturb values, indicating stronger/weaker memorization will induce stronger/weaker generalization for LLMs. Moreover, comparing PS scores between targeted and non-targeted data, we observed that unlearning is at the high cost of unintentional forgetting, rendering the resulting LLMs useless. In the later GD stages, unlearned LLMs may recover some previously forgotten knowledge. This phenomenon occurred when the unlearning risk no longer decreased and the impact of retaining risk during unlearning was enlarged. We conjecture that, despite of catastrophic forgetting, unlearned LLMs still retain some of their original pre-training knowledge. However, the degree of this retained knowledge is largely lower than that of the original LLMs, indicating that catastrophic forgetting remain detrimental. Memorization and Coherency. 
We further depicted the relationship between PS-exact and PS-similar in Figure <ref>(c), also aiming to highlight their trade-off. Similar to the observations in Figure <ref>(b), PS-exact and PS-similar exhibited a positive correlation, where the decrease in memorization hindered response coherency. This indicates that GA-based methods may fail to identify the key content within the unlearning set, indiscriminately removing the mapping between x and y_u regardless of content or syntax. We also observed that the coherency of responses for non-targeted data was unintentionally destroyed, and that coherency showed a recovery trend similar to that of extrapolation in Figure <ref>(b). [Figure: The response confidence throughout the GD procedure, following the same unlearning setups as Figure <ref>.] Response Confidence. We further examined the confidence of LLM responses in Figure <ref>, following the same unlearning setups as those in Figure <ref>. Formally, the confidence of responses can be characterized by p(f(x;θ)|x;θ)^1/|f(x;θ)|, which is normalized over string lengths following <cit.>. As we can see, confabulation was accompanied by over-confidence on targeted data, which quickly approached 1 in the later unlearning stages. This is problematic, since LLMs should not be confident in responses that have been unlearned. Moreover, over-confident responses also prevent us from filtering erroneous responses based on their confidence. Furthermore, we noted a large decrease in model confidence when responding to non-targeted data under catastrophic forgetting, suggesting that unlearning also negatively affects model calibration. § BASELINE METHODS GA and its variants have emerged as a promising line of work to facilitate LLM unlearning, primarily due to their simplicity and efficiency. Here, we discuss the implementations of GA that are commonly referenced in <cit.>, albeit with renaming to align with our content. We argue that there are several fundamental objectives underlying these baselines, where specific implementations/variants of GA are particular combinations of these fundamental objectives. We identify three such objectives, namely, the unlearning risk, the retaining risk, and the maintaining risk. As elaborated in the main content, the unlearning risk can be written as ℒ_u(θ)=-𝔼_(x,y_u)∼𝒟_uℓ(y_u|x;θ), corresponding to gradient ascent when using first-order optimization. The retaining risk is given by ℒ_r(θ)=𝔼_(x,y)∼𝒟_t\𝒟_uℓ(y|x;θ), which is used to retain the original functioning for data that are not targeted to be unlearned. The maintaining risk, similar to the retaining risk, aims to ensure that the resulting LLM behaves properly on non-targeted data. However, the maintaining risk achieves this goal by preserving the original model outputs for non-targeted data, quantified by the Kullback–Leibler (KL) divergence between the predictions of the original model and the current model. Specifically, the maintaining risk is ℒ_m(θ) = 𝔼_(x,y) ∼𝒟_t\𝒟_u[ 1/|y| ∑_k KL(p(y^<k| x; θ) ‖ p(y^<k| x; θ_0)) ], which averages the KL divergence with respect to a sequence of prefixes. The current implementations of GA-based methods are typically combinations of the above fundamental objectives: * GA solely utilizes the unlearning risk, following ℒ_u(θ). * GD further incorporates the retaining risk, following ℒ_u(θ)+ℒ_r(θ). * KL employs the maintaining risk instead of the retaining risk, following ℒ_u(θ)+ℒ_m(θ).
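To make the composition of these objectives concrete, the following is a minimal PyTorch-style sketch (not the authors' implementation) of how the three risks could be combined for a causal LM. The batch layout (input_ids, attention_mask, and labels with prompt tokens set to -100), the reference model ref_model, and the per-token treatment of the KL term are illustrative assumptions.

import torch
import torch.nn.functional as F

def token_nll(model, input_ids, attention_mask, labels):
    # Mean negative log-likelihood of the answer tokens (prompt positions masked with -100).
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits[:, :-1, :]
    targets = labels[:, 1:]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1), ignore_index=-100)

def unlearning_objective(model, ref_model, forget_batch, retain_batch, variant="GD"):
    loss_u = -token_nll(model, **forget_batch)      # unlearning risk: gradient ascent on targeted data
    if variant == "GA":
        return loss_u
    if variant == "GD":
        loss_r = token_nll(model, **retain_batch)   # retaining risk: standard NLL on non-targeted data
        return loss_u + loss_r
    if variant == "KL":
        cur = model(input_ids=retain_batch["input_ids"], attention_mask=retain_batch["attention_mask"]).logits
        with torch.no_grad():
            ref = ref_model(input_ids=retain_batch["input_ids"], attention_mask=retain_batch["attention_mask"]).logits
        log_p_cur = F.log_softmax(cur, dim=-1)
        log_p_ref = F.log_softmax(ref, dim=-1)
        # Per-position KL(current || original), a token-level stand-in for the prefix average above.
        loss_m = (log_p_cur.exp() * (log_p_cur - log_p_ref)).sum(-1).mean()
        return loss_u + loss_m
    raise ValueError(variant)

In a training loop, the returned loss is simply minimized with a standard optimizer; minimizing the negated likelihood on the forget set is what realizes the gradient-ascent component.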
Each of them adjusts their particular focuses on unlearning, tailoring model behaviors regarding data retention and unlearning. However, these strategies cannot appropriately address our mentioned consequences of excessive unlearning: GA lacks a mechanism to maintain common model functioning, thus susceptible to catastrophic forgetting. Although GD and KL incorporate mechanisms to preserve model responses for non-targeted data, they still suffer from catastrophic forgetting due to excessive unlearning raised by ℒ_u(θ). The drawbacks of above baselines can be further justified from our experimental results in Section <ref>, underscoring their general drawbacks in excessive unlearning. Beyond GA. There are alternatives to GA-based methods such as preference optimization (PO) <cit.>. It aims to mitigate the drawbacks of the unlearning risk by targeting a new outcome, y_idk=“I don't know.”, which is implemented through the learning objective of ℒ_idk(θ)+ℒ_u(θ), where ℒ_idk(θ)=𝔼_(x,y_u)∼𝒟_uℓ(y_idk|x;θ). In Appendix <ref>, we revealed that this non-GA method is not effective, further highlighting the ongoing challenge in advancing LLM unlearning. § EXPERIMENTAL SETUPS AND MORE RESULTS We provide detailed information about our experimental setups along with additional results. §.§ TOFU Fictitious Unlearning Our experimental setups primarily followed TOFU, focusing on LLMs trained with a series of fictitious author profiles. These profiles were created by prompting GPT-4, which were further refined to be completely orthogonal to real author profiles. The resulting data disentangled the impacts of other variates, thus allowing us to concentrate specifically on the process of unlearning. To test the effectiveness of unlearning, we first fine-tuned the original LLMs with fictitious profiles, ensuring that the associated profiles have been parameterized into our LLMs. Then, we separated these data of profiles into targeted and non-targeted data, facilitating us to evaluate various unlearning methods. The ratios of targeted to non-targeted data considered are 1:99, 5:95, and 10:90, corresponding to the scenarios to unlearn 1%, 5%, and 10% unlearning setups, respectively. It is important to emphasize that we made a slight adjustment for the original experimental setups suggested by <cit.>. Here, we separated a part of non-targeted data of size 400 to evaluate the efficacy of unlearning. Such non-targeted data for evaluation was adopted during fine-tuning yet not involved during unlearning. Then, this part of the non-targeted data, along with all those targeted data, are employed for evaluations with respect to various PS-based metrics. Such a unlearning setup aligns with the real-world situations where one can hardly go through all of those pre-training data that are not targeted to be unlearned during the unlearning procedure. §.§ Comparison between Different Controlling Methods Tables <ref>-<ref> and Figures <ref>-<ref> summarize the results of various PS-based scores using different controlling methods with varying controlling parameters. Overall, we found that all proposed controlling methods effectively control the extent of excessive unlearning. Moreover, among them, MM stands out as the most effective one. It offers smoother control and superior capability in preserving the original model functioning. Additionally, MM does not depend on specific training paradigms or learning objectives, making it a versatile and robust choice. 
Therefore, we recommend adopting MM as the default method to control the extent of excessive unlearning. §.§ Results of MM Controlled LLMs Tables <ref>-<ref> present detailed PS scores for unlearning methods controlled by MM, incorporating both coarse- and fine-grained parameter adjustments. Note that, in addition to the GA-based methods, we also conducted experiments with PO, and the comparative results with GD are summarized in Table <ref>. As we can see, GD outperformed PO in general, although GD still struggled with the issue of excessive unlearning. Hence, further efforts should be paid for more advanced unlearning schemes. § A FINAL GUIDELINE TO EVALUATE LLM UNLEARNING Our proposed LLM evaluation framework considers the practical effectiveness of the resulting LLMs, with a series of PS-based scores that are general when facing different unlearning tasks. However, a limitation of our framework is the high computational cost associated with calculating these PS scores. Although some efficient realizations, such as the Bisection method, can accelerate this process, the costs remain substantial, especially for cases involving long token outputs. Therefore, to save computational costs, we recommend concentrating primarily on the PS-exact scores for non-targeted data when selecting appropriate controlling parameters. The general validity of this simplified strategy is justified by the linear correlations observed between PS-exact and PS-perturb on non-targeted data (cf. Figures <ref>-<ref>), coupled with the relatively loose constraints for PS-similar on targeted data. Then, after determining proper controlling parameters, other PS scores can be computed (when feasible) to further reflect the effectiveness of the unlearning methods. Note that it is crucial, especially when involving the retaining or maintaining risks, to ensure that non-targeted data used for unlearning are disjoint with those data involved in evaluations. This separation can avoid the unintentionally sampling bias during unlearning to the evaluations of the resulting LLMs. Moreover, we recommend setting MM as the default method for controlling the extent of unlearning, using relatively small candidate values of α, e.g., smaller than 0.5, during calibration.
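As a rough illustration of this guideline, suppose MM is realized by blending the frozen pre-unlearning parameters with the unlearned parameters through the coefficient α (an interpretation consistent with the description above, though the exact rule is not spelled out here). The calibration loop could then look like the sketch below; mix_models, the PS-scoring callables, and the candidate grid are placeholders rather than the paper's code.

import copy
import torch

def mix_models(orig_model, unlearned_model, alpha):
    # Hypothetical MM control: theta(alpha) = (1 - alpha) * theta_orig + alpha * theta_unlearned.
    mixed = copy.deepcopy(orig_model)
    with torch.no_grad():
        for p_mix, p_orig, p_unl in zip(mixed.parameters(), orig_model.parameters(), unlearned_model.parameters()):
            p_mix.copy_((1.0 - alpha) * p_orig + alpha * p_unl)
    return mixed

def calibrate_alpha(orig_model, unlearned_model, ps_exact_nontargeted, ps_exact_targeted,
                    orig_score, alphas=(0.1, 0.2, 0.3, 0.4, 0.5), max_drop=0.10):
    # Keep the candidates whose PS-exact on non-targeted data stays within the allowed drop,
    # then pick the one with the strongest unlearning (lowest PS-exact on targeted data).
    best_alpha, best_targeted = None, float("inf")
    for alpha in alphas:
        model = mix_models(orig_model, unlearned_model, alpha)
        if ps_exact_nontargeted(model) < (1.0 - max_drop) * orig_score:
            continue  # excessive unlearning: reject this candidate
        score_t = ps_exact_targeted(model)
        if score_t < best_targeted:
            best_alpha, best_targeted = alpha, score_t
    return best_alpha

Once an α is selected this way, the remaining PS scores can be computed for the chosen model only, which keeps the overall evaluation cost manageable.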
http://arxiv.org/abs/2406.08653v1
20240612213132
BaSeNet: A Learning-based Mobile Manipulator Base Pose Sequence Planning for Pickup Tasks
[ "Lakshadeep Naik", "Sinan Kalkan", "Sune L. Sørensen", "Mikkel B. Kjærgaard", "Norbert Krüger" ]
cs.RO
[ "cs.RO" ]
§ ABSTRACT In many applications, a mobile manipulator robot is required to grasp a set of objects distributed in space. This may not be feasible from a single base pose and the robot must plan the sequence of base poses for grasping all objects, minimizing the total navigation and grasping time. This is a Combinatorial Optimization problem that can be solved using exact methods, which provide optimal solutions but are computationally expensive, or approximate methods, which offer computationally efficient but sub-optimal solutions. Recent studies have shown that learning-based methods can solve Combinatorial Optimization problems, providing near-optimal and computationally efficient solutions. In this work, we present BaSeNet - a learning-based approach to plan the sequence of base poses for the robot to grasp all the objects in the scene. We propose a Reinforcement Learning based solution that learns the base poses for grasping individual objects and the sequence in which the objects should be grasped to minimize the total navigation and grasping costs using Layered Learning. As the problem has a varying number of states and actions, we represent states and actions as a graph and use Graph Neural Networks for learning. We show that the proposed method can produce solutions comparable to exact and approximate methods with significantly less computation time. The code, Reinforcement Learning environments, and pre-trained models will be made available on the project webpage[<https://lakshadeep.github.io/basenet/>]. § INTRODUCTION Mobile Manipulators (MMs) are widely used for pick-up tasks across different domains including logistics, manufacturing, service, home automation, elderly care, and hospitality applications <cit.>. Pickup tasks involve determining suitable robot base poses for object pick-up, followed by navigating to the base pose and picking up the objects using the manipulator <cit.>. In challenging environments, multiple base poses are generally required for grasping all the objects (see Fig. <ref>). In such situations, the robot must plan the optimal sequence of base poses for picking up the objects such that the total navigation and grasping time is minimized. This is a Combinatorial Optimization (CO) problem <cit.>, which can be addressed using exact methods such as dynamic programming, which ensure optimal solutions <cit.>. However, these methods are computationally expensive <cit.>, prohibiting their use in practice on robots as they often require re-planning due to changes in the object configuration, such as those caused by human interference or collisions with other objects during the picking process. Consequently, various approximate solutions <cit.> have been proposed, balancing optimality and computational efficiency <cit.>. Several recent works have explored using learning-based methods for solving CO problems, such as routing problems <cit.>. These methods offer near-optimal and computationally efficient solutions. Drawing inspiration from these works, we propose a learning-based approach for determining the optimal sequence of base poses for grasping all the objects in the scene. However, a significant distinction exists between the routing problem and determining the sequence of base poses for grasping.
In the routing problem, costs depend on the node features (for example, city coordinates in the Travelling Salesman Problem). In the base pose sequence planning problem, node features consist of object poses and the cost depends on the base pose selected for grasping the object. Furthermore, each object can be grasped from several different base poses. Thus, in addition to learning the optimal sequence in which the objects should be grasped, the base pose for grasping each object also needs to be learned. This makes it more challenging to learn compared to routing problems. Furthermore, due to the limited view of the robot's onboard camera and uncertainty in the robot's self-localization, often only uncertain object poses are available for base pose sequence planning. Existing works that learn base poses for grasping have focused on grasping a single object <cit.>. For planning the sequence of base poses for grasping multiple objects, the current robot pose as well as the poses of all the objects in the scene also must be considered. This poses two main challenges: * Varying number of states and actions. As the number of objects in the scene can vary, the state and actions cannot be represented using a fixed-dimensional vector. * Sample inefficiency. Learning in such a high-dimensional state and action space requires a large amount of training data. We address the first challenge by representing states and actions as a graph and using Graph Neural Network (GNN) to encode a state into a fixed dimensional vector. To address sample inefficiency, we use Layered Learning (LL) <cit.> in combination with Reinforcement Learning (RL) similar to our previous work <cit.>. In Layer 1, we learn the grasp sequence policy which selects the next object to grasp among the remaining objects (see Fig. <ref> first row). In Layer 2, we learn the base pose policy which predicts the base pose for grasping the object selected by grasp sequence policy (see Fig. <ref> second row). We choose to ignore object pose uncertainties to simplify learning. Moreover, by augmenting robot onboard camera views with external cameras in the environment and temporal fusion, accurate pose estimates can be obtained for pre-grasp planning <cit.> such as the base pose sequence planning. To summarize, we make the following contributions: * We formulate the problem of base pose sequence planning to optimize total navigation and grasping costs as an RL problem. * We sequentially learn the base poses for grasping individual objects and the object grasp sequence using LL. * We address the variable state and action space challenge in grasp sequence planning by formulating the problem as a graph node regression problem. * Through experimental evaluation, we show that   can reduce the total planning and execution time by more than 50% compared to the best-performing baselines with almost the same success rate. § RELATED WORK §.§ Explicit base pose planning The selection of a base pose for grasping an object relies on the availability of valid Inverse Kinematics (IK) solutions to achieve the desired grasp pose. Searching for base poses with valid IK solutions in SE(2) can be computationally intensive. Therefore, existing works have suggested the utilization of Inverse Reachability Maps (IRM) <cit.>. IRM discretizes the base pose space using a grid-based approximation and stores the base poses from which IK solutions are available for the selected grasp pose in the offline phase. 
During online execution, a specific heuristic is employed to select a particular base pose. The availability of an IK solution does not guarantee that a valid trajectory can be planned to the desired end-effector pose, as trajectory planning depends on several factors such as self-collision, collision with other objects in the scene, joint limits, manipulability ellipsoid of the manipulator, etc. As a result, additional online validations are required to ensure that the trajectory can be planned from the selected base pose. Recent works have also proposed learning-based methods to predict the base pose for grasping single objects <cit.>. These methods have shown to be much more computationally efficient and do not suffer from grid-based approximation like IRM. Difference. In this work, we learn to plan the optimal base pose for grasping an object while also minimizing the combined cost of navigating to the selected base pose from the robot's current base pose and grasping the object. §.§ Grasp sequence planning Being a Combinatorial Optimization (CO) problem, grasp sequence planning can be solved using exact methods that provide optimal solutions at high computational cost <cit.>, or evolutionary <cit.> or heuristic methods <cit.> that offer sub-optimal solutions at low computational cost. Sørensen et al. <cit.> have employed dynamic programming with memoization to find optimal base pose sequences; however, the quality of obtained solutions directly depends on the action space resolution used for computing the costs. High action space resolution for cost computation produces better solutions but at a high computational cost. Most works that find sub-optimal but quick solutions using non-exact methods utilize IRM and make certain assumptions, such as all objects can be grasped from a single base pose <cit.>, or base pose orientation is fixed <cit.>, or the order in which the objects should be grasped is already known <cit.>, to simplify the complexity of the problem. Difference. In this work, instead of making any such assumptions, we let the robot itself explore the base pose space for grasping objects in the scene and learn the optimal base pose sequence. §.§ Combinatorial optimization and learning Exact methods, such as dynamic programming, can be applied to any generic CO problems to obtain optimal solutions, albeit at a very high computational cost. Conversely, approximate methods provide quick but sub-optimal solutions by making certain assumptions designed by domain experts to simplify the problem. Moreover, for similar problem instances, such as base pose sequence planning for the same workspace, the optimal solutions would be similar. Hence, learning techniques such as RL can be employed to search for heuristics using data instead of hand-crafted heuristics <cit.>. Initial works with learning-based solutions for CO problems, such as Pointer networks <cit.>, used supervised data to find the solutions. Later works, such as <cit.>, trained policies in an unsupervised manner using RL, attention mechanisms <cit.>, etc. In recent years, Graph Neural Networks (GNN) <cit.> have emerged as efficient state representations for CO problems. GNNs can learn the vector representation that encodes crucial graph structures required to solve CO problems efficiently <cit.>. Difference. 
In this work, we employ the Graph Attention Layers <cit.> to learn a vector representation that encodes relevant grasp scene information for learning the grasp sequence in an unsupervised manner using REINFORCE with greedy rollout baseline, similar to <cit.>. § PROBLEM FORMULATION We address the problem of picking up a set of N rigid objects from a table using a mobile manipulator robot. We assume that the objects can be grasped using an overhead (top-down) grasp and that the robot has a navigation stack <cit.> to navigate to the planned base pose and a manipulation stack <cit.> for grasping. It may not be possible for the robot to pick up all objects from one base pose n (∈SE(2)) and hence may have to move to a sequence of base poses = {1, 2, ...N}, to pick up N different objects. Our objective is to determine the optimal sequence for grasping objects and the corresponding base pose for each object, ensuring time-efficient completion of the pickup task. We formulate this as an RL task. During the training stage, N objects are randomly placed on the table. Each RL episode consists of a maximum of N steps. At each step, the robot predicts the next base pose n (n∈ [1, N]) and the object to grasp m. The objective of the RL agent is to complete the task efficiently, minimizing the total execution time for navigation and grasping . Learning such a policy requires information about the current robot base pose and the object poses. Thus, the state space consists of: = {, {1, 2, ...M}}, where ∈SE(2) represents the robot base pose in the table frame , m∈SE(2) denotes the m-th object pose in the table frame , and M ≤ N is the number of objects yet to be grasped. The action space consists of actions: = {n, {1, 2, ... M}}, where n∈SE(2) represents the predicted next robot base pose in the frame of the object selected for grasping, and m signifies the probability of grasping object m. An episode ends when the agent exceeds N steps, collides with the table, or when all the objects are grasped. Further, all states and actions are internally represented with a time variable t, which, however, is omitted in our notations for convenience. § We decompose the task of learning to plan a base pose sequence into two sub-tasks: * Selecting the next object m to be grasped from among the M remaining objects using the grasp sequence policy (Layer 1 in Fig. <ref>). * Determining base pose n for grasping the selected object m using the base pose policy (Layer 2 in Fig. <ref>). Both sub-tasks are learned within the LL framework as shown in Fig. <ref>. The base pose policy is learned before the grasp sequence policy as n is required to perform the action predicted by . In the following sections, we describe learning to estimate the base pose for grasping (Section <ref>) and grasping sequence (Section <ref>). §.§ Learning to estimate base pose for grasping The base pose policy is learned using the Soft Actor-Critic (SAC) algorithm <cit.> as a single-step policy in Layer 2 (see Fig. <ref>(b)). Each episode consists of a single step wherein the object m is randomly placed on the table and the robot is randomly placed in the room within the 3m radius of the table. Given the object pose m and the robot base pose the agent learns to predict the base pose n for grasping the object m: n∼( · |, m; ϕ_base), where ϕ_base are learnable parameters. The base pose n is predicted in the object m frame; i.e., it is a transformation from the object frame m to the robot base frame b; mb. 
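Because the predicted base pose is an SE(2) transform expressed in the frame of the selected object, executing it requires composing that relative pose with the object's pose in the table (or world) frame before sending a navigation goal. The short sketch below shows this standard composition; the (x, y, yaw) parameterization and function names are illustrative rather than taken from the paper's code.

import numpy as np

def se2_matrix(x, y, theta):
    # Homogeneous transform for a planar pose (x, y, yaw).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def base_pose_in_table_frame(object_pose_table, base_pose_object):
    # Compose T_table_object with the predicted T_object_base to obtain T_table_base.
    T_tb = se2_matrix(*object_pose_table) @ se2_matrix(*base_pose_object)
    return T_tb[0, 2], T_tb[1, 2], np.arctan2(T_tb[1, 0], T_tb[0, 0])

# Example: object at (1.0 m, 0.3 m, 0.5 rad) in the table frame; the policy places the base
# 0.8 m behind the object along its negative x-axis, facing it.
print(base_pose_in_table_frame((1.0, 0.3, 0.5), (-0.8, 0.0, 0.0)))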
The reward is defined as: [t] (m,n) = +γ_1 ·1 (collision(m,n)) + 1(IK(m, n)) ·[ γ_2 + γ_3/1+ + γ_4/1+], where 1(collision(m,n)) is 1 if there is a collision with the table after moving to n and 0 otherwise; 1(IK(i, n)) is 1 if IK solutions are available to grasp the object m after moving to the base pose n and 0 otherwise; is the time required to navigate from current robot base pose to the next base pose n; is the time required to grasp the object m from the base pose n and γ_1, γ_2, γ_3, and γ_4 are hyper-parameters. §.§ Learning to estimate grasping sequence In Layer 1, a probability for grasping each object m (among the M remaining objects on the table) is learned while using the policy already learned in Layer 2 for taking action n. Thus, the agent uses a composite policy for exploration, and only the parameters ϕ_seq of are learned here (see Fig. <ref> (a)): i∼ ( · | i, {j}_j, ; ϕ_seq), i,j ∈{1...M} j ≠ i , k = _i{i}_i, n∼ ( · |, k; ϕ_base), where i is the object for which the grasp probability is being calculated, {j}_j are other objects in the scene that need to be grasped and ϕ_seq are learnable parameters. Use of the already learned base pose policy reduces the exploration space as only the grasp sequence order needs to be explored such that the reward over the entire episode is maximized. The reward for learning grasping sequence, , is calculated as: (, ) = -γ_5 ·, where is the time required to navigate from the current robot base pose to the predicted base pose and γ_5 is a hyper-parameter. As the number of objects in the scene is not fixed, the state for learning the grasp sequence policy cannot be represented using a fixed-dimensional vector similar to the state for the base pose policy . Since Graph Neural Networks (GNN) are invariant to node permutations, we use GNN for encoding state into fixed dimensional vector. Encoder. We use Graph Attention Layers <cit.> to encode relevant information into a context embedding and formulate the grasp sequence policy as a graph node regression problem. We use a heterogeneous graph with three different types of nodes: the robot , the object under consideration for grasping i, and other objects to be grasped j. In the first layer, context embeddings for each node are generated as shown in Fig. <ref>. All three types of nodes , i, j are initially projected to a higher dimensional space using weights w_r, w_g, and w_o respectively. Attention coefficients α are then calculated to determine the level of attention to be given to other objects in the scene j while encoding the context embedding for the object under consideration for grasping i. Thus, the context embedding for the object under consideration for grasping i is encoded as: h_i = w_g ·i + w_r · + ∑_j ∈Ω(i)α_i, j· w_o ·j, where Ω(i) are the object neighbors of the object i (other objects to be grasped in Fig. <ref>). Decoder. Each episode consists of N steps (number of objects to grasp). At each time step, the grasp probability i is calculated for all objects in the scene that have not yet been grasped. First, the encoder processes relevant information into a fixed-dimensional context embedding for each object. Subsequently, the decoder, which is a Multi-Layer Perceptron (MLP), predicts the grasp probability for each object using the encoded embedding (Graph Node Regression). The object to be grasped is selected by sampling from a categorical distribution parameterized by the grasp probabilities i over objects. 
The base pose policy for the selected object class is used to determine the base pose and complete the pickup. Fig. <ref> presents a decoding example for a 4 objects scene. The grasp sequence policy is learned using REINFORCE <cit.> with a Greedy Rollout Baseline similar to <cit.>. The gradient of loss ℒ(ϕ_seq|S) for optimizing learnable parameters ϕ_seq in state S is calculated as: ∇ℒ(ϕ_seq|) = - ((, ) - ()) ·∇logk, where k is the grasp probability for the selected object k and (S) is the baseline reward <cit.>. The baseline reduces variance and accelerates learning. Attempting to learn using actor-critic proved unsuccessful due to difficulties in effectively representing the state and actions for learning the value function (critic). § EXPERIMENTAL SETUP §.§ Experiment and implementation Details For evaluating our work, we created an environment in NVIDIA Isaac Sim, using the mobile manipulator platform and a rectangular table (2.0m×0.8m) with up to 10 YCB benchmark <cit.> objects (belonging to 5 different classes) on it, as shown in Fig. <ref>. Overhead (top-down) grasp poses for grasping objects of different classes were pre-defined. For the base pose policy learning, we used a Multi-Layer Perceptron (MLP) with three hidden layers in both the Actor and Critic networks in SAC <cit.>. Each hidden layer comprised 256 neurons with ReLU activation. The learning rate was set to 3e-4. The reward hyper-parameters γ_1, γ_2, γ_3 and γ_4, were empirically set to -2e5, 1e6, 5e5, and 5e5, respectively. For the grasp sequence policy learning, we used a 64-bit vector representation learned through five 64-bit Graph Attention Layers <cit.> followed by ReLU activation. This vector representation served as input to an MLP with two hidden layers, each comprising 64 neurons with ReLU activation. Both networks were optimized simultaneously using an Adam optimizer <cit.> with gradients computed via REINFORCE with a greedy rollout baseline <cit.> and a learning rate of 1e-3. The reward hyper-parameter γ_5 was empirically set to 1e3. In each episode, the robot's starting pose was randomly sampled within a 2.5-3m radius around the table. For an episode with N objects on the table, the robot can select up to N base poses for grasping all the objects. The episode terminates when the robot has visited N base poses or has grasped all the objects or the predicted base pose cannot be reached as it will result in a collision with the table. The state was calculated based on the states provided by the simulator. To expedite training, instead of using a navigation stack to move to the predicted base poses, the robot was teleported. The navigation cost was computed using the approximation based on the linear and angular travel distance <cit.>. For grasping, the grasp execution time was determined using the Lula Trajectory generator available in NVIDIA Isaac Sim, which provides an approximate grasp trajectory execution time without executing the actual trajectory, while IRMs were computed using the Lula Kinematics Solver with a discretization of 10cm and 45^∘. For the algorithmic implementation of our approach, we used the Mushroom RL library <cit.>. All the experiments were carried out on the workstation equipped with Intel Core i9-13900KF 24-Core processor, 64 GB RAM, and NVIDIA GeForce RTX 4090 24GB GPU. 
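The policy-gradient update above can be written compactly in PyTorch. The sketch below is an illustrative re-implementation under stated assumptions: policy.grasp_logits(state) returns one logit per remaining object, env_rollout returns the (negative navigation-time) reward for picking a given object, and the baseline is simplified to the greedy action of the current policy, whereas the greedy rollout baseline of Kool et al. uses a periodically refreshed frozen copy of the policy.

import torch

def reinforce_step(policy, optimizer, states, env_rollout):
    losses = []
    for state in states:
        logits = policy.grasp_logits(state)                  # one logit per remaining object
        dist = torch.distributions.Categorical(logits=logits)
        k = dist.sample()                                     # sampled object to grasp
        reward = env_rollout(state, k.item())                 # R(S, a), e.g. -gamma_5 * t_nav
        with torch.no_grad():
            baseline = env_rollout(state, logits.argmax().item())  # b(S) from the greedy choice
        losses.append(-(reward - baseline) * dist.log_prob(k))     # loss whose gradient is -(R - b) * grad log p_k
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()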
§.§ Experiment objectives and baselines The experiments aim to verify whether the proposed learning-based method can produce solutions comparable to those obtained using exact and approximate methods in a shorter computation time (Section <ref>). The following baselines are considered: Proximity-Based Greedy selection (PBG): IRMs are used to obtain a set of base poses from which each object can be grasped. The robot selects the object closest to it for grasping next and uses the greedy selection strategy based on the navigation cost to select the base pose for grasping. Minimum Base Poses (MBP): IRMs are used to obtain a set of base poses from which each object can be grasped. The base poses are then selected to minimize the number of base poses required for grasping all objects. Subsequently, the robot employs a greedy selection strategy based on navigation cost to determine the next base pose. This is similar to <cit.> without making any assumptions regarding the action space such as fixed base orientation. Dynamic Programming (DP): IRMs are used to acquire a set of base poses from which each object can be grasped. Then navigation and grasp execution costs are computed for all the base poses. Dynamic Programming with memoization is then used to plan the optimal sequence of base poses similar to <cit.>. PBG and MBP are approximate methods, whereas DP is an exact method. As IRM does not guarantee that a grasp trajectory can be planned from the base pose, we also present results for PBG and MBP by first validating the set of selected base poses and only considering the base poses from which trajectory can be planned for grasping the object. These baselines are referred to as PBG-GC and MBP-GC. In addition, we present ablation studies in Section <ref> to validate our design choices. First, we compare the learning performance of base pose policy when the base pose n is predicted in the frame of the object m selected for grasping and in the table frame . Second, we compare the learning performance of grasp sequence policy (i) with and without using a greedy rollout baseline with REINFORCE and (ii) with and without using attention coefficients α for encoding context embedding. § RESULTS §.§ Experiment 1: Planning and execution time analysis In Table <ref>, we present the mean and standard deviation values for planning time, execution time (navigation and grasping ), total time, and the percentage of objects grasped for the five selected baselines and . We considered two tasks: `5-objs' and `10-objs', each with 5 and 10 objects to grasp, respectively. We evaluated each task over 50 random scenes. During navigation, the maximum base linear and angular velocities were set to 0.5m/s and 0.5rad/s. During manipulation with the UR5e manipulator, the maximum velocities for shoulder and elbow joints were set to 1.0rad/s, and for wrist joints, it was set to 2.0rad/s. In real-world setups, several challenges related to the robot's perception can contribute to additional execution time and failures. These include planning robot camera views to accurately estimate object poses before grasping, performing 6D pose estimation, etc. Furthermore, inaccurate pose estimates can lead to grasp failures. Since these challenges are not addressed in this work, to avoid their influence, we used a simulated environment for evaluation. Table <ref> shows that, as expected, DP, being an exact method, produces the most optimal solutions in terms of total execution time, with more than 97% of objects successfully grasped. 
PBG-GC and MBP-GC, which are approximate methods, also produce near-optimal solutions. However, all these baselines have very high planning time. The majority of the planning time is attributed to the computation of navigation and grasping costs for the action space indicated by the IRM. PBP and MBP have very low planning time because they do not involve any cost computation, assuming that trajectories can be planned to grasp the object from all the base poses in the IRM. However, as this assumption doesn't always hold true, especially for 6 DOF manipulators like UR5e, they tend to perform poorly. produces solutions comparable to those produced by PBG-GC and MBP-GC in terms of total execution time and the percentage of objects grasped, but with significantly lower planning time. While DP produces better solutions in terms of execution time, it also has high planning time. Therefore, when it comes to total planning and execution time   outperforms all the baselines for both the tasks. §.§ Experiment 2: Qualitative results In Fig. <ref>, we present qualitative results for a random scene using all the baselines and our method . It can be observed that the base poses n planned by  are farther away from the table compared to the DP solutions. This occurs because the base pose policy learns to maintain a safe distance from the table, given the high penalty for collision with the table. Consequently,  has longer grasping time as the manipulator requires longer trajectories to grasp the objects. All baselines have a discrete base pose space with the discretization of 10cm and 45^∘. Consequently, the base poses planned by the baselines are perfectly aligned with each other, requiring only linear robot motions to move between them. In contrast,  predicts base poses in the continuous space and hence they are not perfectly aligned. As a result, the robot requires both linear and angular motion to move between them. This incurs significant navigation costs and explains the higher navigation times for . In this work, we have used generic rewards. Both of the above issues can be addressed through task and robot-specific reward engineering. §.§ Experiment 3: Ablation analysis Learning performance of . Fig. <ref> compares the performance of base pose policy using the object frame m and the table frame for predicting base poses n during training. When using the table frame for predicting base poses, quickly learns highly optimal base poses for grasping objects in specific regions of the table. However, it fails to effectively explore the base pose space for objects placed anywhere on the table. Conversely, the use of object frame leads to very stable learning. This observation is further supported by Fig. <ref>, which shows the learned base poses (colorbar colors) in both cases for 1000 random object poses (orange) on the table (red rectangle). It can be seen that base poses learned in object frame are more generic and have a grasp success rate of over 96%, compared to base poses learned in the table frame, which achieve only around a 91% grasp success rate. These results can be attributed to the fact that in the object frame, the agent only needs to explore the region around the object to predict base poses, simplifying the learning process. Learning performance of . Fig. <ref> compares the performance of the grasp sequence policy with and without the Greedy Rollout Baseline and the use of attention coefficients α during training. 
The use of the baseline accelerates learning and results in better policies with higher rewards. This improvement occurs because the baseline reduces variance during learning <cit.>. Additionally, we observe that learning attention coefficients (blue) leads to more stable learning compared to learning without attention coefficients (green). The attention coefficients learn to determine how much attention should be given to other objects in the scene, thus generating more informative context embeddings. § CONCLUSION AND FUTURE WORK In this work, we have presented BaSeNet, a learning-based approach for planning mobile manipulator base pose sequences for pick-up tasks while optimizing total navigation and grasping time. We compared our work with three baselines (+2 variations) that use exact and approximate methods for solving the problem. Our experiments show that BaSeNet produces comparable solutions in significantly less computation time. In this way, BaSeNet allows robots to quickly re-plan when the object configuration in the scene changes, either due to human intervention or collision with other objects in the scene. A limitation of this work is that it does not consider uncertainty in the object poses and the robot's self-localization. However, execution failures can be prevented by estimating the uncertainties <cit.> and assessing whether the estimated errors are acceptable for the successful execution of the planned action <cit.>. If uncertainties exceed acceptable thresholds, the robot can defer the action execution until uncertainties reduce to acceptable levels. Future works should investigate the inclusion of pose uncertainties in the state space during learning so that robots can plan the base poses considering uncertainties. § ACKNOWLEDGMENT This work was supported by the Innovation Fund Denmark's FacilityCobot project and the European Union's Fluently project.
http://arxiv.org/abs/2406.08737v1
20240613014709
Field investigation of 3D snow settling dynamics under weak atmospheric turbulence
[ "Jiaqi Li", "Michele Guala", "Jiarong Hong" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
jhong@umn.edu Saint Anthony Falls Laboratory, University of Minnesota, Minneapolis, MN, USA Department of Mechanical Engineering, University of Minnesota, Minneapolis, MN, USA Department of Civil, Environmental, and Geo- Engineering, University of Minnesota, Minneapolis, MN, USA § ABSTRACT Research on the settling dynamics of snow particles, considering their complex morphologies and real atmospheric conditions, remains scarce despite extensive simulations and laboratory studies. Our study bridges this gap through a comprehensive field investigation into the three-dimensional (3D) snow settling dynamics under weak atmospheric turbulence, enabled by a 3D particle tracking velocimetry (PTV) system to record over a million trajectories, coupled with a snow particle analyzer for simultaneous aerodynamic property characterization of four distinct snow types (aggregates, graupels, dendrites, needles). Our findings indicate that while the terminal velocity predicted by the aerodynamic model aligns well with the PTV-measured settling velocity for graupels, significant discrepancies arise for non-spherical particles, particularly dendrites, which exhibit higher drag coefficients than predicted. Qualitative observations of the 3D settling trajectories highlight pronounced meandering in aggregates and dendrites, in contrast to the subtler meandering observed in needles and graupels, attributable to their smaller frontal areas. This meandering in aggregates and dendrites occurs at lower frequencies compared to that of graupels. Further quantification of trajectory acceleration and curvature suggests that the meandering frequencies in aggregates and dendrites are smaller than that of morphology-induced vortex shedding of disks, likely due to their rotational inertia, and those of graupels align with the small-scale atmospheric turbulence. Moreover, our analysis of vertical acceleration along trajectories elucidates that the orientation changes in dendrites and aggregates enhance their settling velocity. Such insights into settling dynamics refine models of snow settling velocity under weak atmospheric turbulence, with broader implications for more accurately predicting ground snow accumulation. § INTRODUCTION Understanding the intricacies of snow settling dynamics is critical for accurately modeling snow accumulation, which has various scientific and socio-economic implications. These include issuing natural hazard warnings such as avalanches <cit.> and snow-melt floods <cit.>, understanding snow hydrology and its influence on local climates <cit.>, and optimizing traffic management during snow events <cit.>. A crucial determinant in the rate of snow accumulation is the settling velocity of snow particles, which can vary significantly, ranging from 0.5 m/s to speeds exceeding 3 m/s <cit.>. This variability greatly influences the drift distance of snowflakes as they descend from clouds to the ground. Presently, weather forecast models often struggle with precise predictions of ground snow accumulation, leading to potential economic repercussions <cit.>.
The variability in the settling velocity of snow particles in the atmosphere has been historically attributed to their morphology (e.g., size and shape), which poses a challenge in predicting their aerodynamic drag due to their complex and variable shapes <cit.>. Snow particle morphology is mainly determined by environmental conditions within clouds, such as temperature and humidity (i.e., supersaturation). The microphysics of ice crystal formation, extensively studied in works like <cit.> and <cit.>, reveals a variety of emerging crystal shapes. These range from disk-like plates and dendrites to thin-cylinder needles and columns. In conditions of high supersaturation, small, supercooled droplets can adhere to these crystals through a process known as riming, leading to the creation of sphere-like graupels. As these ice crystals fall from clouds to the ground, inter-particle collisions occur, resulting in increasingly complex particle structures such as fragments and aggregates. Besides, the interaction between air turbulence and snow particle settling has been often overlooked in simulations and laboratory experiments. Atmospheric turbulence is typically sustained by the large velocity gradients of the high Reynolds number atmospheric surface layer, where coherent structures across various scales emerge, and modulate the snow settling velocity <cit.>. Historically, measurements of snow particle fall speed did not account for the influence of atmospheric turbulence. The terminal fall speed, strictly defined in quiescent flow, was directly linked to aerodynamic drag and influenced by factors like particle size, shape, and mass. Various studies, including early research by <cit.>, have sought to empirically correlate fall speeds with particle sizes. They observed an increase in velocity with size for graupels, crystals with droplets, and needles, while noting that dendrites and powder snow typically fall at a slower rate (∼ 0.5 m/s), regardless of size. However, as their study was carried out in the laboratory setting, the snow particles might not reach their terminal velocity in a confined space. In a later study, <cit.> introduced equations for calculating the terminal velocities of different snow morphologies, based on field measurements of drag coefficients, aspect ratios, and densities. Following this, <cit.> developed a specialized measurement instrument under a 3.8-meter-high shielded tower, to ensure snow particles reached terminal velocity during measurement. Their extensive collection of over 300 varied snow particles led to the development of empirical equations based on dimensional power laws, each tailored to specific snow morphologies and dependent on particle size. These studies underscore the importance of size and shape in determining the varying terminal velocities of snow particles. Despite these advancements, a comprehensive understanding of the detailed settling kinematics for these diverse morphologies remains an area for further exploration. Kajikawa's extensive research from 1976 to 1997 laid a foundational understanding of snow particle dynamics, focusing on the free-falling behaviors of various snow particle types, such as columnar snow, early snow/aggregates, and plate-like snow <cit.>. In these laboratory experiments, they documented a spectrum of free-fall motions, ranging from stable, horizontal movement-free descents to more complex patterns like non-rotating glides, swings, rotating glides, and spiral motions. 
Notably, the spiral motions exhibit inherent frequencies that correlate with the particle's Reynolds number, providing insights into the free-fall dynamics of snow particles. More systematic studies investigated the falling dynamics of idealized anisotropic particles, including disks and thin cylinders. These studies revealed that due to their large aspect ratios, such particles often orient themselves to maximize their projected area downwards during stable falls, i.e., preferential orientation. However, this steady fall is not always maintained; instabilities can lead to fluttering and even tumbling motions. These falling dynamics were explored extensively through experiments and simulations by researchers like <cit.>, <cit.>, and <cit.>. Their work demonstrated the diverse falling styles of disks in quiescent flow, influenced by varying combinations of Reynolds number (Re) (or Galileo number, Ga; Archimedes number, Ar), and dimensionless moment of inertia (I^*). Similarly, thin cylinders, as studied by <cit.> and <cit.>, exhibit comparable settling dynamics in quiescent flow. It was observed that due to their larger aspect ratio, even minor disturbances could induce more pronounced instabilities, leading to complex spinning (rotation around the axis of symmetry) and tumbling (rotation around other axes) in these particles. These movements are important as they affect the settling of these particles through the air, potentially changing their frontal area and their drag coefficient, which in turn influences their settling velocity. As a result, particle morphology and falling styles are deeply interconnected. Air turbulence has been observed to modulate the settling velocity and spatial distribution of heavy inertial particles, regardless of their shape, simply due to their inability to follow exactly the motion of the fluid flow around them <cit.>. Most studies have focused on point particles or small spherical particles, trying to separate morphological effects from turbulence effects. As anisotropic particles already exhibit various dynamics in quiescent flow, turbulence introduces more disturbances, suggesting that the two effects can hardly be decoupled <cit.>. <cit.> conducted experiments on free-falling disks and observed unique and complex settling behavior in turbulent flows (slow tumbling & levitation). These motions displayed frequencies significantly lower than those of natural disks settling in still air. Interestingly, they noted an increase in settling velocity with greater horizontal velocity fluctuations and a decrease in oscillation frequency. Moreover, <cit.> conducted simulations on settling spheroids with various shape factors, including two extremes: disks (oblate spheroids) and needles (prolate spheroids), under various levels of turbulence. They observed that the preferential orientation of anisotropic particles is randomized by increasing level of turbulence, thus leading to more enhanced settling velocity (even though the morphology effect remained strong under weak turbulence). Despite the extensive numerical simulations and laboratory experiments, there remains a notable gap in field data that capture the complexity of realistic snow particles and atmospheric flow conditions, as compared to the usage of simplified model particles <cit.>, and controlled laboratory settings <cit.>. Therefore, field data is crucial for a deeper understanding of the settling dynamics of snow particles with varied morphologies in weakly turbulent conditions. 
Our group has been actively involved in field investigations of snow settling for the last decade. A significant advancement was the development of a super-large-scale particle image velocimetry system (SLPIV) by <cit.>. This system has been instrumental in visualizing flow structures in the wake of wind turbines <cit.> and characterizing the atmospheric turbulent boundary layer <cit.>. More recently, it has been applied to research on snow settling dynamics. <cit.> utilized this technology to quantify the settling trajectories of snow particles, measuring their Lagrangian velocity, acceleration, and aerodynamic properties. Their findings revealed a significant enhancement in settling velocity due to turbulence. Building on this, <cit.> explored snow settling and clustering under various conditions, noting clustering at near-critical Stokes numbers and an increase in settling velocity correlating with concentration and cluster size. These findings indirectly support the preferential sweeping mechanism. Further, <cit.> provided direct evidence of preferential sweeping in atmospheric turbulence by simultaneously using SLPIV and PTV for flow and snow trajectory quantification. They observed increased snow concentration and enhanced settling velocity on the downward side of vortices, directly supporting the preferential sweeping mechanism. However, these studies were limited by planar imaging, which restricts the observation of snow particles' 3D motion, especially the spanwise motion, and did not consider the morphology effect of the snow particles. Therefore, comprehensive 3D field investigations and simultaneous, detailed measurements of snow morphology are essential. In this study, we aim to bridge this gap by conducting field measurements during snow events using an imaging-based three-dimensional particle tracking velocimetry (3D PTV) system <cit.> for tracking 3D snow particle trajectories and a snow particle analyzer <cit.> for assessing snow morphology and density. Our objectives are threefold: to understand how snow morphology influences snow aerodynamic properties, to determine the impact of morphology on particle 3D settling kinematics, and to assess how these dynamics affect snow settling velocity. Section <ref> of this paper will detail the measurement instruments and data processing procedures. Section <ref> will discuss the results and findings, followed by conclusions and discussions in Section <ref>. § METHOD We conducted a series of field experiments at the EOLOS field research station (Figure <ref>) in Rosemount, MN, USA, spanning the winter seasons from November 2021 to April 2023. The research station is well-equipped with a meteorological tower, which includes sensors for wind velocity, temperature, and humidity. These instruments are crucial for assessing the atmospheric and turbulent conditions during our field experiments. The tower is fitted with four sonic anemometers (CSAT3, Campbell Scientific) at heights of 10, 30, 80, and 129 m. These anemometers, with a 20 Hz sampling rate and path lengths of 5.8 cm horizontally and 10 cm vertically, provide detailed wind velocity data. Additionally, six cup-and-vane anemometers, each with a 1 Hz sampling rate, are positioned at elevations of 7, 27, 52, 77, 102, and 126 m to complement the wind measurements. In each field deployment, we utilized a three-dimensional particle tracking velocimetry (3D PTV) system, as described by <cit.>, to capture the trajectories of settling snow particles. 
To characterize the morphology and density of these snow particles, we employed a digital inline holography (DIH) system integrated with a high-precision scale, known as a snow particle analyzer, following the methodology outlined by <cit.>. §.§ Experimental setup and data processing Figures <ref>a and c illustrate the setup of our 3D PTV system. This system consists of four wire-synchronized cameras (Teledyne FLIR, FLIR Black Fly S U3-27S5C color unit with Sony IMX429 sensor: 1464 × 1936 pixels, 4.5 m/px) strategically positioned around a light cone 5.5 m away, spanning a 90-degree angle range. This light cone is created by reflecting and expanding light from a searchlight using a curved mirror, similar to the setup used in our planar measurements. The cameras are tilted upward with 58-degree angles, leading them to image at a sample volume 10 m above ground. Each camera is then connected to its own data acquisition unit. These units are equipped with a board-level computer for issuing image capture commands to the cameras, a solid-state drive for storing both the system software and captured images, and a dedicated power supply. The cameras capture images with 2× decimation (732 × 968 pixels) to reach 200 Hz frame rate. As the standard checkerboard method cannot be applied to a field of view 10 m above ground after dark in the field, we use the wand calibration method described by <cit.> for camera calibration. We use two colored light-emitting diodes (LEDs), set at a fixed distance apart on a carbon fiber rod, to act as the “wand" attached to an unmanned drone. This calibration process is conducted multiple times before and after each deployment to ensure the same field condition between camera calibration and snow particle imaging. We have developed custom-designed camera control software that synchronizes the image capturing process across all four cameras. The calibration of the cameras is conducted using the open-access software easyWand <cit.>, which involves capturing images of the two colored LEDs as they move within the imaging volume. The software utilizes the trajectories of the two LEDs from all four cameras to conduct the calibration, resulting in a final reprojection error within 0.25 pixels. For tracking snow particle trajectories, we utilize an open-source implementation of the shake-the-box (STB) method <cit.>. This version builds on the original STB method proposed by <cit.>, with enhancements specifically in the identification and removal of ghost particles. These improvements make it particularly suitable for our field data, which feature relatively high noise levels and a large field of view. This approach enables our system to capture snow particle trajectories within a considerable volume of approximately 4 × 4 × 6 m^3. The system achieves a spatial resolution of 6.3 mm/voxel and a temporal resolution of 200 Hz, allowing for detailed and precise tracking of snow particle movements. The tracked snow particle trajectories are then re-oriented as a group to have the average streamwise direction as the x direction. Thus, the y direction is defined as the spanwise direction, and the z direction is defined as the vertical direction. From these trajectories, we obtain the Lagrangian velocity, u=(u_x,u_y,u_z), the Lagrangian accelerations, a=(a_x,a_y,a_z), and the resulting curvature, κ=u×a / u^3, where × represents the cross product. 
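For illustration, a minimal sketch of how these Lagrangian quantities might be computed from a single tracked trajectory is given below, using the central differences discussed next. The 200 Hz sampling matches the recording rate, but the array names and the synthetic meandering trajectory are placeholders rather than measured data.

```python
import numpy as np

# Illustrative sketch (not the measurement pipeline): Lagrangian velocity,
# acceleration, and curvature from one tracked 3D trajectory sampled at 200 Hz.
dt = 1.0 / 200.0                       # PTV frame interval [s]
t = np.arange(0.0, 2.0, dt)            # 2 s of samples

# Synthetic meandering trajectory standing in for a tracked snow particle [m]:
x = 0.5 * t + 0.02 * np.sin(2.0 * np.pi * 1.5 * t)
y = 0.02 * np.cos(2.0 * np.pi * 1.5 * t)
z = -1.0 * t                           # settling at about 1 m/s
pos = np.stack([x, y, z], axis=1)      # (N, 3) positions

# Central differences (second-order accurate in the interior).
vel = np.gradient(pos, dt, axis=0)     # u = (u_x, u_y, u_z) [m/s]
acc = np.gradient(vel, dt, axis=0)     # a = (a_x, a_y, a_z) [m/s^2]

# Trajectory curvature: kappa = |u x a| / |u|^3.
speed = np.linalg.norm(vel, axis=1)
kappa = np.linalg.norm(np.cross(vel, acc), axis=1) / speed**3   # [1/m]

print(f"mean speed {speed.mean():.2f} m/s, mean curvature {kappa.mean():.2f} 1/m")
```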
We use the second-order central difference method to calculate the Lagrangian velocity (first-order derivative) and acceleration (second-order derivative). The approximation introduces inherent errors, O(Δ t^2), which depends directly on the time step and is relatively small. However, the positioning errors of the snow particles can propagate and magnify in the velocity and acceleration calculation. As discussed by <cit.> and <cit.>, the iterative particle reconstruction, shake-the-box tracking, and trajectory filtering techniques significantly refine and reduce positioning errors. We quantify the root mean square of the difference between trajectory positions before and after filtering to be 0.3 pixels. This reduction potentially compensates for the positioning errors inherited from camera calibration, resulting in smaller errors in velocity and acceleration calculations. Consequently, the actual uncertainties in measuring velocity and acceleration are primarily influenced by the selection of filter length, which ranges from 45 ± 2 frames (see Appendix <ref>). This leads to an average acceleration uncertainty of 0.34 m/s^2. To complement our 3D PTV system, we also deployed a snow particle analyzer near the 3D PTV setup to assess the morphology and density of snow particles during each snow event (Figure <ref>b and d). All these measurements are crucial for accurately estimating the terminal velocity of snow particles in still air. As shown in Figure <ref>c, the snow particle analyzer employs a digital inline holography (DIH) system, which captures holograms of snow particles within a sample volume of 2.9 × 2.2 × 14.0 cm^3. This system achieves a spatial resolution of 14 m/pixel and a temporal resolution of 50 Hz. Through image analysis of the holograms, we obtain detailed information on particle size and shape, specifically the area equivalent diameter (D_eq), major axis length (D_maj), minor axis length (D_min), area (A_e), etc. We also classify the shape of each particle into one of six types: aggregates, graupels, dendrites, plates, needles, and small particles. We define the characteristic particle size (D_p) as the area equivalent diameter for aggregates, graupels, and small particles, and as the major axis length for dendrites, plates, and needles. Additionally, a high-precision scale measures the weight of snow particles passing through the DIH sample volume, allowing us to estimate the average density of the particles. We also perform conditional sampling to achieve measurement of the density of individual snow particles. For estimating the aerodynamic properties of snow particles, we follow the method proposed by <cit.>. This method involves calculating the Best number X (also known as the Davies number), a dimensionless number that incorporates only the physical properties of snow particles and ambient air, and represents the equilibrium between gravity and drag forces. The Best number is defined as: X=C_D Re_p^2=8 ρ_p V_p g ρ_a/πμ^2(A/A_e)^1 / 4. Noteworthily, unlike particle Reynolds number (Re_p=ρ_a W_0 D_p / μ) and drag coefficient (C_D), the definition of Best number eliminates the need to incorporate particle terminal velocity in the formulation, which is not readily available for complex snow particles. In equation <ref>, ρ_p and V_p are the density and volume of the snow particles, respectively, which are specific to the type of snow particle, as detailed in <cit.>. 
Specifically, we approximate the complex snow particles as spheroids (graupels and small particles), combinations of small spheroids (aggregates), disks with thickness in correlation with their diameter (plates and dendrites), and thin cylinders (needles and columns). Such a method minimizes errors in volume estimation as compared to the typical spherical assumption used in the snow measurement community, resulting in the uncertainties of the volume estimation within 10% for snow particles with irregular shape (aggregates) and uncertainties of density within 20% for all demonstration cases in <cit.>. A_e is the effective snow particle imaged area, and A is the circumscribed area of the enclosing circle or ellipse. Such an area ratio, A / A_e, serves as a simplified 2D measure of porosity and is instrumental in better predicting the drag of complex snow particles. As the snow morphological parameters are quantified by the snow particle analyzer while particles settle in various orientations, we assume that the ratio A/A_e remains constant regardless of orientation. Finally, ρ_a and μ are the density and viscosity of air, respectively. Following the definition of the Best number, the drag coefficient of snow particles is modeled as a function of the particle Reynolds number, accounting for the unique morphology of snow particles. This approach indirectly incorporates the effect of snow particle density, which contributes to increasing the settling velocity. According to Stokes' law for Re_p ∼ O(1), the correlation for the drag coefficient dependent on the particle Reynolds number is C_D = 24/Re_p. However, the Stokes' law becomes invalid as the Reynolds number increases, especially for complex snow particles. Researchers have made various attempts to model the drag coefficient of snow particles theoretically <cit.>. As suggested by <cit.> and references therein, the drag coefficient of snow particles is modeled by considering the boundary layer surrounding the snow particles as a whole: C_D=C_0(1+δ_0/R e_p^1 / 2)^2 , where C_0=0.6 is an inviscid drag coefficient and δ_0=5.83 is a parameter controlling the evolution of the particle boundary layer, likely depending on the particle surface roughness, both empirically estimated. Equation <ref> has the form of corrected Stokesian drag for a rigid sphere <cit.>, but with different coefficients, modulating the transition from a linear drag at low Re_p to a constant drag coefficient C_0 in the Re_p independent regime. The effect of different snow morphologies is included in the A/A_e term in equation <ref>, which is then used to predict the snow type specific drag coefficients, C_De=(A/A_e)^3/4C_D. The C_0 and δ_0 parameters have been more recently updated, along with the dependency on the area ratio, by <cit.> and <cit.>. Additional corrections considering turbulent boundary layer, temperature, humidity, and accounting for different snow particle types, have been discussed in <cit.> and <cit.>. As described in <cit.>, we then obtain a semi-analytical and semi-empirical equation for the particle Reynolds number by combining equations <ref> and <ref>: R e_p=δ_0^2/4((1+4 X^1 / 2/δ_0^2 C_0^1 / 2)^1 / 2-1)^2 . The terminal velocity of the snow particles (W_0) in quiescent air is then calculated from the Reynolds number, R e_p=ρ_a W_0 D_p / μ. Once the terminal velocity is obtained, the aerodynamic particle response time is defined as τ_p=W_0/g. 
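To make this estimation chain concrete, the sketch below passes one hypothetical dendrite-like particle through the relations above, from the Best number to the particle Reynolds number, terminal velocity, and response time. The particle properties (size, density, volume, area ratio) are invented example values, the placement of the area-ratio factor follows a literal reading of the Best number equation above and is an assumption of the sketch, and C_0 = 0.6 and δ_0 = 5.83 are the empirical constants quoted above.

```python
import numpy as np

# Sketch of the terminal-velocity estimate described above; all particle
# properties are hypothetical example values, not measurements.
rho_a, mu, g = 1.2, 1.7e-5, 9.81       # air density, viscosity, gravity (SI)
C0, delta0 = 0.6, 5.83                 # inviscid drag coefficient, boundary-layer parameter

# Hypothetical dendrite-like particle as characterized by the analyzer:
D_p = 2.0e-3                           # characteristic size (major axis) [m]
rho_p = 280.0                          # particle density [kg/m^3]
V_p = 2.0e-10                          # particle volume [m^3]
area_ratio = 1.5                       # A / A_e (circumscribed over effective area)

# Best (Davies) number X = C_D Re_p^2, from particle and air properties only.
# The placement of the area-ratio factor follows a literal reading of the
# equation above and is an assumption of this sketch.
X = 8.0 * rho_p * V_p * g * rho_a / (np.pi * mu**2) * area_ratio**0.25

# Semi-empirical particle Reynolds number.
Re_p = (delta0**2 / 4.0) * (np.sqrt(1.0 + 4.0 * np.sqrt(X)
                                    / (delta0**2 * np.sqrt(C0))) - 1.0)**2

# Terminal velocity in still air and aerodynamic response time.
W0 = Re_p * mu / (rho_a * D_p)         # from Re_p = rho_a W0 D_p / mu
tau_p = W0 / g

print(f"X = {X:.0f}, Re_p = {Re_p:.0f}, W0 = {W0:.2f} m/s, tau_p = {tau_p:.3f} s")
```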
The analyses described have been meticulously applied to each snow particle type, leveraging the unique physical properties of individual particles, captured by the snow particle analyzer. Through a detailed examination of the collected holograms, we identify and classify each particle, subsequently analyzing their specific inertial properties, namely D_p, ρ_p, A, and A_e. By employing these properties within the Böhm model <cit.>, we were able to estimate the aerodynamic properties of each particle. This rigorous method allows us to calculate the distribution and mean values of the terminal velocity (W_0) and drag coefficient for individual snow particles and specific snow types. §.§ Turbulence and snow conditions in the field Over the course of the winter seasons from December 2021 to April 2023, we successfully carried out eight field deployments, encompassing a diverse range of environmental conditions. These deployments allowed us to study four major types of snow particles: aggregates, graupels, dendrites/plates, and needles/columns. We encountered wind speeds varying from a gentle 0.6 m/s to a more intense 8.4 m/s. Based on these wind speeds, we categorized the conditions into three turbulence levels: weak turbulence (wind speed less than 3 m/s with turbulent kinetic energy, TKE, below 0.3 m^2/s^2), moderate turbulence (wind speed between 3 and 6 m/s with TKE ranging from 0.3 to 2.0 m^2/s^2), and relatively strong turbulence (wind speed exceeding 6 m/s with TKE above 2.0 m^2/s^2). These turbulent properties were measured using the sonic anemometer positioned at a height of 10 m. Details of the estimation methods of these quantities can be found in <cit.>. We use the second-order structure function of the streamwise velocity fluctuation to estimate the dissipation rate (ε). The Taylor microscale (λ) is then calculated as λ=u^'√(15 ν / ε), where u^'=√((u_x^' 2+u_y^' 2+u_z^' 2) / 3) is the representative scale of fluctuating velocity, and ν is the viscosity of air. Given the variety of snow particle types and wind speeds, our field data encompasses a total of 31 distinct conditions. To effectively separate the influences of snow morphology and atmospheric turbulence on snow settling velocity, a more systematic classification of the field snow and turbulence conditions is essential. We propose using the settling parameter S v_L=W_0 / u^', which quantifies the relative impact of turbulence on snow gravitational settling <cit.>. This parameter represents the ratio of the snow particle’s terminal velocity in still air (W_0) to the root-mean-square of the turbulent velocity fluctuations (u^'). A higher value of Sv_L indicates that the influence of turbulence on the snow settling velocity is relatively minor. In our analysis, we utilized the settling parameter, wind speed, and turbulent kinetic energy (TKE) as key criteria to categorize our 3D PTV and snow particle analyzer datasets. This approach led us to identify four distinct groups, which we labeled as ‘weak turbulence’ cases, with relatively smaller Taylor Reynolds number (Re_λ) and higher settling parameters (Sv_L) as shown in Figure <ref>. We assess the turbulence and micro-meteorological conditions for each group detailed in Table <ref>, employing estimation methods as outlined in the studies by <cit.> and <cit.>. Each group is dominated by one specific type of snow particle, which constitutes more than half of the snow population in the dataset. 
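As an aside on the turbulence characterization underlying this grouping, the following sketch indicates how the dissipation rate, Taylor microscale, and settling parameter might be estimated from a sonic-anemometer record via the second-order structure function. The synthetic 20 Hz velocity series, the inertial-range bounds, and the example terminal velocity are placeholders, not deployment data.

```python
import numpy as np

# Sketch: dissipation rate from the second-order structure function of the
# streamwise velocity, then Taylor microscale and settling parameter.
# The synthetic velocity record, inertial-range bounds, and W0 are placeholders.
fs, nu = 20.0, 1.5e-5                  # sonic sampling rate [Hz], air kinematic viscosity [m^2/s]
U_mean = 2.5                           # mean wind speed [m/s], weak-turbulence case
rng = np.random.default_rng(0)
u_x = U_mean + 0.3 * rng.standard_normal(20 * 60 * int(fs))   # 20-min placeholder record

u_fl = u_x - u_x.mean()
u_prime = u_fl.std()                   # single-component proxy for u'

# Second-order structure function D_LL(r); Taylor's frozen-turbulence hypothesis
# converts time lags into streamwise separations r = U_mean * tau.
lags = np.arange(1, 200)
r = U_mean * lags / fs
D_LL = np.array([np.mean((u_fl[n:] - u_fl[:-n])**2) for n in lags])

# Inertial-range fit D_LL(r) = C2 (eps r)^(2/3), with C2 ~ 2.0.
C2 = 2.0
sel = (r > 0.5) & (r < 5.0)            # assumed inertial-range bounds [m]
eps = np.mean((D_LL[sel] / C2)**1.5 / r[sel])

lam = u_prime * np.sqrt(15.0 * nu / eps)   # Taylor microscale
W0 = 1.0                               # example terminal velocity [m/s]
Sv_L = W0 / u_prime                    # settling parameter

print(f"eps = {eps:.2e} m^2/s^3, lambda = {lam:.2f} m, Sv_L = {Sv_L:.1f}")
```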
These types are aggregates, graupels, dendrites, and needles, as detailed in Table <ref> and illustrated in the size and shape distributions in Figure <ref>. We leverage the capabilities of the snow particle analyzer, as detailed in Section <ref>, to estimate these physical properties of snow particles. During a selected one-hour period characterized by dominant snow particle types, our analysis encompasses 200,000 holograms for each type of snow particle. This comprehensive dataset yields detailed information about approximately 28,000 aggregates, 13,000 graupels, 30,000 dendrites, and 21,000 needles. Complementing the snow particle analyzer measurements, our 3D PTV datasets include a total of 500 seconds of images for each dominant snow type, which are broken down into 50-second segments throughout the one-hour period selected. This rich dataset facilitates the identification of millions of snow particle trajectories, specifically around 322,000 for aggregates, 285,000 for graupels, 1,037,000 for dendrites, and 182,000 for needles, providing orders of magnitude more data than our previous studies. Specifically, our 3D PTV system measures the complete 3D velocity and particle acceleration components. The additional spanwise dimension of the data, compared to planar measurements, enables a thorough analysis of snow particle kinematics, including trajectory curvature and meandering. Furthermore, the integration of the 3D PTV system with the snow particle analyzer allows us to correlate the specific morphology of snow particles (e.g., size, shape, and type) with their settling behavior. For the detected snow particles, their size and shape are measured through image analysis described in Section <ref>. Given that these particles may present various orientations relative to the imaging plane, relying solely on the projected area (or equivalent diameter) falls short of providing a precise representation of the characteristic size of each particle, especially the non-spherical ones. We thus define the particle size as the equivalent diameter for aggregates and graupels, the major axis for dendrites (diameter) and needles (length), as detailed in <cit.>. Upon closer examination, we observed notable differences among these types. Graupels and needles, for instance, tend to have a more uniform size distribution, with a smaller average size and standard deviation compared to aggregates and dendrites. Aggregates and dendrites, on the other hand, are generally larger, and their datasets include a mix of other particle types, resulting in a broader size distribution. We also analyzed the aspect ratio (i.e., the ratio between the minor and major axes lengths, D_min/D_maj) of these snow particles, defined as the ratio of their minor to major axis lengths, as measured by the snow particle analyzer. Graupels predominantly exhibit aspect ratios greater than 0.8, indicating their near-spherical shape. In contrast, the aspect ratios for the other types vary significantly from one, suggesting more anisotropic shapes. In this respect, note that the 2D holograms do not allow to accurately capture the averaged thickness of plate-like crystals due to the random particle orientation, unless further analysis is performed on selected particle images as in <cit.>. Furthermore, we measured the average density (ρ_p), together with the average particle size (D_p) and aspect ratio (D_min/D_maj), of the four datasets using the snow particle analyzer. 
Needles, being solid crystals with minimal riming, have the highest average density of 360 kg/m^3. Dendrites follow with an average density of 280 kg/m^3, as it is influenced by the gaps between branches, which contribute to the overall porosity of the particles. Graupels have an average density of around 220 kg/m^3, aligning with our previous measurements. Moreover, aggregates exhibit the lowest density of around 90 kg/m^3, as expected, attributable to their larger size and higher porosity. § RESULTS Utilizing the snow particle analyzer, we have successfully measured both the morphology and density of snow particles, enabling us to accurately predict their aerodynamic properties. Additionally, our 3D PTV system has provided detailed 3D settling dynamics from millions of snow particle trajectories. Armed with this comprehensive data, we address three key questions in the following section: First, how does the morphology of snow particles influence their aerodynamic properties? Second, in what ways does morphology impact the settling kinematics of these particles? And third, how do the varying settling dynamics among different types of snow particles affect the overall settling velocity of snow? These inquiries form the core of our investigation, shedding light on the intricate interplay between snow particle morphology and their settling behavior through the atmosphere. §.§ Aerodynamic properties This section presents an in-depth examination of the aerodynamic characteristics, including their terminal velocity, drag coefficient, and settling velocity, for each snow particle type. Table <ref> consolidates key aerodynamic parameters derived from our analysis: the average settling velocity (W_s) obtained through 3D particle tracking velocimetry (3D PTV), the average estimated still-air terminal velocity (W_0) as outlined in Section <ref>, the velocity fluctuation (u^') and the Kolmogorov time scale (τ_η) of the flow, the particle's Stokes number (St_η=τ_p/τ_η), their settling parameter (Sv_L=W_0/u^'), and the Froude number (Fr_η=a_η/g, where a_η=u_η/τ_η is the Kolmogorov scale acceleration). Needles exhibit the highest terminal velocity among all four types. With the same particle size, the cylindrical-shaped needles have the smallest projected area and the highest density, leading to larger terminal velocities. The Stokes number gauges the particle's velocity response to sudden changes in flow, with values around one signifying a critical condition for turbulence-particle interactions. Settling parameters greater than one imply a weak influence of turbulence on the settling particles. The Froude number, a ratio of the characteristic flow acceleration (a_η=u_η/τ_η) to gravitational acceleration, suggests that gravitational settling is more pronounced than the turbulence effect on the particles <cit.>. Comparatively, the settling velocity enhancements from the terminal velocities are moderate, ranging up to 32% for aggregates, 13% for dendrites, 4% for needles, and 3% for graupels. These findings indicate that the turbulence effects (e.g., preferential sweeping and loitering) on particle settling is generally weak under the examined conditions. Variations in settling enhancement across snow types may be largely attributable to differences in particle size, shape, and density. Figure <ref> presents a comparative analysis of the probability density functions (PDFs) for settling velocity (W_s) and estimated still-air terminal velocity (W_0) across various snow particle types. 
The estimate of W_0 is based on the Best number, X=C_D Re_p^2, which does not directly depend on the settling velocity of the snow particles, but rather on their physical properties and the ambient air. Following <cit.> approach, summarized by equations <ref>-<ref>, we estimate the terminal velocity from measurable geometric and inertial properties by the snow particle analyzer. The PDFs for graupel, which are nearly spherical in shape, exhibit a close overlap between the settling and terminal velocities, indicating a minimal influence of turbulent eddies on their settling dynamics. Note that the `dent' in the distribution of the terminal velocity of graupels in Figure <ref>b reflects their size distribution. On the contrary, for the other snow types—characterized by non-spherical geometries—the PDFs diverge despite the mean settling and terminal velocities for needles displaying only a 4% discrepancy. This variation suggests that the aerodynamic behavior of non-spherical particles is considerably affected by the randomization of their orientation due to flow disturbances and unsteady behavior. In quiescent conditions, particles falling stably tend to orient themselves to maximize the aerodynamic drag (i.e., preferential orientation), potentially due to the inertial forces of the surrounding media, presenting their maximal cross-sectional area perpendicular to the fall direction <cit.>. However, in turbulent conditions, such a preferential orientation is not appreciable <cit.>, and the varying orientations result in a reduced effective cross-sectional area, potentially leading to an increased average settling velocity for non-spherical particles. Furthermore, while the settling velocity distributions for different snow types approximate a Gaussian profile, the estimated terminal velocities are rather skewed. This asymmetry arises from the inherent size distributions of the snow particles, which are typically modeled using a gamma distribution <cit.>. We also acknowledge the potential sampling differences between the 3D PTV measurements (likely under-representing the finest size fraction) and the snow particle analyzer data collection. Historical studies have demonstrated that the terminal velocity of snow particles exhibits a size-dependent characteristic, since the early research by <cit.>, <cit.>, and <cit.> fitting empirical data to establish a particle-mass-based approach to the settling. Specifically, <cit.> conducted a thorough investigation of various snow particle types, deriving power-law empirical formulas to represent the size-dependent terminal velocity, expressed as W_0=a D_p ^b, where a and b are constants that differ based on the snow particle type, based on shape and density. For our analysis, we employed the formulas relevant to aggregates of unrimed radiating assemblages of dendrite (W_0=0.8D_p^0.16), conical graupel (W_0=1.2D_p^0.65), rimed dendrites (W_0=0.62D_p^0.33), and rimed columns (W_0=1.1L^0.56, where L is the length). Such power-law equations can be empirically obtained by fitting the size distributions in Figure <ref> with the settling velocity in Figure <ref>. We optimize the linear coefficient with the same exponent to impose the same mean and similar distribution of the settling velocity for each snow type, and thus obtained the empirical equations: W_s=1.45D_p^0.16 for aggregates, W_s=1.2D_p^0.65 for graupels, W_s=0.92D_p^0.33 for dendrites, W_s=1.66L^0.56 for needles. 
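A sketch of how such single-coefficient fits can be obtained is given below: with the exponent b fixed for each snow type, the linear coefficient is chosen so that the fitted power law reproduces the mean measured settling velocity, as described above. The sizes (in mm), settling velocities, and the helper function name are illustrative placeholders.

```python
import numpy as np

# Sketch: fit W_s = a * D_p**b with the exponent b fixed per snow type, choosing
# a so the fitted law reproduces the mean measured settling velocity.
# Sizes (mm), velocities (m/s), and the helper name are placeholders.
rng = np.random.default_rng(1)

def linear_coefficient(D_p_mm, W_s, b):
    """Return a such that mean(a * D_p^b) equals mean(W_s)."""
    return float(W_s.mean() / np.mean(D_p_mm**b))

# Dendrite-like placeholder data with the exponent fixed at b = 0.33:
D_p = rng.gamma(shape=4.0, scale=0.5, size=5000)            # sizes [mm]
W_s = 0.9 * D_p**0.33 + 0.1 * rng.standard_normal(5000)     # settling velocities [m/s]
a = linear_coefficient(D_p, W_s, b=0.33)
print(f"W_s ~= {a:.2f} * D_p^0.33 (mean W_s = {W_s.mean():.2f} m/s)")
```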
We thus obtain the same equation for W_0 and W_s for graupels, suggesting close alignment in the mean values and distributions between the measured settling velocities and the estimated terminal velocities, the same as predicted using equations from <cit.>. This also confirms negligible effects by the specific atmospheric turbulence conditions monitored during the settling of graupels. In contrast, for other non-spherical types of snow, the linear coefficients for W_s are higher than those of W_0. This discrepancy highlights the morphology effects that modulate the settling velocity of these non-spherical snow particles, with potential turbulence effects considering the varying particle orientation because of turbulence disturbances, and the production of the Stokes number and settling parameter reaching critical condition <cit.>. To better model the terminal velocity, it is important to quantify the aerodynamic drag of snow particles for various morphological types. In Figure <ref>, we present the mean drag coefficients and mean Reynolds numbers, estimated using the average particle size and measured settling velocity. The error bars indicate the variability of these quantities, reflecting the distribution of snow particle sizes and settling velocities as represented by their standard deviations. The drag coefficient is calculated as C_De, mean =2 ρ_pV_p g / ( ρ_a W_s^2A_e,max), where ρ_p is the average snow particle density, V_p is the average particle volume <cit.>, W_s^2 is the mean square settling velocity; and A_e,max is the average maximal projected area of the measured snow particles (e.g. a flat falling dendrite, see Appendix <ref>). The snow particles have an average Reynolds number (R e_p, mean = W_s D_p / ν) on the order of 100, agreeing with typical field measurements <cit.>. The drag coefficients for aggregates, graupels, and needles agree well with the model predictions from <cit.>, C_De=(A / A_e)^3/4 C_0(1+δ_0 / R e_p^1 / 2)^2, as presented by the dotted lines. Note that the average area ratio, A / A_e, is calculated from the snow particle holograms for each snow type, and it is necessary to rescale the generalized drag equation <ref> to the specific snow morphologies <cit.>. As graupels show more sphere-like features, their drag coefficient leans towards that of spheres <cit.>. Despite the smaller terminal velocity for the non-spherical particles considering the particle orientation, the drag coefficient is well-predicted by the <cit.> model for aggregates and needles. Potential contamination from other types (∼ 20% after filtering out particles with close to an aspect ratio of one) within the datasets might lead to the mismatch between the PDFs of the terminal velocity and measured settling velocity for needles. The enhanced settling for aggregates could be a combined result of particle orientation, weak turbulence enhancement considering the critical condition of St_η Sv_L ∼ 1, and contamination from other types that do not align with the statistically dominant group, as shown in Table <ref>. Moreover, the dendrites show, on average, a higher drag coefficient as compared to the other types, potentially due to their large frontal area and higher density. Such a discrepancy could also explain the underestimation of the terminal velocity of dendrites by the equations from <cit.>, considering the higher, on average, settling velocity. 
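To make the comparison concrete, the sketch below evaluates the mean measured drag coefficient and the corresponding area-ratio-rescaled model prediction from the quantities defined above; every numerical value is an invented placeholder for a generic particle population rather than one of the four datasets.

```python
import numpy as np

# Sketch: mean measured drag coefficient versus the area-ratio-rescaled model.
# Every value below is an invented placeholder for a generic particle population.
rho_a, mu, g = 1.2, 1.7e-5, 9.81
nu = mu / rho_a

rho_p = 280.0                          # mean particle density [kg/m^3]
V_p = 2.0e-10                          # mean particle volume [m^3]
A_e_max = 3.1e-6                       # mean maximal projected area [m^2]
D_p = 2.0e-3                           # mean particle size [m]
W_s = 0.55                             # mean measured settling velocity [m/s]
area_ratio = 1.5                       # A / A_e from the holograms

# Measured drag coefficient and mean particle Reynolds number.
C_De_meas = 2.0 * rho_p * V_p * g / (rho_a * W_s**2 * A_e_max)
Re_p = W_s * D_p / nu

# Model prediction: C_De = (A/A_e)^(3/4) C0 (1 + delta0 / Re_p^(1/2))^2.
C0, delta0 = 0.6, 5.83
C_De_model = area_ratio**0.75 * C0 * (1.0 + delta0 / np.sqrt(Re_p))**2

print(f"Re_p = {Re_p:.0f}, C_De measured = {C_De_meas:.2f}, model = {C_De_model:.2f}")
```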
We further compare our measurements with laboratory experiments by <cit.>, with squares representing the aggregates (AgCr77, Ag15P1, AgSt18), pentagrams representing the dendrite (D1007), and left-pointed triangles representing the columnar snow (CC20Hex2). Our measurements agree well with the laboratory experiments, with the dendrite type showing a larger drag coefficient due to its disk-like shape. Such observations provide insights for snow settling modeling, especially for the predominantly dendrite snow events, as they show a large deviation from the <cit.> model prediction. §.§ Settling kinematics §.§.§ Qualitative observation Besides the settling velocity, the kinematic behaviors of snow particle settling trajectories are as variable as their shapes, with morphology playing a significant role in their settling behavior. Similar to the findings of Kajikawa’s laboratory studies <cit.>, snow particles demonstrate a range of falling styles under weak atmospheric turbulence, akin to those of disks and thin cylinders in quiescent flows. Figure <ref> displays a collection of snow particle trajectories, differentiated by the color-coded spanwise acceleration, from datasets dominated by different snow types. The distinct kinematics observed here are likely a consequence of each type’s unique morphology under similar atmospheric conditions. Aggregates and dendrites, in particular, exhibit pronounced meandering motion, characterized by substantial acceleration fluctuations at a relatively low frequency. This behavior could be attributed to their larger sizes and frontal areas, which, when subject to even weak atmospheric turbulence, result in unstable settling patterns marked by fluttering or tumbling motions. In contrast, graupels, with their quasi-spherical form, show a relatively high-frequency, low-magnitude meandering motion, and maintain a consistent travel direction. This suggests that graupels can better follow the fluid flow, considering their smaller particle size and lower density compared to the other non-spherical particle types, with their meandering motion possibly revealing interactions with small turbulent eddies. Needles exhibit weak magnitude and infrequent fluctuations in acceleration, but appear to experience a wider spanwise velocity range, as shown by the spread of trajectories in the spanwise direction. Their elongated, cylindrical shape, presenting a minimal frontal area relative to length, likely contributes to their tendency to align with the flow, resulting in this distinct settling pattern. Additionally, a detailed but qualitative examination of the trajectories indicates that non-spherical particles predominantly exhibit zig-zag motions, potentially due to the vortex shedding in their wake <cit.>, whereas graupels tend to follow more helical paths, potentially spiraling around vortex tubes <cit.>. §.§.§ Kinematic quantification To thoroughly analyze the kinematics of snow particles, we examine their trajectories using the particle velocity (u=(u_x,u_y,u_z)), the Lagrangian accelerations (a=(a_x,a_y,a_z)), and the resulting curvature (κ=u×a / u^3), where × represents the cross product. The curvature quantifies the trajectory's deviation from a straight path, influenced by flow structures or the snow particle morphology. We define two curvatures: one based on the original path and another adjusted to reduce the effect of the different mean streamwise flow and settling velocities across datasets (κ= u^'×a / u^'^3, with u^'=(u_x-u_x, u_y, u_z-u_z)). 
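A minimal sketch of the adjusted curvature computation, assuming the Lagrangian velocities and accelerations have already been obtained as described in Section <ref> and that the mean streamwise and settling velocities of the dataset are known, is given below; the input arrays are placeholders.

```python
import numpy as np

# Sketch: curvature with the mean streamwise and settling velocities removed,
# kappa' = |u' x a| / |u'|^3.  Input arrays are placeholders.
def adjusted_curvature(vel, acc, ux_mean, uz_mean):
    """vel, acc: (N, 3) Lagrangian velocity and acceleration along a trajectory."""
    u_prime = vel.copy()
    u_prime[:, 0] -= ux_mean           # remove mean streamwise velocity
    u_prime[:, 2] -= uz_mean           # remove mean settling velocity
    num = np.linalg.norm(np.cross(u_prime, acc), axis=1)
    return num / np.linalg.norm(u_prime, axis=1)**3

# Placeholder example (values are not measurements):
rng = np.random.default_rng(2)
vel = np.array([1.2, 0.0, -1.0]) + 0.2 * rng.standard_normal((500, 3))
acc = 0.5 * rng.standard_normal((500, 3))
kappa_adj = adjusted_curvature(vel, acc, ux_mean=1.2, uz_mean=-1.0)
print(f"mean adjusted curvature: {kappa_adj.mean():.1f} 1/m")
```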
Figure <ref> demonstrates this analysis with the trajectory of a dendrite snow particle. The apparent sinusoidal meandering is an actual measurement from our 3D PTV system. This meandering is underscored by the sinusoidal patterns in the velocity and acceleration components (Figure <ref>b and d), particularly pronounced in the acceleration signals in the horizontal plane. Spectral analysis of the acceleration variation along specific trajectories enables us to discern the strength and frequency of the meandering motion (Figure <ref>c). The dominant frequency is identified from the spectral peak, and the intensity is characterized by the magnitude of the horizontal acceleration fluctuations at this frequency. This comprehensive analysis yields detailed insights into how the morphology of snow particles influences their settling dynamics, particularly highlighting the effects on their acceleration statistics and trajectory geometry, which will be explored in depth in subsequent sections. §.§.§ Acceleration statistics Having quantified the settling trajectories of snow particles, we proceed to examine and compare the acceleration statistics across datasets featuring four snow particle types. Figure <ref> presents a detailed comparison of the acceleration behaviors of different types through the normalized acceleration probability density functions (PDFs) and the Lagrangian acceleration auto-correlation. The acceleration response of the particles is influenced by their morphological features and density within the weak atmospheric turbulence. Figure <ref>a juxtaposes the normalized acceleration PDFs of different snow types against the acceleration of fluid parcel in homogeneous isotropic turbulence, based on simulations by <cit.>. It is generally anticipated that, due to inertia, particles in turbulence will not accelerate as intensely as the surrounding fluid because they cannot keep pace with the rapid fluctuations of the turbulent flow. Nevertheless, the shape of the particles is also a critical factor in their acceleration dynamics. Dendrites, for instance, are more prone to high acceleration events, likely a consequence of their considerable size, expansive frontal area, and the non-linear nature of the drag forces they experience. Aggregates and graupels display a decrease in the probability of high accelerations, attributable to their less intricate shapes. Needles, characterized by their slender profile, exhibit a diminished probability of encountering higher acceleration events, which may be due to their streamlined shape that naturally aligns with the flow and vortex structures within, along with their smaller size and frontal area <cit.>. The PDF tails of these non-spherical particles also appear to correlate with their shape factors, with needles being prolate (β > 1), dendrites oblate (β < 1), and aggregates displaying a spectrum in between these extremes. This observation contrasts with the recent findings reported by <cit.>, which provides a universal scaling for snow particle acceleration. Additional experimentation under various turbulence conditions is needed to further investigate this discrepancy. In Figure <ref>b, c, and d, the acceleration autocorrelation functions of the Lagrangian acceleration components reveal distinct inertial responses for the four types of snow particles. 
These functions are derived from the snow particles’ settling trajectories, using the formula ρ_a(n Δ t)= ⟨ a(t_0) a(t_0+n Δ t)⟩ / ⟨ a^2⟩, where n is the number of time steps, and Δ t=1/200 s is the time step. Dendrites display the highest inertia, indicating a more pronounced resistance to changes in the fluid motion, followed by aggregates, needles, and graupels (small differences among the three for all components). The particle inertia is attributable to the larger sizes of dendrites and aggregates, their non-spherical shapes, and their greater density in the case of dendrites and needles. The zero-crossing points (τ_0) on the autocorrelation curves also provide temporal insights into the acceleration fluctuations and, consequently, the frequency of the meandering motions of the snow particles, as listed in Table <ref>. It scales with one-fourth the period of the meandering motion. In Table <ref>, we present a comparison of four times the zero-crossing time (4τ_0) of the acceleration autocorrelation function for the three acceleration components across various snow particle types. Generally, dendrites exhibit the largest zero-crossing time scales in their acceleration autocorrelation functions, suggesting a low-frequency meandering motion. Conversely, graupels demonstrate the smallest zero-crossing time scales, suggesting the fastest meandering frequency, corroborating the qualitative observations from Figure <ref>. The acceleration autocorrelation functions for needles and aggregates reach their initial zero at intermediary times, with aggregates showing slightly larger time scales. The trends in these autocorrelation functions further emphasize the influence of particle morphology on settling behavior, with the aspect ratios of non-spherical particles mirroring the trends in the zero-crossing time scales. Following up the autocorrelation functions of acceleration above, we provide a more direct measurement of the meandering motion of snow particles by examining the Lagrangian variations in position, velocity, and acceleration along their settling trajectories, as shown in Figure <ref>. The horizontal acceleration component, displaying the most pronounced variation, serves as a key indicator of meandering behaviors, as illustrated in Figure <ref>a and b, which depict the PDFs of acceleration fluctuation frequency and magnitude for different snow particle types. This analytical approach aligns with the qualitative findings from Figure <ref> and supports the acceleration statistics presented in Figure <ref>. The measured average frequencies and corresponding magnitudes are summarized in Table <ref>. These frequencies can be nondimensionalized into Strouhal numbers, S t=f_horz ·D_p / W_s, as proposed by <cit.>, and summarized in Table <ref>. Although the near-spherical shape of graupels is not expected to induce meandering motion, our measurements surprisingly reveal a weak meandering or helical motion, as evidenced by variations in velocity and acceleration. The observed average frequency of this motion closely matches the Kolmogorov scale frequency 1/τ_η = 4.6 Hz. This correspondence suggests that, despite the dominance of morphological effects in dictating particle behavior, especially for non-spherical particles, graupels still move around and weakly interact with the Kolmogorov eddies within the flow, considering their sizes close to those of the Kolmogorov eddies. 
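Before contrasting these frequencies with vortex-shedding expectations, the following sketch indicates how the acceleration autocorrelation, its zero-crossing time, and the resulting Strouhal number might be evaluated for a single trajectory. The synthetic acceleration signal, the assumed meandering frequency, and the particle size and settling velocity are placeholders.

```python
import numpy as np

# Sketch: acceleration autocorrelation along a trajectory, its first zero
# crossing tau_0, and a Strouhal number St = f_horz * D_p / W_s.
# The synthetic signal, meandering frequency, D_p, and W_s are placeholders.
dt = 1.0 / 200.0
t = np.arange(0.0, 5.0, dt)
f_meander = 1.2                        # assumed meandering frequency [Hz]
rng = np.random.default_rng(3)
a_y = np.cos(2.0 * np.pi * f_meander * t) + 0.2 * rng.standard_normal(t.size)

def accel_autocorr(a, max_lag):
    a = a - a.mean()
    var = np.mean(a * a)
    return np.array([np.mean(a[: a.size - n] * a[n:]) / var for n in range(max_lag)])

rho = accel_autocorr(a_y, max_lag=400)
tau_0 = np.argmax(rho < 0.0) * dt      # first zero crossing [s]
f_est = 1.0 / (4.0 * tau_0)            # 4 * tau_0 ~ one meandering period

D_p, W_s = 2.0e-3, 1.0                 # placeholder size [m] and settling velocity [m/s]
St = f_est * D_p / W_s
print(f"tau_0 = {tau_0:.3f} s, f ~= {f_est:.2f} Hz, St = {St:.4f}")
```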
In contrast, the meandering frequencies for non-spherical particles are lower than both the frequency corresponding to the Kolmogorov scale and the vortex shedding frequency in the wake of anisotropic particles identified in various studies <cit.>. Specifically for dendrites, we estimate the dimensionless moment of inertia to be ∼ O(0.1-1), resulting in the Strouhal number ∼ O(0.01) based on Willmarth et al. (1964), larger than that of the dendrites from our measurement. This discrepancy may be attributable to the delayed inertial response of non-spherical particles to the fluid flow and vortex shedding, as well as to the permeability of the dendrites. Moreover, the measured Strouhal numbers for these particles are consistent with Kajikawa’s laboratory measurements <cit.>, situating our findings within the observed range for the meandering motions of non-spherical snow particles. The vertical acceleration component displays fluctuations that could stem from orientation changes (resulting in drag force variation) in anisotropic particles due to horizontal meandering. The a_z fluctuation magnitudes are more pronounced since the horizontal component combines the x and y components, which are typically out of phase. Furthermore, the vertical acceleration fluctuation frequency is nearly twice the horizontal one because the inferred changes in particle orientation, caused by horizontal meandering, e.g., a perfectly edge-on configuration, have a 180-degree periodicity for disk- and needle-like shapes. This interpretation is consistent with the minimal differences observed for the near-symmetric graupel, and the trend is also consistent with the shorter zero-crossing times (τ_0,z) observed in our data. §.§.§ Trajectory geometry The variance in meandering frequency and magnitude across different types of snow particles results in distinctive trajectories. To quantify their geometrical differences, we employ curvature calculations both with and without the impact of the mean streamwise and settling velocities. Figure <ref> illustrates the probability distribution functions (PDFs) of these normalized trajectory curvatures (κη, where η is the Kolmogorov scale). For the original trajectories, curvature is calculated using the formula κ= u×a / u^3 (Figure <ref>a), where × indicates the cross product between the velocity (u) and acceleration (a) vectors. Additionally, to minimize the influence of varying flow and settling velocities across datasets, we adjust the velocity vector to u^'=(u_x-u_x, u_y, u_z-u_z)), and recompute curvature (Figure <ref>b). Previous research by <cit.>, <cit.>, and <cit.> has explored the geometry of fluid trajectories in turbulence, uncovering characteristic scaling within the curvature PDFs. Their findings suggest a universal scaling for both tails of the PDFs: low curvature events scale with κ^1, while high curvature events follow a κ^-5/2 scaling. <cit.> propose that these tail scaling laws result from Gaussian velocity statistics rather than turbulence gradients, contending that high curvature events correlate with periods of low velocity rather than high acceleration from interactions with thin vortex tubes as one might expect. Moreover, curvature can also be expressed as κ= a_n / u^2, so the tail of the curvature PDF, P_κ→∞, as κ→∞, scales similarly to the tail of the PDF of u^-2=1 / ( u_x^2+u_y^2+u_z^2 ), P_u^-2→∞, as u^-2→∞. 
Assuming velocity components are independent and follow Gaussian statistics, P_u^-2→∞ conforms to a chi-square distribution with three degrees of freedom, leading to the derived scaling of P_κ→∞∼κ^-5/2. <cit.> extended this theoretical framework to heavy inertial particles and verified through simulations that the same scaling applies to the PDFs of these particles’ trajectories. These theoretical insights can be integrated into our analysis of the geometry of snow particle trajectories, providing a better understanding of the intricate settling dynamics and trajectory geometry under the influential role of particle morphology. Figure <ref>a and b reveal that for most snow particle types, the tails of the curvature probability distribution functions (PDFs) exhibit similar scaling trends as reported in the previous research <cit.>. Nonetheless, when considering the mean streamwise and settling motions, the PDFs of curvature for non-spherical particles exhibit a notably different scaling, approximately following a κ^-4 trend as shown in Figure <ref>a. This deviation may arise from the rotation and meandering motion due to the morphology of non-spherical snow particles, which modulates the Lagrangian velocities along the trajectories. Notably, when the mean settling and streamwise velocities are removed from consideration, the tails of the curvature PDFs for different snow types tend to align on the higher curvature end. This pattern indicates that particle morphology predominantly influences the mean values of the settling and streamwise velocities, rather than their fluctuations. Moreover, the peaks of the PDFs are around 10^-2 and 10^-3, similar to those in the previous studies. However, as proposed by <cit.>, for fluid tracers, the peak of the PDF scale with (η R e_λ)^-1, which for the snow particle trajectories would be ∼ O(1). The smaller curvature for the snow particle trajectories might be attributed to the particle inertia <cit.>. Further analysis shows that, despite the pronounced meandering behavior of dendrites, they exhibit the smallest mean curvature, with aggregates, needles, and graupels following in ascending order. This trend can be explained by the fact that both the frequency and magnitude of the meandering motion contribute to the overall trajectory curvature. Dendrite trajectories, while displaying significant fluctuations in spanwise meandering motion, have a lower frequency, which culminates in a reduced mean curvature. The observed differences in the curvature probability density functions (PDFs) for graupels and other non-spherical snow types can be elucidated by drawing upon our earlier analysis in Section <ref>. For graupels, the curvature PDF in Figure <ref>a scales like that of fluid trajectories, indicating that the weak meandering behavior of graupels may stem from interactions with turbulent eddies. In contrast, for non-spherical particles, it is likely due to the combined influence of wake vortex instabilities, as discussed in Section <ref>, and weak atmospheric turbulence, as the scaling later converged in Figure <ref>b. Figure <ref> then delves into the relationship between normalized spanwise velocity (|u_y| / W_s) and normalized trajectory curvature (κη) for snow particles, taking into account the theoretical finding by <cit.> that high curvature events tend to coincide with low velocities. 
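For completeness, the scaling argument just invoked can be written out explicitly; this is a sketch assuming independent, zero-mean Gaussian velocity components with common variance σ^2 and a normal acceleration a_n that remains regular as the speed vanishes.

```latex
% Sketch of the kappa^{-5/2} tail, assuming u_x, u_y, u_z ~ N(0, sigma^2),
% independent, and a normal acceleration a_n that stays O(1) as |u| -> 0.
\begin{align*}
  P\left(|\mathbf{u}|^{2} < s\right)
    &= P\left(\chi^{2}_{3} < s/\sigma^{2}\right) \sim C\, s^{3/2},
    \qquad s \to 0, \\
  \kappa = \frac{a_{n}}{|\mathbf{u}|^{2}}
    \;\Rightarrow\;
  P(\kappa > k) &\approx P\left(|\mathbf{u}|^{2} < a_{n}/k\right)
    \sim C\, (a_{n}/k)^{3/2}, \\
  p(\kappa) = -\frac{\mathrm{d}}{\mathrm{d}k}\, P(\kappa > k)
    &\sim \tfrac{3}{2}\, C\, a_{n}^{3/2}\, k^{-5/2}.
\end{align*}
```

Under these assumptions the far tail of the curvature PDF therefore decays as κ^-5/2 independently of the turbulence details, consistent with the scaling observed for the graupel trajectories above.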
While snow particles settling in the atmosphere generally have non-zero streamwise and settling velocities, high curvature events are often tied to moments when the spanwise velocity is minimal and changing sign. This trend is evident in Figures <ref> and <ref>a, where the spanwise velocity approaches zero and reverses direction at the peaks of the meandering motion, leading to increased curvature at these turning points. In Figure <ref>, joint PDFs map the spanwise velocity magnitude and the local trajectory curvature, after subtracting the mean streamwise and settling velocities, for each snow particle type. A pronounced negative correlation between spanwise velocity and trajectory curvature is observed, particularly for dendrites (Figure <ref>c), which exhibit the most substantial correlation coefficient (σ_xy=-0.80). Aggregates display a similar negative correlation, but with a slightly lower coefficient (σ_xy=-0.77) and a reduced magnitude of spanwise velocity. Needles, despite having the lowest spanwise velocity magnitude potentially due to their smaller frontal area and high density, maintain a strong correlation with curvature, indicated by a correlation coefficient of σ_xy=-0.76. Graupels, on the other hand, show the weakest correlation among all particle types, with the lowest coefficient (σ_xy=-0.71). This analysis highlights the profound effect of snow particle morphology on meandering motion for non-spherical particles, which associates the near-zero spanwise velocity in the meandering extremes with the high curvature in their trajectories. Unlike graupels, whose weaker meandering motion is influenced more by interactions with turbulence eddies, non-spherical particles do not exhibit the expected correlation between high curvature and high acceleration, suggesting that their complex morphology dominates this meandering motion and corresponding high curvature events. §.§ Interconnection between trajectory geometry and settling velocity Our comprehensive analysis elucidates the distinctive settling behaviors of snow particles with various morphologies, addressing the questions raised at the beginning of our results section. Concerning the influence of morphology on snow aerodynamic properties, we observe that the response times for all snow particles are broadly similar, averaging around 0.1 seconds, on the same order of the intercept in the acceleration autocorrelation function. However, needles exhibit a marginally increased response time attributed to their higher density, particularly when compared with the density of the surrounding air. This higher density contributes to the needles' higher average terminal velocity in still air. Furthermore, the empirical models by <cit.> well predict the drag coefficients for aggregates, graupels, and needles, while dendrites emerge as anomalies, exhibiting drag coefficients exceeding model predictions, likely due to their large frontal area and oblate, disk-like, geometry. Notably, although dendrites and needles have similar aspect ratios, as measured by the snow particle analyzer, the dendrites are disk-like, oblate spheroids, while the needles are columnar, prolate spheroids. The different aerodynamic properties and morphologies of these snow particles have strong effects on their settling kinematics. Specifically, dendrites display unique behavior compared to other types. Their non-linear drag and substantial frontal area result in the most prominent acceleration fluctuation magnitude, occurring at relatively low frequencies. 
While the acceleration probability density function (PDF) for dendrites closely resembles that of a fluid parcel in turbulence, the acceleration auto-correlation function indicates a slow response to the rapid fluctuation of the flow velocity. Conversely, needles exhibit minimal acceleration fluctuation magnitude, suggesting relatively large inertia and a tendency to avoid intense cross-flow drag and convoluted trajectories. Yet, the acceleration autocorrelation function indicates a moderate rate of change in the direction of acceleration. Such different behaviors from that of dendrites are possibly due to their streamlined shape aligning with fluid flow structures, as for fibers in turbulence. Overall, acceleration statistics appear to correlate with the shape factors of these particles considered as spheroids, with dendrites and needles representing the spectrum's extremes and aggregates positioned in between. Finally, to answer the third question, it becomes apparent, from our analysis above, that the combination of turbulence and non-spherical particle morphologies can modulate the particle settling velocities even under weak atmospheric turbulence. For the cases investigated here, we hypothesize that dendrites exhibit an enhanced settling velocity (Figure <ref>c) that is due to an underestimation of the drag coefficient in the model by <cit.>; graupels settling velocity is well predicted even though spherical particles smaller than the Kolmogorov scale close to critical Stokes conditions were expected to exhibit settling velocity enhancement. The significant cross-flow velocities (considering the large settling parameter Sv_L) experienced by graupels may suggest that preferential sweeping was not the only mechanism in play during settling. Aggregates drag coefficient is well captured in the still air model, implying that the observed enhanced settling is likely due to combined effects of anisotropic particle orientation and the weak atmospheric turbulence. The observed conditions are marked by St_η Sv_L ∼ 1 for which settling enhancement has been predicted and observed <cit.>. Disentangling turbulence and morphology effects is challenging because turbulence-induced disturbances alter the preferential orientation (i.e., particles with their largest projected area facing the settling direction) of stably falling particles. The meandering motions of the non-spherical particles, whether fluttering or tumbling, likely affect their orientation, reducing their average projected area compared to a steady settling and thus enhancing settling velocity. However, direct measurement of particle orientation during settling is technically challenging and beyond our current capability. Thus, we investigate the interconnection between the meandering motion and the vertical acceleration along the trajectories of snow particles. Recognizing that these fluctuations might not be perfectly synchronized (owing to the particles' inertial response and the variability in drag force related to changes in projected area and settling velocity), we have considered a slight phase shift between the varying spanwise location y(t) and vertical acceleration a_z (t+τ) along the trajectories. This adjustment aims to align the locations of maximum meandering with the smallest projected area, which typically corresponds to greater downward acceleration. 
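A sketch of this phase-shifted correlation is given below; the meandering and vertical-acceleration signals are synthetic placeholders with an illustrative sign convention (acceleration positive downward), and the cap on the allowed shift (0.15 of a quarter meandering period, as explained next) is applied when scanning the lag.

```python
import numpy as np

# Sketch: correlation between normalized spanwise position |y'(t)| and downward
# vertical acceleration a_down(t + tau), scanning a small lag tau capped at
# 0.15 of a quarter meandering period.  All signals below are synthetic placeholders.
dt = 1.0 / 200.0
t = np.arange(0.0, 4.0, dt)
f = 1.2                                # assumed meandering frequency [Hz]
rng = np.random.default_rng(4)

y = np.sin(2.0 * np.pi * f * t)        # normalized spanwise excursion y'
# Downward acceleration peaking near the meandering extremes (placeholder model):
a_down = -np.cos(4.0 * np.pi * f * t + 0.3) + 0.2 * rng.standard_normal(t.size)

max_shift = int(0.15 * (1.0 / (4.0 * f)) / dt)   # cap: 0.15 of a quarter period
best_lag, best_corr = 0, -1.0
for n in range(max_shift + 1):
    c = np.corrcoef(np.abs(y[: y.size - n]), a_down[n:])[0, 1]
    if c > best_corr:
        best_lag, best_corr = n, c

print(f"lag = {best_lag * dt * 1e3:.0f} ms, correlation sigma_y,a = {best_corr:.2f}")
```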
To maintain the integrity of the correlation, we limit the phase shift to 0.15 times a quarter of the meandering period to avoid creating an inverse relationship between vertical acceleration and meandering motion. Note that this time lag (τ) is on the order of 0.1τ_p and close to the estimated Strouhal number for the disk-like particles. Thus, during a fraction of the anisotropic particle rotation and the corresponding translational response time, the particle experiences a reduction in drag area, leading to its downward acceleration. Figures <ref>a and b demonstrate that, following this phase shift, the greatest downward accelerations predominantly occur at the furthest extent of the spanwise meandering motion (y^'_a,max∼ 1). Conversely, the least downward acceleration (or even upward acceleration) tends to happen near the central position (y^'_a,min∼ 0), where anisotropic snow particles are likely to have their maximum projected area facing downwards. Subsequently, we calculate the correlation coefficient between vertical acceleration and spanwise position during the snow particles’ meandering motion. The results reveal substantial positive correlation coefficients for dendrites (σ_y,a=0.58) and aggregates (σ_y,a=0.45), which is consistent with their observed enhanced settling velocities. Needles display a moderate correlation coefficient (σ_y,a=0.33), reflective of their anisotropic shape. Graupels, however, exhibit a low average correlation coefficient of σ_y,a=0.15, as changes in particle orientation are not expected to significantly affect the drag force. § CONCLUSIONS AND DISCUSSION In this study, we have conducted a comprehensive field investigation into the three-dimensional (3D) settling dynamics of snow particles under weak atmospheric turbulence. This investigation was enabled by a field 3D particle tracking velocimetry (3D PTV) system <cit.>, recording over a million settling trajectories for four distinct types of snow particles (i.e., aggregates, graupels, dendrites, needles), and by simultaneous characterization of their aerodynamic properties using a holographic snow particle analyzer <cit.>. We have examined the snow particle aerodynamic properties, including terminal velocity in air, settling velocity, and drag coefficient; the settling kinematics, including acceleration statistics and trajectory geometry; and the interconnection between the observed meandering paths and the settling velocity of the snow particles. The comparison between the estimated terminal velocity <cit.> and the measured settling velocity demonstrates that non-spherical particles, especially aggregates and dendrites, exhibit large differences between measurements and model predictions, potentially due to dynamic orientation changes along their meandering paths, which is not observed in graupels. Specifically, the settling enhancement observed in aggregates is likely a synergistic result of morphology-induced oscillations due to vortex shedding and the ambient flow that promotes wake instabilities and the onset of meandering motions. Even though dendrites are characterized by a higher drag coefficient than the other snow types, corroborating the laboratory findings of <cit.>, their settling velocity under weak atmospheric turbulence is higher than model predictions assuming a nominal flat-falling drag area <cit.>. These apparently contradictory results emphasize the need to quantify particle settling dynamics along their complex trajectories in the field. 
A detailed Lagrangian analysis reveals that dendrites and aggregates undergo pronounced meandering motions in the horizontal plane perpendicular to the direction of gravity at relatively low frequencies, likely governed by their inertia against tumbling and rotation but enabled by the ambient turbulence. Needles, however, exhibit weaker meandering amplitudes due to their smaller frontal area. Graupels, despite their near-spherical form, undergo oscillatory motions along their settling paths, characterized by higher frequencies comparable to the Kolmogorov scale and smaller amplitudes. This behavior suggests a limited interaction with small-scale turbulence structures under the conditions investigated, a notion corroborated by the agreement between model-predicted terminal velocities and measured settling velocities. These distinct settling motions and styles are also reflected in the curvature statistics, differentiating non-spherical particles from graupels. More specifically, the analysis of vertical acceleration during meandering paths reveals that periodic changes in the orientation of non-spherical particles, especially dendrites and aggregates, contribute to their enhanced settling velocity. These findings highlight the dominant impact of the morphology of snow particles on their settling dynamics under weak atmospheric turbulence. Our current study provides a unique dataset of realistic snow morphologies and settling trajectories. These measurements contribute to the modeling and simulation of snow settling velocity and the subsequent snow accumulation rate on the ground. Despite the dominant morphology effect, interactions between the snow particles and the weak atmospheric turbulence are still manifested in some aspects of their settling dynamics. Although the non-spherical particles are likely to rotate or tumble when settling in quiescent flow, disturbances by the ambient flow promote these unsteady motions. In addition, the meandering motion of graupels exhibits frequencies closest to that of the Kolmogorov scale, hinting at weak interaction with the ambient turbulence. The weak turbulence effect is also manifested in the curvature statistics of the particle trajectories. We observe distinct scaling laws for the high-curvature tails of the probability density functions (PDFs), which differentiate spherical particles (consistent with <cit.> for fluid trajectories in turbulence and <cit.> for inertial particles) from non-spherical particles, hinting at morphological influences on their settling kinematics. To compensate for variations in streamwise flow velocity and settling velocity across different datasets and morphologies, we have corrected the curvature formulation. The resulting high-curvature tails of the PDFs converge to a universal scaling associated with low spanwise velocity, reinforcing the concept that high curvature events are associated with the meandering motions of non-spherical particles. This association is further emphasized by the observed inverse correlation between the trajectory curvature and spanwise velocity. Our detailed characterization of the snow particle morphology, density, and settling velocity may lead to an improved prediction of ground snow accumulation, benefiting several related applications in snow hazard warning, climate modeling, and traffic regulation during/after snowfall. 
Despite our major findings that substantiate the hypothesis of strong morphological effects in dictating snow particle settling dynamics under conditions of weak atmospheric turbulence, several challenges persist. First, quantifying the exact enhancement or hindrance of settling velocities due to weak atmospheric turbulence remains a challenging task. A better model that elaborates on the interplay between particle morphology and turbulence will be necessary. Second, current predictive models, including those by <cit.>, fall short in estimating the aerodynamic properties, especially the drag coefficient, for dendrites. There is a clear need for refined models that can more accurately represent these unique and complex snow particle types. Third, while we aimed to correlate the meandering motion and orientation changes with enhanced settling in non-spherical particles, the spatial resolution of our 3D PTV system is insufficient for capturing the orientation dynamics of particles throughout their settling. Some of the smaller particles captured by the snow particle analyzer might also exhibit too weak a signal to be detected by the 3D PTV system. Advancements in measurement systems could enable simultaneous assessment of particle orientation and settling trajectory, as well as higher resolution for capturing smaller snow particles. Systems such as a high-magnification, high-resolution 3D PTV <cit.>, or a digital inline holography setup with an expanded field of view <cit.> and higher sampling rate, hold promise for the desired measurements. Finally, the variability of field conditions presents an additional layer of complexity. Snow particle types and concentrations, as well as average wind speed and direction, are subject to change over each measurement period, which typically spans 3-5 hours. The relatively slow streamwise wind adds to the difficulty of accurately estimating turbulence quantities. Looking ahead, we aim to extend our investigations to scenarios involving moderate to intense turbulence. By contrasting the settling behaviors across a spectrum of turbulence intensities, particularly for different snow particle morphologies, we anticipate a more thorough understanding of how turbulence and snow particle morphology collectively influence the settling dynamics of snow particles. This future research will enable us to improve our predictive capabilities of snow settling velocity and, in the long term, of the spatial distribution and intensity of snow accumulation on the ground during snowfalls. § SELECTION OF THE GAUSSIAN FILTER KERNEL SIZE We employ a Gaussian kernel as a low-pass filter to reduce uncertainties in determining the 3D positions of snow particles and to prevent these errors from affecting Lagrangian statistics. Large errors typically occur at the trajectory ends after filtering; therefore, these segments are excluded from the statistical analysis. The selection of the kernel size is critical. A kernel that is too short fails to sufficiently reduce position uncertainty, while a kernel that is too long may suppress genuine strong acceleration events. We optimize the kernel size by analyzing the change in the acceleration variance, defined as a_0=⟨ a'^2⟩ν^1/2/ε^3/2, where ν is the air kinematic viscosity and ε the turbulent kinetic energy dissipation rate, across varying kernel sizes. As shown in Figure <ref>, this approach enables the identification of the optimal kernel size. We determined that a minimum kernel size of 45 frames best maintains the exponential dependence of the acceleration variance on the kernel size. 
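A minimal sketch of this kernel-size scan is given below: a trajectory is low-pass filtered with a Gaussian kernel, the acceleration is obtained by finite differences of the filtered positions, the trajectory ends are discarded, and the acceleration variance is recorded as the kernel size is varied. The sampling rate, the mapping from the kernel size in frames to the Gaussian standard deviation, and the synthetic test track are assumptions made for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def acceleration_variance(track, fps, kernel_frames):
    """Low-pass filter an (N, 3) position track with a Gaussian kernel and
    return the variance of the acceleration from the filtered positions.
    Trajectory ends are trimmed because filtering artefacts concentrate there."""
    sigma = kernel_frames / 6.0            # assumption: kernel width ~ 6 sigma
    dt = 1.0 / fps
    smoothed = gaussian_filter1d(track, sigma=sigma, axis=0, mode="nearest")
    acc = np.gradient(np.gradient(smoothed, dt, axis=0), dt, axis=0)
    acc = acc[kernel_frames:-kernel_frames]            # discard trajectory ends
    return np.mean(np.sum((acc - acc.mean(axis=0)) ** 2, axis=1))

# Minimal synthetic check: a smooth trajectory plus measurement noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 1.0 / 200.0)                   # assumed 200 fps
track = np.stack([0.3 * np.sin(3 * t), 0.3 * np.cos(3 * t), -1.0 * t], axis=1)
track += 1e-4 * rng.standard_normal(track.shape)       # ~0.1 mm position noise
for k in (15, 45, 75):
    print(k, acceleration_variance(track, fps=200.0, kernel_frames=k))
```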
Notably, this selected kernel size, τ_g, is comparable to the Kolmogorov time scale, τ_η, corroborating findings from previous studies <cit.>. The estimated uncertainty in the acceleration measurements reflects the uncertainty in the filter size, which ranges between 43 and 47 frames. This results in the root mean square error in the acceleration estimation, a_rms, ranging between 0.32 and 0.38 m/s^2 for different snow particle types. § CALCULATION OF THE DRAG COEFFICIENT AND MAXIMUM PROJECTED AREA To more accurately model the drag coefficient of snow particles, significant efforts have been made by researchers <cit.>. The illustrations in Figure <ref> summarize and clarify the calculations used in this study. The drag coefficient of snow particles can be defined using either the projected area, C_De=f(A_e), or the circumscribed area, C_D=f(A). According to <cit.>, the two drag coefficients are related through the area ratio, C_De / C_D = (A / A_e)^3/4, where C_D is defined by equation <ref>. Thus, to compare the model (C_De = (A / A_e)^3/4 C_0(1+δ_0 / Re_p^1/2)^2) with the measured drag coefficient (C_De,mean), as discussed in Section <ref>, it is necessary to calculate the maximum projected area, A_e,max. Given that our snow particle analyzer only measures the general projected area (A_e) and the circumscribed area (A) at an unknown orientation, we assume that the ratio A/A_e remains constant regardless of orientation. Consequently, the maximum projected area can be estimated as A_e,max = A_max(A_e / A). In this equation, the maximum circumscribed area is calculated as A_max = π D_maj^2 / 4 for plates and dendrites and as A_max = π D_maj D_min / 4 for other snow particle types, where D_maj and D_min are measured by the snow particle analyzer.
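The sketch below illustrates these area conversions and the resulting model drag coefficient. The particle-type rule for the circumscribed area and the estimate A_e,max = A_max (A_e / A) follow the equations above; the numerical constants C_0 and δ_0 and the example dimensions are placeholders, not the fitted values of the cited empirical model.

```python
import numpy as np

def max_projected_area(d_maj, d_min, area_proj, area_circ, particle_type):
    """Estimate the maximum projected area A_e,max from the analyzer output,
    assuming the ratio A/A_e is independent of particle orientation."""
    if particle_type in ("plate", "dendrite"):
        a_max = np.pi * d_maj ** 2 / 4.0           # circumscribed disk
    else:
        a_max = np.pi * d_maj * d_min / 4.0        # circumscribed ellipse
    return a_max * (area_proj / area_circ)         # A_e,max = A_max * (A_e / A)

def drag_coefficient_model(area_circ, area_proj, re_p, c0=0.35, delta0=8.0):
    """Still-air model of the form C_De = (A/A_e)^{3/4} C0 (1 + delta0/Re_p^{1/2})^2.
    The constants c0 and delta0 are placeholders for the cited model's values."""
    return (area_circ / area_proj) ** 0.75 * c0 * (1.0 + delta0 / np.sqrt(re_p)) ** 2

# Example numbers (illustrative only, not measurements)
print(max_projected_area(3e-3, 1e-3, 2.0e-6, 3.5e-6, "aggregate"))
print(drag_coefficient_model(3.5e-6, 2.0e-6, re_p=150.0))
```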
Gate-based counterdiabatic driving with complexity guarantees Dyon van Vreumingen June 17, 2024 Institute of Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands QuSoft, Centrum Wiskunde & Informatica (CWI), Science Park 123, 1098 XG Amsterdam, The Netherlands § ABSTRACT We propose a general, fully gate-based quantum algorithm for counterdiabatic driving. The algorithm does not depend on heuristics as in previous variational methods, and exploits regularisation of the adiabatic gauge potential to suppress only the transitions from the eigenstate of interest. This allows for a rigorous quantum gate complexity upper bound in terms of the minimum gap Δ around this eigenstate. We find that the algorithm requires at most Õ(Δ^-(3 + o(1))ϵ^-(1 + o(1))) quantum gates to achieve a target state fidelity of at least 1 - ϵ^2, which is nearly equivalent to the gate complexity of gate-based adiabatic state preparation. This calls into question the perception of counterdiabatic driving as a general shortcut to adiabaticity. § INTRODUCTION Adiabatic quantum computing (AQC) is a paradigm of quantum computation that leverages the principles of adiabatic processes in quantum mechanics to solve computational problems <cit.>. AQC relies on the adiabatic theorem, which states that a quantum system approximately remains in its instantaneous eigenstate if a given time-dependent hamiltonian that governs its energy levels is changed slowly enough <cit.>. In this sense, AQC is a conceptually straightforward way to prepare ground states or other eigenstates of complex systems, by starting from an eigenstate of a simple hamiltonian (e.g. a product state) and slowly time evolving towards the complex hamiltonian. Contrary to AQC, counterdiabatic driving (CD) aims to prepare target states through a fast time evolution, actively suppressing excitations that would typically accompany such rapid driving. Traditional adiabatic processes require slow evolution to maintain the system in its instantaneous eigenstate, which can take impractically long. CD involves adding auxiliary, non-adiabatic control fields to counteract the diabatic transitions, thereby simulating the effect of a slow adiabatic process in a (much) shorter time <cit.>. It is a specific method within a broader class of techniques known as “shortcuts to adiabaticity” <cit.>, designed to achieve the same goal of mimicking adiabatic evolution on a small time scale. Its original formulation is due to Demirplak & Rice <cit.>; in this formalism, the central object is the adiabatic gauge potential (AGP), which is an auxiliary field that is added to the system hamiltonian and cancels what is akin to a Coriolis force in the rotating system eigenbasis, thereby suppressing all excitations. The issue, however, is that determining this AGP exactly is computationally difficult (possibly as difficult as, if not more difficult than, preparing the eigenstate of interest), and analytical expressions are only known for very specific systems <cit.>. This is due to the fact that, typically, the AGP is highly nonlocal and contains high-rank (two-body, three-body etc.) interactions <cit.>; moreover, if the spectrum of the system is gapless, the AGP does not exist <cit.>. For this reason, much effort has been focussed on computational methods that approximate the AGP. 
In this context, a pivotal role has been played by variational methods <cit.>; these methods assume a parametrised ansatz that is optimised with respect to an action function that approaches zero as the ansatz approaches the exact AGP. This approach has enabled approximate CD in a large number of different contexts, leading to improved fidelities as compared to standard AQC <cit.>. However, the limitation of the variational approach lies in the fact that evaluating the action function itself becomes harder as the rank of the interactions taken into account in the ansatz increases, and therefore only ansätze with low-rank interactions are computationally feasible. Further improvements to this scheme were made through the invention of a nested commutator ansatz for the AGP <cit.> and subsequent Krylov space methods to solve for the optimal coefficients in this ansatz <cit.>. Here, the ansatz is expressed as a linear combination of basis operators constructed by repeated application of the liouvillian superoperator ℒ(·) = [H, ·] to the parameter derivative of the system hamiltonian H, which form the Krylov space. A key benefit of these methods is that, after a truncation of the Krylov space, the coefficients can be found by solving a linear equation, which avoids optimisation heuristics with unpredictable running times. At the same time, this approach sheds more light on the computational complexity of computing the AGP. Numerical experiments <cit.> suggest that the Krylov space for weakly interacting and integrable systems is a small subspace in operator space, so that the AGP is easily approximated with an ansatz containing few coefficients; on the other hand, for strongly interacting, chaotic systems, this is not the case, and the whole operator space may need to be explored. In this sense, the complexity of CD is tightly connected to quantum Krylov complexity <cit.>. The main issue in these developments, however, is that a precise description of the computational complexity of CD has so far been lacking. In particular, the following question persists: does the computational effort we put into determining the AGP, in order to boost the fidelity of the output state with the target state, compare favourably to that of AQC with simply a longer evolution time? Following up on this question, how should we define computational complexity so that a fair comparison is made between the two methods? After all, the complexity of AQC is typically measured in terms of physical evolution time, whereas that of CD, in the light of the aforementioned variational and Krylov space methods, boils down to the computing time a classical machine requires to solve a (linear) equation for the ansatz coefficients. In this work, we place AQC and CD on an equal footing by considering the complexity of both approaches in terms of the number of quantum gates as required by a fault-tolerant quantum computer. To this end, we first express AQC in terms of quantum gates, through well-known Trotter decomposition techniques (section <ref>), and subsequently develop a gate-based quantum algorithm for CD (sections <ref> and <ref>) that can be compared to gate-based AQC. We give upper bounds on the number of quantum gates required for both algorithms. In this way, we contribute to answering the posed questions. 
A crucial insight here is that the complexity of AQC is state-dependent, in the sense that it is known to scale with the inverse of the spectral gap around the instantaneous eigenstate of interest, while that of currently known approaches to CD is not. So far, CD approaches have aimed to approximate the exact AGP, which cancels excitations on the entire spectrum; however, if the objective is to prepare only a single eigenstate, this is inefficient, since only the excitations from the eigenstate of interest need to be suppressed. We remedy this by basing our CD algorithm on an approximate AGP that is blind to excitations below a certain energy gap cutoff, which is small enough to suppress transitions from the relevant eigenstate. In this way, the inverse gap around this eigenstate enters into the complexity description of CD. The main contribution of this paper is theorem <ref>, which states the gate-based CD algorithm and an upper bound on its worst-case quantum gate complexity. Together with our analysis of gate-based AQC (theorem <ref>), this leads to the remarkable result that the gate complexities of both algorithms scale as O(Δ^-(3 + o(1))ϵ^-(1 + o(1))), where Δ is the minimum energy gap along the path, and ϵ is the square root of the infidelity between the output state and the target state[The notation f ∈ o(g) means that lim_x→∞ f(x) / g(x) = 0. Specifically, o(1) is used to indicate a quantity that can be made arbitrarily small.]. Since Δ often decreases exponentially in the system size, especially in complex systems, the Δ dependence is usually the dominant factor in the complexity of AQC and CD; as such, for complex systems, we find that the complexities of both gate-based algorithms are (approximately) equivalent, which casts doubt on the advantage of CD from a worst-case, quantum computational complexity point of view. This work is structured as follows. In section <ref>, we discuss the formalism of counterdiabatic driving and set out notational conventions used throughout this work. Section <ref> addresses the gate complexity of AQC, detailing the Trotter decomposition that translates a physical time description to a gate-based description, and setting the stage for a comparison with gate-based CD. Section <ref> introduces the regularised truncated AGP, which brings the aforementioned gap cutoff into play and is at the heart of the gate-based CD algorithm. In section <ref>, we construct the gate-based CD algorithm, which is a simulation, in the form of a Trotter decomposition similar to gate-based AQC, of the path-ordered exponentiation of the regularised truncated AGP. Finally, in section <ref>, we reflect on the findings in this work and elaborate on the consequences of the similarities in the complexities of gate-based AQC and CD. § PRELIMINARIES §.§ Counterdiabatic driving Consider the time-dependent Schrödinger equation for a system parametrised by a time-dependent parameter λ(t): ı ∂_t |ψ(t)⟩ = H(λ(t)) |ψ(t)⟩. Clearly, the solution |ψ(t)⟩ will be dependent on λ(t). Let U(λ(t), λ(0)) be the unitary that connects the instantaneous eigenstates of H(λ(t)) with those of H(λ(0)): H(λ(t)) = ∑_n E_n(λ(t)) U(λ(t), λ(0)) |n(0)⟩⟨n(0)| U^†(λ(t), λ(0)). We can view this system in the rotating frame by writing |ψ(t)⟩ = U(λ(t), λ(0))|ψ̃⟩, where |ψ̃⟩ is some reference state that is independent of λ and t. In the rotating frame then, the time-dependent Schrödinger equation becomes ı ∂_t |ψ̃⟩ = [ U^† H U - ı λ̇ U^† ∂_λ U ] |ψ̃⟩. 
Since U^† H U is diagonal in the rotating eigenbasis of H, any nonadiabatic transitions can only be generated by the second term of the effective hamiltonian in eq. <ref>. Thus we can suppress all nonadiabatic transitions by time evolving with a different hamiltonian that cancels out this second term in the rotating frame: U^† H_CD U = U^† H U + ı λ̇ U^† ∂_λ U, or, going back to the lab frame, H_CD(t) = H(λ(t)) + λ̇(t) A(λ(t)), where the operator A = ı (∂_λ U) U^† is called the adiabatic gauge potential (AGP). Since evolution under H_CD, known as the counterdiabatic hamiltonian, causes no transitions into excited states, one may carry out a perfect evolution at any speed: U(λ_f, λ_i) = 𝒯exp[ -ı∫_0^T dt H_CD(t) ] for any T ∈ [0, ∞). The adiabatic gauge potential, then, can be viewed simply as the derivative operator ı∂_λ: ⟨m(λ)| A(λ) |n(λ)⟩ = ı ⟨m(λ)| ∂_λ U(λ, λ_i) |n(λ_i)⟩ = ı ⟨m(λ)| ∂_λ |n(λ)⟩, and therefore generates motion in λ space, U(λ_f, λ_i) = 𝒯exp[ -ı∫_λ_i^λ_f dλ A(λ) ], which is simply a special case of eq. <ref>, namely the limit T→0. From the Hellmann-Feynman theorem, A may be expressed in the eigenbasis of H(λ), and has the off-diagonal matrix elements ⟨m(λ)| A(λ) |n(λ)⟩ = ⟨m(λ)| ∂_λ H(λ) |n(λ)⟩ / (ı ω_mn(λ)), where ω_mn(λ) = E_m(λ) - E_n(λ). The diagonal elements are arbitrary since they depend on a gauge choice (namely, the phase of the eigenstates of H(λ)). Lastly, the form of the AGP in eq. <ref> may be expressed as a time integral <cit.>: A(λ) = 1/2 lim_η→0∫_-∞^∞ dτ e^-η|τ| sgn(τ) e^-ıH(λ)τ ∂_λ H(λ) e^ıH(λ)τ, since ⟨m(λ)| A(λ) |n(λ)⟩ = 1/2 ⟨m(λ)| ∂_λ H(λ) |n(λ)⟩ lim_η→0∫_-∞^∞ dτ e^-η|τ| sgn(τ) e^-ıω_mn(λ)τ = ⟨m(λ)| ∂_λ H(λ) |n(λ)⟩ lim_η→0 ω_mn(λ)/(ı(ω_mn(λ)^2 + η^2)) = ⟨m(λ)| ∂_λ H(λ) |n(λ)⟩/(ı ω_mn(λ)). Note that the limit η→0 must be taken after integration for the integral to converge. §.§ Norms and notation In this work, we will make use of two matrix norms: the spectral norm ‖·‖ (also known as the operator norm) and the trace norm ‖·‖_1. The spectral norm is defined as ‖O‖ = sup_|ψ⟩: ‖ψ‖=1 ‖O|ψ⟩‖, or equivalently as the largest singular value of O. For hermitian matrices, this corresponds to the largest absolute eigenvalue. The trace norm is defined by ‖O‖_1 = tr√(O^† O), which equates to the sum over all singular values of O (or absolute eigenvalues if O is hermitian). Both norms satisfy a triangle inequality and are submultiplicative, as well as unitarily invariant, in the sense that ‖U O V‖_(1) = ‖O‖_(1) for unitary matrices U, V. The norms are related by the inequality ‖O Q‖_1 ≤ ‖O‖ ‖Q‖_1 for any matrices O and Q, which may be derived from either Hölder's inequality or von Neumann's inequality. Furthermore, both norms satisfy the so-called telescoping property: let U_1, U_2, V_1, V_2 be unitary matrices; then ‖U_1 U_2 - V_1 V_2‖_(1) ≤ ‖U_1 - V_1‖_(1) + ‖U_2 - V_2‖_(1), which follows from the unitary invariance of the norms and a triangle inequality. From this it may be immediately seen that errors in a product of unitaries scale at most linearly in the number of factors. The spectral and trace norms give rise to the spectral distance ‖O - Q‖ and the trace distance 1/2‖O - Q‖_1, respectively. The trace distance is especially useful when dealing with density matrices. In particular, for pure states, it can be shown that 1/2‖ |ψ⟩⟨ψ| - |ϕ⟩⟨ϕ| ‖_1 = √(1 - |⟨ψ|ϕ⟩|^2). The derivation follows from explicit calculation of the eigenvalues of |ψ⟩⟨ψ| - |ϕ⟩⟨ϕ| after Gram-Schmidt orthogonalisation. We will call the right-hand side of this equation the square-root infidelity between |ψ⟩ and |ϕ⟩. It may be shown that the square-root infidelity is upper bounded by the euclidean distance: √(1 - |⟨ψ|ϕ⟩|^2) ≤ ‖ |ψ⟩ - |ϕ⟩ ‖. 
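Both of these statements are easy to confirm numerically; the following minimal sketch (in Python, with randomly drawn pure states) checks that the trace distance between two pure states equals the square-root infidelity and is bounded by the euclidean distance.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(d):
    """Draw a Haar-ish random normalised pure state of dimension d."""
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

d = 8
psi, phi = random_state(d), random_state(d)
rho_diff = np.outer(psi, psi.conj()) - np.outer(phi, phi.conj())

trace_dist = 0.5 * np.sum(np.linalg.svd(rho_diff, compute_uv=False))  # (1/2)||.||_1
sqrt_infid = np.sqrt(1.0 - abs(np.vdot(psi, phi)) ** 2)
eucl_dist = np.linalg.norm(psi - phi)

print(trace_dist, sqrt_infid, eucl_dist)   # first two agree; third upper-bounds them
assert np.isclose(trace_dist, sqrt_infid)
assert sqrt_infid <= eucl_dist + 1e-12
```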
Throughout this work, we will frequently encounterλ-dependent vector norm expressions of the formO(λ)|n(λ)⟩where|n(λ)⟩is an instantaneous eigenstate of the system hamiltonian. To simplify notation, we will use the shorthandO_n(λ)for this purpose. When clear from context, theλargument will be omitted. Finally, we introduce convenient notation for the following integral expressions: O_n, p = ( ∫_^λ̣ O(λ)|n(λ⟩^p )^1 / p ; O_∞, p = ( ∫_^λ̣ O(λ)^p )^1 / p forp ∈ℕ_>0;O_n, ∞andO_∞, ∞are understood to meanmax_λ∈[, ] O(λ)|n(λ)⟩andmax_λ∈[, ] O(λ)respectively. Lastly, when working with big-Onotation, we will writef ∈Õ(g)to indicatef ∈O(g (g)). § GATE COMPLEXITY OF GATE-BASED ADIABATIC COMPUTING Before we we discuss gate-based counterdiabatic driving and its complexity, we need to establish exactly what is meant by complexity and how this is compared to standard adiabatic computing. The complexity of AQC is typically quantified by the physical timeTwhich sets the speed at whichλ(t)is varied. A very useful and rigorous bound onT, in order to achieve a given fidelity between the time-evolved state and the instantaneous eigenstate of interest, is given by Jansen et al. <cit.>. Let T > 0, and let λ(·) : → be a function such that λ(0) = and λ(T) =. Suppose (̋λ) is hermitian, gapped and twice differentiable in λ on the interval [, ]. Let (T, 0) = 𝒯exp[-ı∫_0^T ṭ (̋λ(t))]. Let |n()⟩ be the the n-th instantaneous eigenstate of (̋), and |n()⟩ = lim_T→∞(T, 0)|n()⟩ (i.e. the n-th eigenstate of (̋)). Then |⟨n()|(T, 0) |n()⟩^2 ≥ 1 - ϵ^2 provided that T ≥Θ(ϵ^-1Δ_n^-3_∞, 2^2). where Δ_n is the minimum gap around |n(λ)⟩ on the interval [, ]. Now, since the objective is to run a counterdiabatic driving protocol on a digital quantum computer, both CD and AQC should be considered from the viewpoint of digitised simulation methods. In other words, physical time, with which the performance of adiabatic computing is typically quantified, is insufficient here since it is not a meaningful quantity in the description of gate-based CD and therefore leads to an apples-to-oranges comparison. Nonetheless, physical time is an important ingredient in simulation algorithms, and crucially influences the number of digital quantum operations that is required for their implementation. Several time-dependent hamiltonian simulation methods are known, most notably Lie-Trotter-Suzuki (LTS) decompositions <cit.> and truncated Dyson series methods <cit.>. For both methods, the hamiltonian is typically provided as a linear combination of unitaries (LCU),(̋t) = ∑_i = 1^ℓβ_i(t) V_i, where theV_iare unitary and theβ_iare nonnegative scalars. LTS formulae approximate a time-ordered exponential(T, 0)as a product(T, 0) ≈∏_j ^-ıβ_i_j(t_j) V_i_j δt_j; each factor in this product, a time-independent operator exponential, may then be simulated as a multi-qubit rotation gate. Truncated Dyson series, on the other hand, seek to implement the approximation(T, 0) ≈∑_k = 0^K (-ı)^k/k! ∫_0^T ⋯∫_0^T 𝒯[(̋t_k) ⋯(̋t_1)] ^̣k t. In this process, oracle access to the hamiltonian is assumed, in the form of a time-independent and a time-dependent oracle, where the time-dependent oracle is a direct sum of time-independent oracles over a finite set of times in the simulation interval (see ref. for details). While efficient methods have been developed to decompose the time-independent oracle into a sequence of quantum gates <cit.>, such decompositions for the time-dependent oracle are only known in special cases <cit.>. 
For this reason, we will use LTS formulae to describe gate-based AQC and CD throughout the rest of this work. §.§ Lie-Trotter-Suzuki decompositions Trotter formulae are a well-known method to approximate ordered exponentials by a decomposition into a product of path-independent operator exponentials. The starting point is the observation that, for path-independent operatorsOandQandδt ∈ℝ,^-ı(O + Q) δt - ^-ıO δt ^-ıQ δt ∈O(δt^2), which yields a favourable error scaling ifδtis small. Ifδtis large, one may divideδtintorparts to obtain^-ı(O + Q) δt - (^-ıO δt / r ^-ıQ δt / r)^r ∈O(δt^2 / r)where the additional factorrin the error follows from linear propagation of errors as a result of the telescoping property of the operator norm. To achieve a desired precisionϵ,rmust then be taken to be at leastO(δt^2 / ϵ). More sophisticated decompositions, known as Lie-Trotter-Suzuki (LTS) formulae, are due to Suzuki <cit.>. These decompositions may be defined for any orderk ∈ℕ_>0, with every order representing a more fine-grained approximation, and achieveO(δt^2k + 1)error scaling. Again dividing a largeδtintorsmaller segments, this results in anrscalingO(δt^1 + 1 / 2k ϵ^1 / 2k)to achieve precisionϵ. Similar formulae may be constructed for path-ordered exponentials. While the generalisation of path-independent decompositions themselves to path-dependent ones is straightforward, general error bounds were found only years later by Wiebe et al. <cit.>. Their result is very similar to the path-independent case, but imposes conditions on the differentiability of the operator exponents. We base the analysis in this section on their work, accurately describing the decomposition of the ordered exponential(T, 0)and the requirements to achieve a given precision. The path-ordered exponentialUmay be implemented by akth-order,r-segment LTS product formula. The first-order single-segment formulaŨ_1, 1(T, 0), for the operator(̋t) = ∑_i = 1^ℓβ_i(t) V_iis given by Ũ_1, 1(T, 0) = ( ∏_i = 1^ℓexp [-ıβ_i(t + δ t / 2) V_i δ t / 2] ) ( ∏_i = ℓ^1 exp [-ıβ_i(t + δ t / 2) V_i δ t / 2] ). Higher-order LTS product formulas are built by repeatedly applying the following recursive process toŨ_1: Ũ_k + 1, 1(t + δ t, t) = Ũ_k, 1(t + δ t, t + [1 - s_k] δ t) Ũ_k, 1(t + [1 - s_k] δ t, t + [1 - 2 s_k] δ t) ×Ũ_k, 1(t + [1 - 2 s_k] δ t, t + 2 s_k δ t) Ũ_k, 1(t + 2 s_k δ t, t + s_k δ t) Ũ_k, 1(t + s_k δ t, t) withs_k = (4 - 4^1 /(2 k + 1))^-1. Now if everyβ_i(t)is2ktimes differentiable int, then the error is bounded as in the path-independent case: (T, 0) - _k, 1(T, 0)∈ O(δ t^2k + 1). This means that ifδtis small, then the approximation error decreases monotonically with the orderk. Ifδtis not small, we divide it intorsegments and apply a LTS product formula for each segment: Ũ_k, r(t + δ t, t) = ∏_s = 1^r Ũ_k(t + s δ t / r, t + (s - 1) δ t / r). We refer to this decomposition as akth-orderr-segment LTS formula. The work of Wiebe et al. <cit.> now provides a condition onrthat is sufficient to upper bound the spectral distance between an ordered exponential of a sum of operators and its LTS simulation. Let H(t) = ∑_i H_i(t) be defined on the interval [, ] such that each H_i(t) is hermitian and 2k times differentiable on the entire interval. Let δ t = -. Furthermore, suppose max_p = 0, 1, …, 2k[ max_t ∈ [, ]( ∑_i ∂^p/∂ t^p H_i(t) )^1 / (p + 1)] ≤Λ. 
If ϵ≤min{(9/10)(5/3)^k Λδ t, 1}, then the spectral distance between the ordered exponential 𝒯exp[-ı∫_^ṭ H(t)] and its kth-order r-segment LTS decomposition is at most ϵ, provided that r ≥ 5k Λδ t ( 5/3)^k ( Λδ t/ϵ)^1/2k. §.§ Gate-based complexity of AQC We can now put a bound on the gate complexity of gate-based AQC by combining theorems <ref> and <ref>. Theorem <ref> in principle gives a bound on the number of operator exponentials required to simulate a time evolution in a time interval of given length. Since we assumed the hamiltonian to be given as a linear combination of unitaries, every exponential is a multi-qubit rotation gate; if required, such gates can be decomposed into a number of elementary gates (single-qubit rotations, CNOT gates and the like) that depends only on the locality of the respective unitaryV_i(i.e. the number of qubit it acts on nontrivially). We will consider the maximum locality of these terms as a constant, and as such, in big-Onotation, we equate the number of elementary gates to the number of operator exponentials. What remains is to count the number of operator exponentials, upper boundΛfrom theorem <ref> and insert the time lengthTfrom theorem <ref>. Let T ≥Θ(ϵ^-1Δ_n^-3_∞, 2^2) as in theorem <ref>, and let λ(t) be such that λ(0) = and λ(T) =. Suppose (̋λ(t)) = ∑_i = 1^ℓβ_i(λ(t)) V_i is hermitian and 2k times differentiable in t on the interval [0, T], with β_i(λ(t)) ∈ and V_i unitary. Let {|n(λ(t))⟩} be the set of its instantaneous eigenstates. Suppose the spectrum of (̋λ(t)) around an eigenstate |n(λ(t))⟩ has a gap of at least Δ_n on the entire interval. Let (, ) = ∑_n |n()⟩⟨n()|. Then there exists a gate-based quantum algorithm (T, 0) such that ( U(, ) - (T, 0)) |n()⟩≤ O(ϵ) which can be implemented with O( ℓ( 25/3)^k k Λ^1 + 1/2k_∞, 2^2 + 1/k/ϵ^1 + 1/kΔ_n^3 + 3/2k) quantum gates, where Λ = max_0 ≤ p ≤ 2k∂_t^p β_1, ∞^1 / (p + 1) and ∂_t^p β_1, ∞ = max_t ∈ [0, T]∑_i = 1^ℓ |∂_t^p β_i|. Let (T, 0) = 𝒯exp[-ı∫_0^T ṭ (̋t)] and identify (T, 0) as a kth-order r-segment LTS decomposition of (T, 0). Observe that a first-order single-segment LTS decomposition _1, 1 of (T, 0) (eq. <ref>) contains 2ℓ operator exponentials, and that _k, r is a product of 2ℓ 5^k - 1 r operator exponentials. Theorem <ref> gives the conditions under which ((, ) - (T, 0)) |n()⟩≤ O(ϵ); we use the bound on r from theorem <ref>, which assures that ((T, 0) - (T, 0)) |n()⟩≤ O(ϵ) so that, by a triangle inequality, eq. <ref> holds. To use this bound, we first notice that the choice Λ = max_0 ≤ p ≤ 2k∂_t^p β_1, ∞^1 / (p + 1) is sufficient to fulfill the requirement of eq. <ref> in theorem <ref>. From theorem <ref> we then insert T to obtain that O( ℓ( 25/3)^k k Λ^1 + 1/2k T^1 + 1/2kϵ^1/2k) = O( ℓ( 25/3)^k k Λ^1 + 1/2k_∞, 2^2 + 1/k/ϵ^1 + 1/kΔ_n^3 + 3/2k) quantum gates are sufficient to guarantee an overall error of at most O(ϵ). This proves the theorem. § REGULARISED TRUNCATED GAUGE POTENTIAL In the following sections, we will describe our approach to a gate-based counterdiabatic driving algorithm. We will base our methods on the time-integral expression for the adiabatic gauge potential from eq. <ref>. However, in its stead, we will use its regularised truncated versionA_η, a. Here, regularised means that we fix anη> 0instead of taking the limitη→0; furthermore, we restrict the integration to a bounded interval[-a, a]. That is: A_η, a(λ) = 1/2∫_-a^a τ̣ ^-η|τ|(τ) ^-ı(̋λ) τ(λ) ^ı(̋λ) τ. 
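For a small generic hamiltonian, the regularised truncated AGP can be evaluated directly by numerical quadrature and compared against the exact matrix elements ⟨m(λ)|∂_λH(λ)|n(λ)⟩/(ıω_mn). The sketch below is a toy illustration, with a random hermitian pair standing in for H(λ) and ∂_λH(λ) and with η and a chosen by hand rather than via the bounds derived below; the two objects then agree on the off-diagonal elements up to corrections of order η²/ω³ and e^{-ηa}.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_hermitian(d):
    m = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (m + m.conj().T) / 2

d = 6
H = random_hermitian(d)        # stand-in for H(lambda) at one value of lambda
dH = random_hermitian(d)       # stand-in for the parameter derivative of H

E, V = np.linalg.eigh(H)
dH_eig = V.conj().T @ dH @ V                 # derivative in the eigenbasis of H
omega = E[:, None] - E[None, :]              # omega_mn = E_m - E_n
off = ~np.eye(d, dtype=bool)

# Exact AGP matrix elements: <m|A|n> = <m|dH|n> / (i omega_mn) for m != n
A_exact = np.zeros((d, d), dtype=complex)
A_exact[off] = dH_eig[off] / (1j * omega[off])

# Regularised truncated AGP: in the eigenbasis the tau integral factorises as
# <m|A_{eta,a}|n> = -i * (int_0^a e^{-eta*tau} sin(omega_mn*tau) dtau) * <m|dH|n>
eta, a = 1e-3, 5e3                           # eta << the gaps here, a >> 1/eta
tau = np.linspace(0.0, a, 200001)
dtau = tau[1] - tau[0]
coeff = np.empty((d, d))
for m in range(d):
    for n in range(d):
        f = np.exp(-eta * tau) * np.sin(omega[m, n] * tau)
        coeff[m, n] = dtau * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule
A_reg = -1j * coeff * dH_eig

print(np.max(np.abs((A_reg - A_exact)[off])))   # small residual regularisation error
```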
One may view this approximation choice as a straightforward way to get rid of unworkable infinities: as we will show later, it turns out that the algorithm complexities grow withη^-1anda. But there is a more intuitive reason to regularise the AGP: considering the matrix elements ofAin the energy eigenbasis (eq. <ref>), one would intuitively expect counterdiabatic driving with the regularised AGP to suppress only those level transitions whereη^2 ≪ω_mn^2<cit.>. In this sense,ηcan be viewed as a gap cutoff. Since we're only interested in suppressing the level transitions from then-th eigenstate, a cutoffη∼Δ_n^νfor someν> 0, withΔ_nthe minimum gap around then-th eigenstate, should suffice. The benefit is that, considering the exponential factor^-η|τ|, the tail falls off quickly withτand thus an evolution withA_ηshould be easier to approximate than one with the exact AGP. For the remainder of the work, we will work in the limitT→0(eq. <ref>), for two reasons. First of all, we expect the complexity of simulating (exponentials of) the AGP to be dominant over that of simulating (exponentials of) the system hamiltonian, especially in regimes of smallΔ_n; we will see in section <ref> that this is indeed the case. Secondly, in the proofs of the error bounds that follow, the system hamiltonian is always cancelled out (see eq. <ref> in the proof of lemma <ref>). As such, working in the limitT→0and thereby leaving out the system hamiltonian merely simplifies the calculations. We proceed to establish a more precise relation betweenη,aandΔ_n, which puts bounds onηandafor arbitrary approximation error. Let _η, a(, ) = 𝒯exp[-ı∫_^λ̣_η, a(λ)], where _η, a(λ) is defined as in eq. <ref>, and let |n()⟩ be the n-th eigenstate of (̋). Then ((, ) - _η, a(, ))|n()⟩≤ϵ if we take η = 1/√(2)Δ_n^3/2ϵ^1/2_n, 1^-1/2; a = 1/ηlog( 2 Δ_n + η/Δ_nϵη_n, 1) = Õ(η^-1). We first bound the unitary error by an error in the AGP: ( U(, ) - U_η, a(, )) |n()⟩ = ( U_η, a(, ) U(, ) - I) |n()⟩ = ∫_^λ̣ ̣/λ̣ ( U_η, a(λ, ) U(λ, )) |n()⟩ = ∫_^λ̣ U_η, a(λ, ) [ A(λ) - A_η, a(λ)] U(λ, ) |n()⟩ ≤∫_^λ̣ ( A(λ) - A_η, a(λ)) |n(λ)⟩ where the last inequality follows from a triangle inequality on the integral. To evaluate the AGP error, we make use of the energy eigenbasis expansion (omitting the λ argument for brevity and using that ⟨k|∂_λ k⟩ = 0) ^-ıτ̋^ıτ̋ = ∑_k ∂_λ E_k |k⟩⟨k| + ∑_k; m≠ k^-ı (E_m - E_k) τ|m⟩⟨m||k⟩⟨k|. Since ^-η|τ| is symmetric in τ, it follows that ∑_k 1/2∫_-∞^∞τ̣(τ) (1 - ^-η|τ| 1_τ∈[-a, a]) ∂_λ E_k |k⟩⟨=| 0. Furthermore, for any eigenstate |n⟩, ∑_k m≠ k1/2∫_-∞^∞τ̣(τ) (1 - ^-η|τ| 1_τ∈[-a, a]) ^-ı (E_m - E_k) τ|m⟩⟨m||k⟩⟨k|n⟩ = -ı∑_m≠ n c_η, a(ω_mn) ⟨m||n⟩|m⟩ where c_η, a(ω) := 1/ω - ω/ω^2 + η^2 + ^-a ηcos(aω) ω + sin(aω) η/ω^2 + η^2. Observe that |c_η, a(ω)| ≤ g_η, a(ω) := η^2/|ω|^3 + ^-η a( 1/|ω| + 1/η) and that g_η, a(ω) is a decreasing function of |ω|; therefore we may give the upper bound ∫_^λ̣ ( A(λ) - A_η(λ)) |n(λ)⟩ = ∫_^λ̣ √(∑_m≠ n |c_η, a(ω_mn(λ))|^2 |⟨n(λ)|∂_λ(̋λ)|m(λ)⟩|^2) ≤∫_^λ̣ max_m g_η, a(ω_mn(λ)) √(∑_m≠ n |⟨n(λ)|∂_λ(̋λ)|m(λ)⟩|^2) ≤_n, 1 g_η, a(min_m min_λω_mn(λ)) = _n, 1 g_η, a(Δ_n). The requirement that this error be at most ϵ is then satisfied by setting η = Δ_n^3/2 (ϵ / 2)^1/2_n, 1^-1/2 ⇒ η^2/Δ_n^3_n, 1≤ϵ/2 and a = 1/ηlog( Δ_n + η/Δ_nηϵ / 2_n, 1) ⇒ ^-η a( 1/Δ_n + 1/η) _n, 1≤ϵ/2. § GATE-BASED COUNTERDIABATIC DRIVING With the regularised truncated AGP in hand, we can now discuss approaches to gate-based counterdiabatic driving. 
For this purpose, we will understand CD as the task of implementing the ordered exponential_η, a(, ) = 𝒯 exp[-ı∫_^λ̣ _η, a(λ)]with some unitary(, ), up to a given precisionϵ; by a simple triangle equality, the total error((, ) - (, ))|n()⟩is then at mostO(ϵ). In this section, we present a deterministic gate-based algorithm that achieves exactly this goal, and provide an upper bound on the complexity of running this algorithm on a fault-tolerant gate-based quantum computer. The problem of implementing ordered exponentials of hermitian operators is far from new: several deterministic time-dependent hamiltonian simulation routines exist for implementing ordered exponentials <cit.>. However, these methods assume that the integrand in the exponent is a sum of finitely many operator terms. As such, we first need to approximate_η, awith a suitable sum. We give a construction that is similar to an operator-valued Riemann approximation to the integral, except we make a more clever choice of evaluation points and weights way to obtain a smaller error bound. As such, all terms in the sum are proportional to the integrand in eq. <ref>, i.e. an operator of the formexp[-ı(̋λ) τ] (λ) exp[ı(̋λ) τ]. This leads nother issue: the mentioned time-dependent hamiltonian simulation routines assume that the operator terms in the sum are given, in the same sense that we assume(̋λ)and(λ)given as sums of simple (weighted unitary) terms. Clearly, this is not the case here, since one would first have to simulate all operatorsexp[±ı(̋λ)τ]to construct the desired operator terms. To avoid nested simulation, we should integrate the simulation of theexp[±ı(̋λ)τ]operators in the procedure for approximatingU_η, a. Fortunately, this can be done in a straightforward way using Trotter formulae, which approximate a path-ordered exponential as a product of path-independent operator exponentials: 𝒯exp[ -ı∫λ̣ ∑_j O_j(λ) ] ≈∏_i exp[-ı O_j_i(λ_i) δλ_i]. The key observation here is that, when we exponentiate the integrand terms, the operatorsexp[±ı(̋λ)τ]may be taken out of the exponent: exp[ -ı (^-ı(̋λ) τ(λ) ^ı(̋λ) τ) δλ] = exp[-ı(̋λ) τ] exp[-ı(λ) δλ] exp[ı(̋λ) τ]. This follows immediately from the fact thatexp[O ] = exp[O] , for anyOand unitary, which may be shown using a Taylor expansion. As such, we avoid double exponentiation and are left with only a product of ordinarily simulatable operator exponentials. In what follows, we first give a detailed description of our summation method, which gives rise to an AGP approximation_η, a^M, q ≈_η, aand a corresponding evolution operator_η, a^M, q(, ) = 𝒯exp[-ı∫_^λ̣ _η, a^M, q(λ)]. Subsequently, we demonstrate how_η, a^M, qmay be simulated through Lie-Trotter-Suzuki formulae, producing the unitary. This follows a pattern that is very similar to the discussion of gate-based AQC in section <ref> and, as such, allows to make a fair complexity comparison. We establish conditions on the relevant parameters to upper bound the errors(_η, a(, ) - _η, a^M, q(, ))|n()⟩and(_η, a^M, q(, ) - (, ))|n()⟩; the error(_η, a(, ) - (, ))|n()⟩is then simply the sum of these two errors, by a triangle inequality. Finally, setting this error to at mostO(ϵ)leads to a gate complexity expression for the algorithm. §.§ Weighted sum approximation The most elementary way of approximating an integral of a scalar-valued function is a Riemann sum: the integration range is partitioned intoMsubintervals and one approximates∫_a^b x̣ f(x) ≈∑_κ= 1^M f(x_κ) δx_κ. A similar thing can be done with operator functions. 
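Before constructing the operator quadrature, the conjugation identity above, exp[-ı(e^{-ıHτ} ∂_λH e^{ıHτ})δλ] = e^{-ıHτ} exp[-ı ∂_λH δλ] e^{ıHτ}, which is what keeps the Trotter factors free of nested exponentiation, can be sanity-checked numerically in a few lines. In the sketch below, a random hermitian pair stands in for H(λ) and ∂_λH(λ); the values of τ and δλ are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

def random_hermitian(d):
    m = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (m + m.conj().T) / 2

d, tau, dlam = 5, 0.7, 0.05
H, dH = random_hermitian(d), random_hermitian(d)
W = expm(-1j * H * tau)                          # W = e^{-i H tau}, unitary

lhs = expm(-1j * (W @ dH @ W.conj().T) * dlam)   # exponential of the conjugated generator
rhs = W @ expm(-1j * dH * dlam) @ W.conj().T     # conjugation pulled outside the exponential

print(np.linalg.norm(lhs - rhs))                 # zero up to floating-point error
```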
Let us write A_η, a = 1/2∫_0^a τ̣ ^-ητ( ∂_λ(̋τ) - ∂_λ(̋-τ) ) where we dropped theλargument for legibility, and denoteO(τ) = ^-ıτ̋ O ^ıτ̋for any operatorO. We partition the interval[0, a]intoMsubintervals, pickingτ_0 < τ_1 < ⋯< τ_Msuch thatτ_0 = 0andτ_M = a; defineδτ_κ= τ_κ- τ_κ- 1. Note that the subintervals need not be uniform. The operator Riemann sum then takes the form _η, a≈1/2∑_κ = 1^M δτ_κ^-ητ_κ( (τ_κ) - (-τ_κ) ). However, this is a rather crude method in that the error in each subinterval is relatively large, so that many subintervals are needed to make the overall error small. A more clever method evaluates the integrand at multiple points in each interval and weighs each term accordingly: A_η, a≈ A_η, a^M, q = 1/2∑_κ = 1^M δτ_κ∑_α = 0^q w_κ, α^-ητ_κ, α( ∂_λ(̋τ_κ, α) - ∂_λ(̋-τ_κ, α) ). The weightsw_κ, αare determined through Lagrange interpolation (see appendix <ref>) and are such that∑_α= 0^q w_κ, α = 1for allκ. Furthermore, they depend only on our choice of the interpolation pointsτ_κ, αand not on the value of the integrand at those points. We will not specify the choice of the interpolation points, and instead rely on general Lagrange interpolation bounds for estimating the error made in the approximation. This gives rise to the following lemma. Let _η, a(, ) be as in lemma <ref>, let A_η, a^M, q(λ) be as in eq. <ref> and let _η, a^M, q(, ) = 𝒯exp[-ı∫_^λ̣_η, a^M, q(λ)]. Assume η≤min_λ∈[, ](̋λ). If we choose τ_κ = -q + 2/ηlog( 1 - κ/M (1 - ^-η a / (q + 2)) ), κ∈{0, …, M} and if we set, for any choice of interpolation points τ_κ, 0 < τ_κ, 1 < ⋯ < τ_κ, q within each subinterval [τ_κ - 1, τ_κ], w_κ, α = 1/δτ_κ∫_τ_κ - 1^τ_κτ̣∏_0≤β≤ q β≠ατ - τ_κ, β/τ_κ, α - τ_κ, β, α∈{0, …, q} then (_η, a(, ) - _η, a^M, q(, ))|n()⟩≤ϵ provided that M ≥max{3 (2a)^1 + 1 / (q + 1)/ϵ^1 / (q + 1) (q + 1)_∞, ∞∂_λ_n, 1^1 / (q + 1), ^η a / (q + 2) - 1 }. As before, we start from the observation that ( U_η, a(, ) - U_η, a^M, q(, ))|n()⟩ ≤∫_λ_ı^λ̣ A_η, a(λ) - A_η, a^M, q(λ)_n(λ). By a triangle inequality, we have A_η, a - A_η, a^M, q_n ≤1/2∑_κ=1^M ( ∫_τ_κ - 1^τ_κτ̣ ^-ητ∂_λ(̋τ) - ∑_α = 0^q w_κ, α^-ητ_κ, α∂_λ(̋τ_κ, α) _n + ∫_τ_κ - 1^τ_κτ̣ ^-ητ∂_λ(̋-τ) - ∑_α = 0^q w_κ, α^-ητ_κ, α∂_λ(̋-τ_κ, α) _n ). The first term on the right-hand side of eq. <ref> can be upper bounded by expanding in the eigenbasis of $̋ and applying element-wise interpolation. For each vector element^-ητ⟨m|(τ) |n⟩, which is a scalar function ofτ, we can then use the error bound in eq. <ref>. This works for our vector-valued case because_σ_m∈[τ_κ - 1, τ_κ]| ⟨m|^̣q + 1/τ̣^q + 1∂_λ (^-ητ(̋τ)) |_τ = σ_m|n⟩|is identical for everym. We obtain ∫_τ_κ - 1^τ_κ τ̣ ^-ητ∂_λ(̋τ) - ∑_α = 0^q w_κ, α^-ητ_κ, α∂_λ(̋τ_κ, α) _n = ( ∑_m | ∫_τ_κ - 1^τ_κτ̣ ^-ητ⟨m|∂_λ(̋τ)|n⟩ - ∑_α = 0^q w_κ, α^-ητ_κ, α⟨m|∂_λ(̋τ_κ, α) |n⟩|^2 )^1 / 2 ≤δτ_κ^q + 2/(q + 1)!( ∑_m max_σ_m ∈ [τ_κ - 1, τ_κ]| ⟨m|^̣q + 1/τ̣^q + 1 (^-ητ∂_λ(̋τ)) |_τ = σ_m|n⟩|^2 )^1 / 2 = δτ_κ^q + 2/(q + 1)!^-ητ_κ - 1∑_s = 0^q + 1q + 1s (-η)^s [-ı,̋ [-ı,̋⋯, [-ı,̋_q + 1 - s∂_λ]̋⋯]] _n ≤δτ_κ^q + 2/(q + 1)!^-ητ_κ - 1∑_s = 0^q + 1q + 1s η^s (2 )^q + 1 - s∂_λ_n = δτ_κ^q + 2/(q + 1)!^-ητ_κ - 1 (η + 2 _∞)^q + 1∂_λ_n. The calculation for theτ↔ -τterm in eq. <ref> is identical; therefore A_η, a - A_η, a^M, q(λ) _n(λ)≤ Q^q_n(λ) ∑_κ=1^M δτ_κ^q + 2^-ητ_κ-1 where we have defined Q^q_n = (η + 2 _∞)^q + 1∂_λ_n/(q + 1)!. We will now choose theτ_κ, so as to ultimately derive the required value ofM. 
Instead of using constant intervalsδτ_κ, it should be intuitively beneficial to pickδτ_κsmall for thoseκwhere^-ητis large, and pickδτ_κlarge when^-ητis small. An ansatz that makes every term in the sum∑_κ=1^M δτ_κ^q + 2^-ητ_κ-1approximately constant can be found by regardingτ(κ)as a continuous function ofκand solving the differential equation dτ/dκ^-ητ / (q + 2) = ζ for some positive constantζ, subject to the boundary conditionτ(0) = 0. This differential equation is solved byτ(κ) = -q + 2/ηlog(1 - ηκζ/q + 2); the conditionτ(M) = athen implies ζ = q + 2/η M(1 - ^-η a / (q + 2)) ≤a/M so we obtain the solution τ(κ) = -q + 2/ηlog( 1 - κ/M (1 - ^-η a / (q + 2)) ). Each term in the sum then contributes approximatelyζ^q + 2. To make this exact, we Taylor-expandτ(κ)aboutκ - 1to obtain δτ_κ ^-ητ_κ - 1 / (q + 2) - ^-ητ_κ - 1 / (q + 2)τ̣/κ̣(κ - 1) _ = ζ = ^-ητ_κ - 1 / (q + 2)1/2^̣2τ/κ̣^2(κ̂) (for some κ̂∈ [κ - 1, κ]) = q + 2/2η^-ητ_κ - 1 / (q + 2)( 1 - ^-η a / (q + 2)/M - (1 - ^-η a / (q + 2)) κ̂)^2 ≤q + 2/2η( 1 - κ - 1/M (1 - ^-η a / (q + 2)) ) ( (1 - ^-η a / (q + 2))/M - (1 - ^-η a / (q + 2)) κ)^2 = q + 2/2Mη( [1 - ^-η a / (q + 2)]^2/[M - (1 - ^-η a / (q + 2)) κ] + [1 - ^-η a / (q + 2)]^3/[M - (1 - ^-η a / (q + 2)) κ]^2) ≤(q + 2)[1 - ^-η a / (q + 2)]/2Mη( ^η a / (q + 2) - 1/M + ( ^η a / (q + 2) - 1/M)^2 ) ≤(q + 2)[1 - ^-η a / (q + 2)]/Mη = ζ if M ≥^η a / (q + 2) - 1. In the end, we find that the interpolation error obeys A_η, a - A_η, a^M, q_n ≤ M ( 2a/M)^q + 2 Q^q_n = (2a)^q + 2/M^q + 1(η + 2 )^q + 1∂_λ_n/(q + 1)! implying that ( U_η, a(, ) - U_η, a^M, q(, ))|n()⟩≤(2a)^q + 2/M^q + 1(q + 1)!∫_^λ̣ (η + 2 (̋λ)_∞)^q + 1∂_λ(̋λ)_n(λ). We simplify the expression above by using the assumption thatη≤min_λ(̋λ), and by using Hölder's inequality[Define the integral norm f_p = ( ∫_a^b x̣ |f(x)|^p )^1 / p with f_∞ = sup_x ∈ [a, b] |f(x)| and let f, g be such that f_p, g_p and fg_p are bounded for 1 ≤ p ≤∞; then Hölder's inequality states that fg_1 ≤f_p g_p' if 1 / p + 1 / p' = 1. We choose p = ∞ and p' = 1.]: ( U_η, a(, ) - U_η, a^M, q(, ))|n()⟩ ≤3^q + 1(2a)^q + 2/M^q + 1(q + 1)!∫_^λ̣ (̋λ)_∞^q + 1∂_λ(̋λ)_n(λ) ≤3^q + 1(2a)^q + 2/M^q + 1(q + 1)!_∞, ∞^q + 1∂_λ(̋λ)_n, 1. To make this error at mostϵ, it is sufficient to take M ≥max{3 (2a)^1 + 1 / (q + 1)/ϵ^1 / (q + 1) (q + 1)_∞, ∞∂_λ_n, 1^1 / (q + 1), ^η a / (q + 2) - 1 } where we used Stirling's approximation to rewrite((q + 1)!)^-(q + 1)≤^1 - 1 / (q + 1) (q + 1)^-1≤ (q + 1)^-1. §.§ Lie-Trotter-Suzuki expansion of the weighted sum The path-ordered exponential U_η, a^M, q(, ) = 𝒫exp[-ı∫_^λ̣ A_η, a^M, q(λ)]may be implemented by akth-orderr-segment LTS product formula. These formulas follow the same description as given in section <ref>. The first-order single-segment formulaŨ_1, 1(, ), for the operator A_η, a^M, q(λ) = ∑_κ = -M^M ∑_α = 0^q C_κ, α(λ) where C_κ, α(λ) = ^-ı(̋λ) τ_κ, α B_κ, α(λ) ^ı(̋λ) τ_κ, α; B_κ, α(λ) = 1/2δτ_κ w_κ, α ^-η|τ_κ, α|τ_κ, α∂_λ(̋λ) (with C_0, α = 0), is given by Ũ_1, 1(, ) = ( ∏_κ=-M^M ∏_α = 0^q exp [-ı C_κ, α( + δλ / 2) δλ / 2] ) ( ∏_κ=M^-M∏_α = q^0 exp [-ı C_κ, α( + δλ / 2) δλ / 2] ). whereδλ = - . Here, the operator exponentials in the product take the form exp[-ı C_κ, α(λ) δλ] = exp[-ı(̋λ) τ_κ, α] exp[ -ı/2^-η|τ_κ, α|τ_κ, α w_κ, α∂_λ(̋λ) δτ_κδλ] exp[ı(̋λ) τ_κ, α]. Higher-order, multi-segment formulas are constructed as in eqs. <ref> and <ref>. In the same way, we can use theorem <ref> to put a bound on the number of path-independent operator exponentials required to achieve a certain precision. 
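The grid points τ_κ and the interpolation weights w_{κ,α} entering the C_{κ,α} terms depend only on η, a, M and q, so they can be precomputed classically. A minimal sketch of this precomputation is given below; the equally spaced interpolation nodes inside each subinterval are an arbitrary choice (the lemma above permits any choice of nodes), and the example values of η, a, M and q are purely illustrative.

```python
import numpy as np

def quadrature_grid(eta, a, M, q):
    """Non-uniform grid tau_0 < ... < tau_M from the lemma above:
    tau_kappa = -(q+2)/eta * log(1 - (kappa/M)*(1 - exp(-eta*a/(q+2))))."""
    kappa = np.arange(M + 1)
    return -(q + 2) / eta * np.log1p(-(kappa / M) * (1.0 - np.exp(-eta * a / (q + 2))))

def lagrange_weights(nodes, t_lo, t_hi):
    """Weights w_alpha = 1/(t_hi - t_lo) * int_{t_lo}^{t_hi} prod_{beta != alpha}
    (t - t_beta)/(t_alpha - t_beta) dt, obtained by integrating each Lagrange
    basis polynomial exactly."""
    q = len(nodes) - 1
    w = np.empty(q + 1)
    for alpha in range(q + 1):
        others = np.delete(nodes, alpha)
        poly = np.poly(others) / np.prod(nodes[alpha] - others)   # L_alpha(t)
        anti = np.polyint(poly)
        w[alpha] = (np.polyval(anti, t_hi) - np.polyval(anti, t_lo)) / (t_hi - t_lo)
    return w

# Example parameters (illustrative only)
eta, a, M, q = 0.05, 200.0, 50, 3
taus = quadrature_grid(eta, a, M, q)
for k in range(1, M + 1):
    nodes = np.linspace(taus[k - 1], taus[k], q + 1)   # arbitrary node choice
    w = lagrange_weights(nodes, taus[k - 1], taus[k])
    assert np.isclose(w.sum(), 1.0)                    # weights sum to one per subinterval
```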
We apply this result to our case by giving an explicit expression forΛcorresponding to the ordered exponential_η, a^M, q. If the spectral distance between_η, a^M, qand its LTS decomposition_k, ris at mostϵ, then the euclidean state distance( U_η, a^M, q(, ) - _k, r(, )) |n()⟩is also at mostϵ. To use theorem <ref>, we require bounds on the operator norms of the higher-order derivatives of the C_κ, αoperators. For the case of interpolating hamiltonians(̋λ) = + f(λ) , these norms take on a simple form, since ∂^p/∂λ^p( ^-ı(̋λ) τ^-ı(̋λ) τ) = ∂^p + 1 f(λ)/∂λ^p + 1 = ∂_λ^p + 1(̋λ); however, for general hamiltonians, the higher-order derivatives of C_κ, α(λ)are less straightforward and their norms can grow rapidly withp, as a result of compounding product rules. Since there can be a significant discrepancy in the the scaling of the derivatives between interpolating case(̋λ) = + f(λ) and the general case, and better bounds on the norm of(∂^p / ∂λ^p) C_κ, α(λ)require more specifics about the definition of(̋λ), we will stick to the interpolating case for the rest of the section. A condition on the numberr, and thereby the number of exponentials required in our gate-based counterdiabatic driving algorithm, is then given by the following lemma. Let (̋λ) = + f(λ) for hermitian and and a scalar function f(λ) defined on the interval [, ] that is 2k + 1 times differentiable. Let _η, a^M, q(, ), {τ_κ}, {τ_κ, α} and {w_κ, α} be as in lemma <ref>, and let _k, r(, ) be the kth-order r-segment LTS decomposition of _η, a^M, q(, ) as in eq. <ref>. Let the conditions of lemma <ref> be satisfied. Suppose max_1 ≤ p ≤ 2k + 1[ ( 2(1 - ^-η a)/η∂_λ^p _∞, ∞)^1 / p ] ≤Λ̃. If _n, 1≥ 3^-(q + 1)η/1 - ^-η a and ϵ≤min{(9/10)(5/3)^k Λ̃δλ, 1}, then _η, a^M, q(, ) - _k, r(, )≤ϵ provided that r ≥ 5k Λ̃δλ( 5/3)^k ( Λ̃δλ/ϵ)^1/2k. We merely need to bound the sum ∑_κ=-M^M ∑_α = 0^q ∂^p/∂λ^p C_κ, α(λ) to determine the Λ̃ quantity from theorem <ref>. Clearly, ∑_κ=-M^M ∑_α = 0^q ∂^p/∂λ^p C_κ(λ) = 1/2∂_λ^p+1(̋λ)∑_κ=-M^M ∑_α = 0^q δτ_κ w_κ, α ^-η|τ_κ, α|. We see that the sum on the right-hand side of this equation approximates the integral ∫_-a^a τ̣^-η|τ|. Considering only the positive half of the integration range, i.e. κ > 0, we have ∑_κ = 1^M ∑_α = 0^q δτ_κ w_κ, α^-ητ_κ, α = ∑_κ = 1^M [ ∫_τ_κ - 1^τ_κτ̣ ^-ητ + R_κ] = ∫_0^a τ̣ ^-ητ + R where _κ∈, R = ∑_κ = 1^M R_κ and, much like in the proof of lemma <ref>, |R_κ| ≤η^q + 1δτ_κ^q + 2/(q + 1)!^-ητ_κ≤η^q + 1/(q + 1)!( 2a/M)^q + 2. but since the conditions of lemma <ref> are assumed satisfied, we have 3^q + 1 (2a)^q + 2/M^q + 1(q + 1)!_∞, ∞^q + 1_n, 1≤ϵ≤ 1 as well as η≤min_λ(̋λ), so that |R| ≤η^q + 1(2a)^q + 2/M^q + 1(q + 1)!≤η^q + 1ϵ/3^q + 1_∞, ∞^q + 1_n, 1≤1/3^q + 1_n, 1. The premise that _n, 1≥ 3^-(q + 1)η/1 - ^-η a then directly implies that |R| ≤1 - ^-η a/η. The calculation for κ < 0 is identical, and we finally have ∑_κ=-M^M ∑_α = 0^q ∂^p/∂λ^p C_κ(λ) ≤2(1 - ^-η a)/η∂_λ^p + 1(̋λ). Therefore the condition Λ̃ in eq. <ref> is sufficient to fulfill the requirements of theorem <ref>, and the result follows immediately. The assumption on_n, 1is justified since3^-(q + 1)can be made arbitrarily small and we work in the regime whereηis small andais large (noteη/1 - ^-η aapproaches zero in the limita →∞,η→ 0). §.§ Gate complexity Finally, we turn to the quantum gate complxity of the gate-based CD algorithm. In order to bound this complexity, we need to associate with each path-independent operator exponential in the product formula_k, r(, )a gate complexity and sum over all operator exponentials. 
In the case of AQC, all these operator exponentials were assumed to be simulatable with a single multi-qubit rotation gate, and unit cost was associated with each simulation. This is not the case for CD, since the constituents of the LTS formulae are exponentials of(̋λ)and(λ). Simulation costs for these exponentials in terms of quantum gates are provided through established time-independent hamiltonian simulation methods. We use a technique known as qubitisation <cit.> for its optimality in the relevant parameters. In short, if an operator$̋ is given as a linear combiantion of unitaries =̋∑_i = 1^ℓβ_i V_i, qubitisation provides a routine that simulates ^±ıθ̋ to error ϵ in spectral norm using O(β_1 θ + log 1 / ϵ) queries to an oracle that provides access to the hamiltonian. Since this oracle can be implemented with O(ℓ) elementary gates <cit.>, we will say that the simulation can be done with O(β_1 θ + log 1 / ϵ) O(ℓ) gates. With a clear definition of gate complexity, we can now state the main theorem of this paper, establishing a gate complexity bound for our gate-based counterdiabatic driving algorithm. Suppose (̋λ) = + f(λ) = ∑_i = 1^ℓβ_i(λ) V_i with hermitian and and a scalar function f(λ) that is 2k + 1 times differentiable on the interval [, ], β_i(λ) ∈ and V_i unitary. Let {|n(λ)⟩} be the set of its instantaneous eigenstates. Suppose the spectrum of (̋λ) around an eigenstate |n(λ)⟩ has a gap of at least Δ_n on the entire interval. Let (, ) = ∑_n |n()⟩⟨n()|. Define γ_p = (√(2)Δ_n^-3 / 2_n, 1^1 / 2∂_λ^p _∞, ∞)^1 / p, p^* = _1 ≤ p ≤ 2k + 1 (ϵ^-1 / 2pγ_p) and Λ̃= ϵ^-1 / 2p^*γ_p^*. Furthermore, let ϵ > 0, q ∈_>0 and suppose that * ϵ≤min{[(9 / 10)(5 / 3)^k γ_p^* ( - )]^1 - 1/2p^* + 1, 1}; * _n, 1≥ 3^-2 (q + 1) / 3Δ_n ϵ^1 / 3; * min_λ∈ [, ](̋λ)≥Δ_n^3 / 2ϵ^1 / 2_n, 1^-1 / 2. Then there exists a quantum algorithm (, ) such that ((, ) - (, ))|n()⟩≤ O(ϵ) which can be implemented with O( ( 25/3)^k k (Λ̃δλ)^1 + 1/2k_n, 1^1/2 + 3/2(q + 1)/ϵ^1/2 + 1/2k + 3/2(q + 1)Δ_n^3/2 + 3/2(q + 1)β_1, ∞) quantum gates, where β_1, ∞ = max_λ∈ [, ]∑_i = 1^ℓ |β_i(λ)|. This gate complexity is identically upper bounded by O( ( 25/3)^k k (Λδλ)^1 + 1/2k_n, 1^1 + 1/4k + 3/2(q + 1)/ϵ^1 + 3/4k + 3/2(q + 1)Δ_n^3 + 3/4k + 3/2(q + 1)β_1, ∞) where Λ = max_1 ≤ p ≤ 2k + 1∂_λ^p _∞, ∞^1 / p. Consider _k, r(, ) as in lemma <ref>, as well as _η, a(, ) for η, a > 0 as in lemma <ref> and _η, a^M, q for M ∈_>0 as in lemma <ref>. Let _k, r^ sim be the simulated version of _k, r, that is a version of _k, r which replaces all (path-independent) operator exponentials with their (qubitisation) simulations. Clearly, if * (_η, a(, ) - (, )) |n()⟩≤ O(ϵ), * (_η, a^M, q(, ) - _η, a(, )) |n()⟩≤ O(ϵ), * (_k, r(, ) - _η, a^M, q(, )) |n()⟩≤ O(ϵ) and * (_k, r^ sim(, ) - _k, r(, )) |n()⟩≤ O(ϵ) then (_k, r(, ) - (, )) |n()⟩≤ O(ϵ) by a triangle inequality. If we set η = 1/√(2)Δ_n^3/2ϵ^1/2_n, 1^-1/2 and a = 1/ηlog( 2 Δ_n + η/Δ_nϵη_n, 1) as in eq. <ref>, then by lemma <ref>, condition (i) is satisfied. By assumption (c), min_λ∈[, ](̋λ)≥η, so according to lemma <ref>, condition (ii) can be made true by setting parameters {τ_κ}, {w_κ, α} and M as in eqs. <ref>, <ref> and <ref>. Furthermore, assumption (b) implies that _n, 1≥ 3^-(q + 1)η≥ 3^-(q + 1)η/1 - ^-η a; assumption (c) implies that ϵ≤ (9 / 10)(5 / 3)^k Λ̃( - ); and the choice of γ_p and Λ̃ ensures that eq. <ref> holds. Therefore, by lemma <ref>, condition (iii) is fulfilled if we set r according to eq. <ref>. Under these conditions, the error bound in eq. <ref> is thus achieved. 
What remains is to calculate the number of quantum gates required to simulate _k, r(, ) to precision ϵ– that is, implement _k, r^ sim(, ) such that condition (iv) is fulfilled – given the settings of η, a, M, q, k and r. For this purpose, we write 𝒞_ϵ(·) for the gate complexity of simulating a unitary to precision ϵ in spectral norm. We first compute the gate complexity for the simulation of the first-order, single-segment decomposition _1, 1(, ), as given in eq. <ref>, and subsequently extend this to kth-order, r-segment formulas Ũ_k, r(, ). For this purpose, we conveniently relabel the κ, α indices with an additional subscript index i = 1, …, M(q + 1), as follows: (κ_1, α_1) = (-M, 0), (κ_i + 1, α_i + 1) = (κ_i, α_i + 1) if α < q and (κ_i + 1, 0) otherwise. Now, an important observation is that the exp[±ı H τ_κ, α] exponentials surrounding each exp[-ı B_κ, αδλ] (see eqs. <ref>–<ref>) in the product formula partially cancel out, since exp[-ı C_κ_i, α_iδλ] exp[-ı C_κ_i + 1, α_i + 1δλ] = exp[-ı H τ_κ_i, α_i] exp[-ı B_κ_i, α_iδλ] exp[ı H τ_κ_i, α_i] exp[-ı H τ_κ_i + 1, α_i + 1] exp[-ı B_κ_i + 1, α_i + 1δλ] exp[ı H τ_κ_i + 1, α_i + 1] = exp[-ı H τ_κ_i, α_i] exp[-ı B_κ_i, α_iδλ] exp[-ı H (τ_κ_i + 1, α_i + 1 - τ_κ_i, α_i)] exp[-ı B_κ_i + 1, α_i + 1δλ] exp[ı H τ_κ_i + 1, α_i + 1]. As such, we find that the task of simulating _1, 1(, ) to precision ϵ has gate complexity (taking δλ = -) 𝒞_ϵ(_1, 1(, )) = 2 ( ∑_i = 1^M(q + 1)𝒞_ϵ̃(exp[-ı B_κ_i, α_i( + δλ / 2) δλ / 2]) + ∑_i = 1^M(q + 1) - 1𝒞_ϵ̃(exp[-ı H( + δλ / 2) (τ_κ_i + 1, α_i + 1 - τ_κ_i, α_i)] + 𝒞_ϵ̃(exp[-ı H τ_-M, 0]) + 𝒞(exp[-ı H τ_M, q]) ) ≤∑_κ = -M^M ∑_α = 0^q O( 1/2^-η|τ_κ, α| w_κ, αδτ_κ∂_λβ( + δλ / 2)_1 δλ + log1/ϵ̃) O(ℓ) + ∑_i = 1^M(q + 1) - 1 O( β( + δλ / 2)_1 (τ_κ_i + 1, α_i + 1 - τ_κ_i, α_i) + log1/ϵ̃) O(ℓ) + 2 O(β( + δλ / 2)_1 a + 1/ϵ̃) O(ℓ) ≤ O( 1 - ^-η a/η∂_λβ_1, ∞δλ + β_1, ∞ a + M(q + 1) log1/ϵ̃) O(ℓ) where ϵ̃ will be specified shortly, and we used the fact that 1/2∑_κ, α^-η|τ_κ, α| w_κ, αδτ_κ≤ 21 - ^-η a/η from the proof of lemma <ref>. Now, to generalise this to _k, r, observe that a kth-order LTS formula is a product of 5^k - 1 first-order formulas and that an r-segment formula is a product of r single-segment formulas. At the same time, the integration interval δλ is divided into smaller subintervals such that the sum of the lengths of the subintervals over all first-order, single-segment factors is exactly δλ. Therefore, only the second and the third term in the last line of eq. <ref> grow with k and r, and we obtain 𝒞_ϵ(_k, r(, )) ≤ O( 1 - ^-η a/η∂_λβ_1, ∞δλ + 5^k - 1 r β_1, ∞ a + 5^k - 1 r M (q + 1) log1/ϵ̃) O(ℓ). We will now specify ϵ̃. Evidently, _k, r(, ) is a product of O(5^k - 1 r M (q + 1)) (time-independent) operator exponentials. For this product to be simulated to precision O(ϵ) in operator norm, we require all constituent operator exponentials to be simulated to precision O(ϵ / (5^k - 1 r M (q + 1))), since errors (in the sense of spectral distance) add up at most linearly by the telescoping property of the spectral norm. This means that we must set ϵ̃= ϵ / (5^k - 1 r M (q + 1)) to ensure that condition (iv) is satisfied. We proceed to insert the values for M from lemma <ref> (eq. <ref>) and r from lemma <ref> (eq. <ref>). Since r grows superlinearly with Λ̃, Λ̃ grows with η^-1 and M grows superlinearly with a, the third term in eq. 
<ref> is dominant in the regime of small η and large a; therefore 𝒞_ϵ(_k, r(, )) ≤ O( ℓ 5^k - 1 r M (q + 1) log1/ϵ̃) ≤Õ( ℓ( 25/3)^k k (Λ̃δλ)^1 + 1 / 2ka^1 + 1 / (q + 1)/ϵ^1 / 2k + 1 / (q + 1)β_1, ∞∂_λ_n, 1^1 / (q + 1)). Note that the log 1 / ϵ̃ has been absorbed into the Õ(⋯) notation, since the gate complexity scales polynomially in 1 / ϵ. Lastly, we insert a = Õ(η^-1) with the previously specified expression η = 1/√(2)Δ_n^3/2ϵ^1/2_n, 1^-1/2 to find 𝒞_ϵ(_k, r(, )) ≤ O( ℓ( 25/3)^k k (Λ̃δλ)^1 + 1/2k_n, 1^1/2 + 3/2(q + 1)/ϵ^1/2 + 1/2k + 3/2(q + 1)Δ_n^3/2 + 3/2(q + 1)β_1, ∞) which is exactly eq. <ref>. Finally, we observe that Λ̃ grows at least linearly in η^-1, and that Λ̃≤ 2η^-1Λ. Inserting this inequality into eq. <ref>, we obtain the upper bound 𝒞_ϵ(_k, r(, )) ≤ O( ℓ( 25/3)^k k (Λδλ)^1 + 1/2k_n, 1^1 + 1/4k + 3/2(q + 1)/ϵ^1 + 3/4k + 3/2(q + 1)Δ_n^3 + 3/4k + 3/2(q + 1)β_1, ∞) which is exactly eq. <ref>. The theorem is thus proved. § CONCLUSION AND OUTLOOK In this work, we have presented what is, to the best of our knowledge, the first fully gate-based quantum algorithm for counterdiabatic driving. This algorithm is constructed from the regularised truncated adiabatic gauge potential (eq. <ref>). By discretising the integral form of this AGP approximation, it is fed into a Lie-Trotter-Suzuki formula to produce a gate-based algorithm that may be run on a quantum computer. We have shown that this algorithm, starting from an initial eigenstate |n()⟩, requires O(ϵ^-(1 + o(1))Δ_n^-(3 + o(1)))) quantum gates in order to achieve a fidelity at least 1 - ϵ^2 with the target eigenstate |n()⟩ (theorem <ref>). Here, Δ_n is the minimum energy gap around the instantaneous eigenstate |n(λ)⟩. The o(1) scaling is an inverse linear dependence on the order k of the LTS formula (see eqs. <ref> and <ref>) and the degree q of the Lagrange interpolation polynomial used in the discretisation of the integral form. We remark that q can be made large cheaply using purely classical precomputation of the quadrature weights. At the same time, a gate-based formulation of adiabatic quantum computing in terms of LTS formulae also yields an algorithm running with O(ϵ^-(1 + o(1))Δ_n^-(3 + o(1)))) quantum gates (theorem <ref>). In this case, o(1) scaling is only a dependence on k, which is slightly worse than in CD. As such, we have shown a near equivalence in gate complexity between the gate-based counterdiabatic driving and gate-based adiabatic computing algorithms, calling into question the perception of counterdiabatic driving as a general “shortcut to adiabaticity”. However, this does not mean that counterdiabatic driving should be considered irrelevant. After all, there may exist more efficient gate-based CD algorithms or better complexity bounds. Our results only suggest that there may not be much to gain in a general, gate-based, worst-case setting; since CD is essentially a coordinate change from time to λ space, some kind of “no free lunch” phenomenon seems plausible. Nonetheless, proving such a statement would require tight lower bounds; to our knowledge, the best complexity lower bound for AQC scales only linearly in the inverse gap <cit.>. Furthermore, CD can still be valuable in settings where quantum resources are scarce. For example, in noisy setups where the ability to work on small timescales is paramount to countering noise, adiabaticity is not available; achieving satisfactory fidelities then necessitates the use of some kind of classically precomputed shortcut field. 
Finally, we remark that, in order to obtain the presented gate complexity of CD, it turned out crucial to tailor the AGP approximation to the eigenstate of interest, through an appropriately chosen nonzero gap cutoff. This insight that it is unnecessary to suppress all transitions in the spectrum allowed us to reduce the complexity down from the dimensionality of the operator space to a scaling in the minimum gap around that specific eigenstate. While this idea has been recognised <cit.>, it was not taken into account in the currently most prevalent algorithmic CD approaches <cit.>. As such, we would like to stress its importance once again, and hope to see it replicated in future work on counterdiabatic driving. § ACKNOWLEDGMENTS The author thanks Kareljan Schoutens for his suggestion of the differential equation ansatz in eq. <ref>. Additional thanks go to Jan Střeleček and Takuya Hatomura for insightful discussions. This work was supported by the Dutch Ministry of Economic Affairs and Climate Policy (EZK), through the Quantum Delta NL programme. § LAGRANGE INTERPOLATION AND SCALAR QUADRATURE Integrals of scalar functions may be approximated as weighted sums of function values through Lagrange interpolation. The idea of this interpolation method is to approximate a function f(x) in some interval [a, b] by some polynomial p_q(x) of degree at most q such that the interpolation condition f(x_α) = p_q(x_α), α = 0, ⋯, q, is satisfied for a set of q + 1 points x_0, ⋯, x_q ∈ [a, b]. It can be shown that such a polynomial is unique, and is given by p_q(x) = ∑_α = 0^q f(x_α) l_α(x) where l_α(x) = ∏_0≤β≤ q, β≠αx - x_β/x_α - x_β are the Lagrange basis polynomials. It is straightforward to check that l_α(x_β) = δ_αβ, which also verifies the interpolation condition (eq. <ref>). Clearly, if f itself is a polynomial of degree at most q, then p_q = f since p_q is unique, and the interpolation is exact. For other functions, it may be shown <cit.> that the remainder r_q(x) = f(x) - p_q(x) at any point in [a, b] is given by r_q(x) = l̃_q(x) f^(q + 1)(ξ)/(q + 1)! for some ξ∈ [a, b], where l̃_q(x) = ∏_α = 0^q (x - x_α). The integral ∫_a^b f(x) x̣ is then approximated as ∫_a^b f(x) x̣≈∫_a^b p_q(x) x̣ = ∑_α = 0^q f(x_α) w_α, where w_α = ∫_a^b l_α(x) x̣. Note that ∑_α = 0^q w_α = b - a; this can be seen by taking f(x) = 1, in which case the approximation is exact (because f is a polynomial of degree zero). For general functions, the error is R = ∫_a^b r_q(x) x̣ = f^(q + 1)(ξ)/(q + 1)!∫_a^b l̃_q(x) x̣. Clearly, |R| is bounded above by |R| ≤(b - a)^q + 2max_ξ∈ [a, b] |f^(q + 1)(ξ)|/(q + 1)!.
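The quadrature weights w_α above are straightforward to compute numerically. The following is a minimal sketch (not part of the paper) assuming equispaced interpolation nodes (the text only requires q + 1 distinct points in [a, b]) and using NumPy's exact polynomial integration; the node placement and the test values are illustrative only.

```python
# Minimal numerical sketch of the quadrature rule above; assumes equispaced
# nodes (any q+1 distinct points in [a, b] would do) and NumPy. Illustrative
# only; not part of the original paper.
import numpy as np

def lagrange_weights(nodes, a, b):
    """Quadrature weights w_alpha, the integral of l_alpha(x) over [a, b]."""
    nodes = np.asarray(nodes, dtype=float)
    q = len(nodes) - 1
    weights = np.zeros(q + 1)
    for alpha in range(q + 1):
        # Lagrange basis polynomial l_alpha(x) = prod_{beta != alpha} (x - x_beta)/(x_alpha - x_beta).
        basis = np.poly1d([1.0])
        for beta in range(q + 1):
            if beta != alpha:
                basis = basis * np.poly1d([1.0, -nodes[beta]]) / (nodes[alpha] - nodes[beta])
        # Integrate the degree-q basis polynomial exactly over [a, b].
        antiderivative = basis.integ()
        weights[alpha] = antiderivative(b) - antiderivative(a)
    return weights

a, b, q = 0.0, 1.0, 4
nodes = np.linspace(a, b, q + 1)
w = lagrange_weights(nodes, a, b)
print(w.sum())                     # = b - a = 1, exactness on f(x) = 1
print(np.dot(w, nodes**3), 1 / 4)  # exact for any polynomial of degree <= q
```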
http://arxiv.org/abs/2406.09101v1
20240613132942
V-static metrics and the volume-renormalised mass
[ "Stephen McCormick" ]
math.DG
[ "math.DG", "gr-qc", "53C21, 53C25, 83C99" ]
V-static metrics and the volume-renormalised mass
Stephen McCormick
Institutionen för teknikvetenskap och matematik, Luleå tekniska universitet, 971 87 Luleå, Sweden
stephen.mccormick@ltu.se
§ ABSTRACT V-static metrics generalise the notion of static metrics, and stem from the work of Miao and Tam <cit.>, and Corvino, Eichmair and Miao <cit.> on critical points of the volume functional over the space of compact manifolds with constant scalar curvature. In this article we show that these V-static metrics arise naturally in the context of asymptotically hyperbolic manifolds as critical points of the volume-renormalised mass, recently introduced by Dahl, Kröncke and the author <cit.>. In particular, we show that critical points of the volume-renormalised mass over the space of constant scalar curvature asymptotically hyperbolic manifolds without boundary, or satisfying appropriate boundary conditions, are exactly V-static metrics. This is directly analogous to the relationship between critical points of the ADM mass and static metrics for asymptotically flat manifolds. § INTRODUCTION In recent work, Dahl, Kröncke and the author introduced a new geometric quantity defined for asymptotically hyperbolic manifolds, which we call the volume-renormalised mass <cit.>. It is essentially a linear combination of the renormalised volume and a surface integral at infinity closely resembling the ADM mass for asymptotically flat manifolds (see Definition <ref> below for the precise definition). Among other results, in <cit.> we showed that on the set of complete constant scalar curvature asymptotically hyperbolic metrics without boundary, critical points of the volume-renormalised mass correspond exactly to Einstein metrics. In this article, critical points and local extremisers of the volume-renormalised mass are further explored, and we characterise them more generally as V-static metrics (see Definition <ref>, below). These V-static metrics should be viewed as a generalisation of static metrics in the context of initial data for general relativity. It should be remarked that the class of asymptotically hyperbolic manifolds we consider here imposes slower decay conditions than what is usually considered in the context of mathematical general relativity, as the volume-renormalised mass is defined for metrics with slower decay than the standard asymptotically hyperbolic mass <cit.>. In particular, under our decay assumptions, the standard asymptotically hyperbolic mass is not well-defined. Roughly speaking, we are interested in metrics that are asymptotic to a fixed conformally compact asymptotically hyperbolic Einstein manifold at a rate of o(ρ^(n-1)/2), where ρ is the boundary defining function and n is the dimension of the manifold. See Section <ref> for the precise definitions. These V-static metrics that we are interested in stem from the work of Miao and Tam <cit.>, and Corvino, Eichmair and Miao <cit.> as critical points of the volume functional over the space of compact manifolds with constant scalar curvature. We give a definition now.
A V-static metric is Riemannian metric g admitting a non-trivial solution (f,λ) to D^*_g(f)=λ g where λ∈ℝ and the adjoint of the linearised scalar curvature operator is given by D^*_g(f)=-Δ_g(f)g+∇^2_g(f)-f_g. We call such an f the static potential for g. In the case of asymptotically hyperbolic manifolds, which we consider here, we will further ask that a V-static potential be bounded. Analogous to static metrics on asymptotically flat manifolds, this then implies that f must be asymptotic to a constant, which depends on λ (see Corollary <ref>, below). Clearly rescaling a solution to (<ref>) by a constant results in another solution to (<ref>) with λ rescaled by the same constant. For this reason we can assume without loss of generality that bounded V-static potentials are asymptotic to 1 at infinity, which corresponds to fixing λ=n-1. Note that V-static metrics generalise the notion of static metrics, which are solutions with λ=0 (and asymptotically hyperbolic manifolds cannot have the static potential f asymptotic to a constant in this case). For complete asymptotically hyperbolic manifolds without boundary, the metric is V-static if and only if it is Einstein (Proposition <ref>, below). In this case the equivalence of V-static metrics with critical points of the volume-renormalised mass on the space of constant scalar curvature metrics simply recovers the result of Dahl, Kröncke and the author <cit.> equating critical points with Einstein metrics. For comparison, recall that the critical points of the ADM mass on the space of scalar flat, complete asymptotically flat manifolds without boundary are exactly Ricci flat metrics. However, it is now well-known that minimisation of the ADM mass is closely related to boundary static metrics, which in the case of a complete asymptotically flat manifold without boundary coincide with Ricci flat metrics. We now state a simplified version of the main results of this article. For more precise statements of Theorems <ref> and <ref>, the reader is directed to Theorems <ref> and <ref>, respectively. [Theorem <ref>] Let (M,g) be an asymptotically hyperbolic manifold without boundary. The following three statements are equivalent: * (M,g) is a local extrema of the volume-renormalised mass on the space of constant scalar curvature metrics, * (M,g) is a V-static, * (M,g) is Einstein. The above result in fact is straightforward to establish from <cit.> and the work of Corvino, Eichmair and Miao <cit.> since the only V-static potential in this case is the constant function 1. However, we prove it in such a way that emphasises the role of the V-static potential and readily generalises to the case of a manifold with boundary (Theorem <ref>). In that case, when the manifold has boundary, V-static metrics are distinct from Einstein metrics. Natural boundary conditions for this problem are to fix the Bartnik boundary data, ( M,g_ M,H), where g_ M is the induced metric and H is mean curvature of the boundary. We show the following. [Theorem <ref>] Let (M,g) be an asymptotically hyperbolic manifold with boundary. The following two statements are equivalent: * (M,g) is a local extrema of the volume-renormalised mass on the space of constant scalar curvature metrics with fixed Bartnik data, * (M,g) is a V-static. The general procedure we use to prove both Theorem <ref> and <ref> is a now somewhat standard Lagrange multipliers argument <cit.>, originally due to Bartnik in the study of critical points of the ADM mass <cit.>. 
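To illustrate the normalisation λ = n-1 fixed above, one can check directly from the definition of D^*_g that the constant function f ≡ 1 is a V-static potential on any Einstein manifold with Ricci curvature equal to -(n-1)g. The one-line verification below is added here for convenience and is not part of the original argument; it anticipates the converse statement noted later in the text.

```latex
% For f \equiv 1 the Hessian and Laplacian terms vanish, so that
\[
  D^*_g(1) \;=\; -\Delta_g(1)\,g + \nabla^2_g(1) - 1\cdot\mathrm{Ric}_g
           \;=\; -\mathrm{Ric}_g \;=\; (n-1)\,g ,
\]
% i.e. the pair (f, \lambda) = (1, n-1) solves the V-static equation.
```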
The main technical aspect in carrying out this type of argument lies in proving that the linearised scalar curvature operator (augmented with a boundary map in the case of Theorem <ref>) is surjective. Conveniently, this has already been addressed in the case of a manifold without boundary by Huang, Jang and Martin <cit.> (and <cit.> for the precise asymptotics we use). Furthermore, the inclusion of a boundary satisfying the boundary conditions we consider here has been recently established by Huang and Jang <cit.>, although they impose decay conditions that are too strong for studying the volume-renormalised mass. Thankfully, their work only requires minor modifications to be extended to the natural decay rates for the volume-renormalised mass so we do not need to repeat the full analysis here. Instead we can directly use the results to apply the Lagrange multipliers theorem. The structure of this article is as follows. In Section <ref>, we recall some basic definitions and set up the function spaces used throughout. Section <ref> then motivates the relationship between V-static metrics and the volume-renormalised mass with some elementary results that follow directly from the study of V-static metrics on compact manifolds. In Section <ref>, we prove Theorem <ref> (Theorem <ref>) and outline the Lagrange multipliers argument from which the main theorems follow. Finally, in Section <ref>, the proof of Theorem <ref> (Theorem <ref>) is given. § PRELIMINARIES Throughout this article we will use n≥3 to denote the dimension of the asymptotically hyperbolic manifolds on which we work. When we speak of mean curvature, we take it to be the trace of the second fundamental form with respect to the normal ν pointing towards the asymptotic end of our manifold. We take the Laplacian to be the trace of the Hessian, which we remark differs from <cit.> wherein we defined the volume-renormalised mass, but appears more common in the literature on static metrics. We begin by defining a suitable notion of reference manifold. Let (N,h) be a closed (n-1)-dimensional manifold and k∈{-1,0,1} a constant. An asymptotically hyperbolic reference manifold will be taken to be a manifold M=(r_k,∞)× N equipped with a metric =1/r^2+kdr^2+r^2h that is conformally compact with all sectional curvature asymptotic to -1 at infinity, where r_k=0 if k=0 or k=1 and r_k=1 if k=-1. Furthermore, we will ask that a reference manifold be asymptotically Poincaré Einstein (APE) in the sense of Definition <ref> (below). This definition follows that of <cit.> as we rely heavily on their analysis. However, a major point of difference between this and the work there – and indeed the majority of the work related to asymptotically hyperbolic manifolds – is the rate at which g decays to . It is standard to ask that (g-)=O(r^-τ) for some τ>n/2, as this is required for the usual definition of the mass of an asymptotically hyperbolic manifold to be well-defined. However, the volume-renormalised mass only requires τ>n-1/2 to be well-defined (and a weaker integrability condition for the scalar curvature) <cit.>. In order to make the decay rates precise, we make use of weighted Hölder spaces C^k,α_δ=r^-δC^k,α, equipped with the standard norm u_k,α,δ=r^δ u_C^k,α. This follows the convention that a function in C^k,α_δ is O(r^-δ). Weighted Hölder spaces of sections of bundles are defined analogously (see Lee <cit.>, for example). 
Throughout this article we fix some τ∈(n-1/2,n), which will serve as the rate of decay of a metric g towards a fixed reference metric. The lower bound suffices to ensure that the volume-renormalised mass is well-defined, and the upper bound is a requirement to ensure that (Δ-n) is an isomorphism from C^k,α_τ to C^k-2,α_τ <cit.>. We will say a reference manifold (M,) is asymptotically Poincaré–Einstein (APE) if |_+(n-1)|_∈ C^k-2,α_τ. It is required that the reference manifold be APE in order to ensure that the volume-renormalised mass with respect to that manifold is well-defined under the appropriate scalar curvature integrability condition. We will therefore impose this additionally throughout, on the fixed (M,). A smooth connected Riemannian manifold (M,g) is said to be asymptotic to the reference manifold (M,) if there exist compact sets K⊂ M and K⊂M and a diffeomorphism φ:M∖K→ M∖ K, such that φ^* g - ∈ C^2,α_τ(S^2T^*(M∖K)), where S^2T^*(M∖K) is the bundle of symmetric bilinear forms on M∖K Throughout this article we will slightly abuse notation while we work on by omitting reference to this diffeomorphism when working with g and on the asymptotic end. The space of asymptotically hyperbolic Riemannian metrics on M that we consider throughout will be denoted by ℛ^k,α_τ={g | g-_0 ∈ C^2,α_τ(S^2T^*M),g>0 } where _0 denotes some extension of to the whole manifold M. The volume-renormalised mass of a Riemannian manifold (M,g) that is asymptotic to a reference manifold (M, ) is defined as _VR,(g)= lim_R→∞( ∫_S_R(^i(g_ij)-_j(^ikg_ik))ν^j dS_. .+ 2(n-1)(∫_B_RdV_g-∫_B_RdV_)), where S_R is a sphere of radius R in M∖ K ≅M∖K, and B_R and B_R are the regions bounded by S_R in M and M respectively. It was shown in <cit.> (Theorem 3.1 therein) that _VR,(g) is well-defined and finite for g∈ℛ^2,α_τ provided that _g+n(n-1)∈ L^1(M). Assuming the manifolds are conformally compact and a mild assumption on the conformal boundary, the volume-renormalised mass was also shown to be independent of the diffeomorphism used to define it (Theorem 3.18 of <cit.>). As mentioned in the Introduction, the main results here rely on a Lagrange multipliers argument. The Lagrange multiplier Theorem for Banach manifolds is a classical textbook result, however we state explicitly the version we use here for reference. A proof of which can be found in Appendix D of <cit.>, for example. Let X and Y be Banach spaces and :X→ Y a C^1 map. Assume that for each x∈^-1(0), the linearisation D_x:X→ Y is surjective. Then if D_xℋ[v]=0 for all v∈(D_x) for some x∈^-1(0) and C^1 functional ℋ:X→ℝ, there exists λ∈ Y^* such that D_xℋ[v]=λ(D_x[v]) for all v∈ X. § PROPERTIES OF V-STATIC METRICS This section contains essentially a reinterpretation of some results due to Corvino, Eichmair and Miao <cit.>, which although were obtained in the case of compact manifolds, in fact shed some light on the connection between V-static metrics and the volume-renormalised mass. The following proposition, proven in <cit.>, follows the same argument as the static case <cit.>, and importantly, is a local argument so applies also to asymptotically hyperbolic manifolds. If (M,g) admits a non-trivial weak solution f∈ H^1_loc to (<ref>), then _g is constant. That is, an asymptotically hyperbolic V-static manifold has scalar curvature equal to -n(n-1). Still following the compact case (cf. example 1.3 of <cit.>), taking the trace of (<ref>) then implies that a V-static potential f must satisfy (-Δ_g+n)(f-1)=0. 
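Spelling out that trace computation may be helpful. The following is a short sketch added here (not in the original text); it uses the convention Δ_g = tr_g ∇^2_g from the preliminaries, the normalisation λ = n-1 fixed in the introduction, and the value R_g = -n(n-1) of the scalar curvature from the proposition above.

```latex
% Trace of the V-static equation D^*_g(f) = (n-1) g:
\[
  \mathrm{tr}_g\, D^*_g(f)
    = -n\,\Delta_g f + \Delta_g f - f\,R_g
    = -(n-1)\,\Delta_g f + n(n-1)\,f ,
\]
% while the trace of the right-hand side is
\[
  \mathrm{tr}_g\big((n-1)\,g\big) = n(n-1)
  \quad\Longrightarrow\quad
  -\Delta_g f + n\,f = n
  \quad\Longleftrightarrow\quad
  (-\Delta_g + n)(f-1) = 0 .
\]
```

In particular, u = f - 1 satisfies (-Δ_g+n)u = 0.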
Since (-Δ_g+n):C^2,α_δ→ C^0,α_δ is an isomorphism for δ∈(-n,1) <cit.>, we have u≡ 0. That is, we have the following. Let f satisfy (<ref>) with f-1∈ C^2,α_δ for δ∈(-n,1) on some asymptotically hyperbolic manifold (M,g) without boundary. Then f≡1 and _g=(1-n)g. Note that the converse is also true. That is, if _g=(1-n)g then f≡1 solves (<ref>). Note that if M has an interior boundary then this argument no longer holds, and V-static is distinct from Einstein, just as with the equivalence between static asymptotically flat and Ricci flat. That is, unless we impose additional restrictions such as f≡ 1 on the boundary. In this case, Dirichlet boundary conditions ensure that (-Δ_g+n) is still an isomorphism between weighted Hölder spaces <cit.>, so we also have the following. Let f satisfy (<ref>) with f-1∈ C^2,α_δ for δ∈(-n,1) on some asymptotically hyperbolic manifold (M,g) with boundary. Assume further that f≡ 1 on M, then f≡1 and _g=(1-n)g. A key result of <cit.> is a local deformation result (see Theorems 1.1 and 1.2 therein), demonstrating that if a metric g is not V-static then g can deformed locally on an open set in such a way to make small prescribed changes to scalar curvature and volume simultaneously. We don't need the full power of the result here, and the precise statement is somewhat technical. For this reason we do not explicitly state the full theorem. However, we do state the following corollary of their main result. If g∈ℛ^4,α_τ minimises the volume-renormalised mass with respect to (M,) on some asymptotically hyperbolic manifold M (with or without interior boundary), then g must be V-static. Let (M,g) be asymptotically hyperbolic with well-defined volume-renormalised mass with respect to (M,). If g were not V-static then we can take some open domain U⊂⊂ M where g is not V-static. By Theorem 1.2 of <cit.> we can find a new metric g̃ satisfying the following properties: * g̃ exactly agrees with g outside of U, * _g̃≡_g everywhere on M, and * the volume of U with respect to g̃ is strictly less than the volume of U with respect to g. It immediately follows that g̃ would have strictly smaller volume-renormalised mass than g, and therefore g cannot minimise the volume-renormalised mass if it is not static. The above corollary is already a first demonstration of the volume-renormalised mass-minimising property of V-static metrics. It is worth remarking that the proof is somewhat more straightforward than the analogous result for the ADM mass and static asymptotically flat manifolds (cf. Theorem 8 of <cit.>), as we can decrease the volume-renormalised mass without affecting the boundary term at infinity. In this article, we aim to understand this connection analogously to Bartnik's variational approach to ADM mass minimisers <cit.>. § THE CASE OF NO BOUNDARY – EINSTEIN METRICS ARE MASS MINIMISERS As discussed, the relationship between V-static metrics and the volume-renormalised mass can be viewed analogously to the relationship between static metrics and the ADM mass. In fact, it is not only true that many results have direct analogues but also the proofs follow by essentially the same arguments too. Following Bartnik's approach to the ADM <cit.>, we define an analogue of the Regge–Teitelboim Hamiltonian and will apply a Lagrange multipliers argument to it. This modified Regge-Teitelboim functional is given by ℋ(g)=_ VR,(g)-∫_M f( _g+n(n-1) ) dV_g where f is a function asymptotic to 1 at infinity and we have suppressed reference to and f in the notation. 
Although neither term in (<ref>) is finite for general g∈ℛ^2,α_τ, in light of the renormalised Einstein–Hilbert action defined in <cit.>, we can quickly convince ourselves that the dominant terms in each should cancel out resulting in something finite when appropriately formulated (see Theorem <ref> below). It should be noted that while the standard Regge–Teitelboim Hamiltonian generates the correct equations of motion for the Einstein equations, ℋ defined by (<ref>) cannot be viewed as a genuine Hamiltonian. However, ℋ is closely related to a reduced Hamiltonian first developed by Fischer and Moncrief <cit.>, as indicated in a forthcoming article of Dahl, Kröncke and the author <cit.>. Nevertheless, ℋ still plays the role of a Lagrange function for extremising _ VR, subject to the constraint _g+n(n-1)=0, and the Lagrange multiplier we find in this process corresponds to the V-static potential that the minimiser admits. In this section we consider the case where (M,g) has no interior boundary, and then in the following section deal with an interior boundary separately although the argument in both cases is essentially the same. We will make use of the following densitised and normalised scalar curvature ℜ(g)=(_g+n(n-1) )dV_g so that we can write (<ref>) as ℋ(g)=_ VR,(g)-∫_M f ℜ(g). This form of the Lagrange function emphasises the role of f as the Lagrange multiplier, and as the constraint map. We will be interested in the constraint set 𝒞_o=^-1(0)={ g∈ℛ^2,α_τ | _g=-n(n-1) } of constant scalar curvature asymptotically hyperbolic metrics, asymptotic to (M,). We omit reference to α and τ∈(n-1/2,n) for the sake of notational brevity and the subscript o is used to differentiate the constraint set from when we consider a similar set for a manifold with boundary. As mentioned in the introduction, the key ingredients for this work come from the work of Huang, Jang, and Martin <cit.> who used similar arguments to prove rigidity of the standard asymptotically hyperbolic mass, and the work of Huang and Jang <cit.> carrying out an analogous analysis for the case with boundary. Unfortunately, since the standard asymptotically hyperbolic mass requires faster decay than we consider here, the metrics considered in <cit.> and <cit.> decay to the reference metric at a rate τ>n/2, which is too strong for our purposes. However, there are fortunately no obstacles preventing their arguments from applying to the case we considered here. In fact, the proofs of most results we require here follow unchanged for our decay rates. In this section we consider the case where (M,g) has no interior boundary, and then in the following section deal with an interior boundary separately although the argument in both cases is essentially the same. Crucial to applying the method of Lagrange multipliers is that the linearisation of the constraint map – in this case _g+n(n-1) – is surjective. This is where the technical difficulty lies, however in the case considered here this was established already by Huang, Jang, and Martin <cit.> for asymptotically hyperbolic manifolds with the standard decay (τ∈(n/2,n)) and Dahl, Kröncke and the author <cit.> for the decay rates considered here. In particular we have: For g∈𝒞_o, the map D_g:T_g𝒞_o→ C^0,α_τ(Λ^3T^*M) is surjective. Surjectivity of the linearised scalar curvature operator was established in the aforementioned work of Huang, Jang, and Martin <cit.> assuming the standard decay for g, namely τ∈(n/2,n). 
However, this is one of the results mentioned above that apply in our case by the same proof given there, verbatim (the case considered here is also demonstrated in <cit.>). Since g∈𝒞_o and we have D_g[h]=D_g[h]dV_g+1/2_g(h)(_g+n(n-1))dV_g, surjectivity of D_g follows immediately from the surjectivity of the linearised scalar curvature operator. We will also need the following result of Huang, Jang and Martin to control the decay of a V-static potential. Let g∈ℛ^2,α∩ C^∞_loc and V∈ C^,α_loc(M∖ K), for some compact set K∈ M, satisfy D_g^*[V]=Z, for Z∈ C^0,α_-s(M∖ K), where s>0. Then one of the following holds: * there is some cone U⊂ M∖ K and a constant C>0 such that C^-1|x|≤ |V(x)|≤ C|x|, for all x∈ U, or * there are constants C>0 and 0>d≤1 such that |V(x)|≤ C|x|^-d for all x∈ M∖ K. Note that in <cit.>, Z is taken to be in a specific weighted Hölder space with decay relayed to the rate τ at which g decays. However, this too is only because it is the decay required there so we state the result in the generality that the proof provides. Note that we need to require that g be smooth in the above. This is because the proof relies on standard elliptic regularity theory, and for the same reason we will need it for our main results. We have the following straightforward corollary of Theorem <ref> that shows V-static potentials are either asymptotically constant of grow linearly on a cone analogous to static potentials on asymptotically flat manifolds <cit.>. Let g∈ℛ^2,α∩ C^∞_loc and V∈ C^,α_loc(M∖ K), for some compact set K∈ M, satisfy D_g^*[V]-λ g=Z, for Z∈ C^0,α_-s(M∖ K), where s>0. Then one of the following holds: * there is some cone U⊂ M∖ K and a constant C>0 such that C^-1|x|≤ |V(x)|≤ C|x|, for all x∈ U, or * there are constants C>0 and 0>d≤1 such that |V(x)-λ/n-1|≤ C|x|^-d for all x∈ M∖ K. In particular, a V-static potential either grows linearly on a cone or is asymptotically constant with the constant given by λ/(n-1) with λ as in equation (<ref>). Set u=V-λ/n-1 then note that we have D_g^*[u]=Z+λ/n-1(_g+(n-1)g ). Since g is APE, The result follows directly from Theorem <ref> applied to u. We are now ready to carry out the main proofs of this section, beginning by demonstrating the that the modified Regge–Teitelboim Hamiltonian is well-defined when suitably regularised. The functional ℋ defined by (<ref>) with (f-1)∈ C^0,α_τ(M) can be extended to a functional that is defined on all ℛ^2,α_τ, given by ℋ(g)= ∫_M ( ^i^jg_ij-Δ(^ijg_ij)+2(n-1)(√(g)/√()-1) -f( _g+n(n-1) )√(g)/√() ) dV_, where √(g)/√() is defined by dV_g=√(g)/√()dV_. The expression (<ref>) is exactly what one obtains by writing (<ref>) as a single combined integral over M via the divergence theorem, so we need only demonstrate that the integral converges. To see this, first note that (1-f)(_g+n(n-1)) is O(r^-2τ) and therefore integrable. So it is equivalent to show that ℋ defined by (<ref>) is finite when f≡1. This is essentially the renormalised Einstein–Hilbert action of <cit.>, which was shown to be finite therein and we see this is well-defined for the same reason. First note that the volume form satisfies dV_g=dV_+1/2^ij(g_ij-_ij)dV_+O(r^-2τ)dV_, which follows from a Taylor expansion and using the fact that |g-|^2=O(r^-2τ). Then since D_[g-] =^i^jg_ij-Δ(^ijg_ij)-_^ij(g_ij-_ij) we arrive at ℋ(g)= ∫_M ( D_[g-]+_^ij(g_ij-_ij)+(n-1)^ij(g_ij-_ij) -(_g+n(n-1))) dV_+C where C denotes a collection of finite terms obtained by integrating O(r^-2τ). 
Then noting that we have _g=-n(n-1)+D_[g-]+O(r^-2τ), and since _=-(n-1) we see that ℋ is well-defined. For what follows, it will be useful to first note that a direct computation (see, for example, <cit.>) gives f D_g[h]-h· D_g^*[f] =∇^i( f( ∇^jh_ij-∇_i(g^jkh_jk) )-( h_ij∇^jf-g^jkh_jk∇_i f ) )dV_g, where D_g^* is the formal adjoint of D_g. For convenience we will write this as ∇^i(𝔅_i)dV_g, since this will result in boundary terms at infinity (and on the inner boundary in the following section). We now compute the variation of the Lagrange function. For all g∈ℛ^2,α_τ and (f-1)∈ C^0,α_τ D_gℋ(h)=(n-1)∫_M(_g(h)dV_g-h· D_g^*[f]) for all h∈ T_gℛ=C^2,α_τ(S^2T^*M). We consider ℋ as the limit of integrals over balls B_R of radius R so we can consider different terms separately. The variation of ∫_B_R f (g) in the direction h can be computed via (<ref>) as ∫_B_R fD_g[h]=∫_B_R h· D_g^*[f]+∫_S_R𝔅_i ν^idS_g, where S_R is the boundary of B_R. Since ∇ f and h are both O(r^-τ), then we can write this as ∫_B_R fD_g[h] =∫_B_R h· D_g^*[f]+∫_S_Rf( ∇^jh_ij-∇_i(g^jkh_jk) ) ν^idS_g+o(1) =∫_B_R h· D_g^*[f]+∫_S_R( ∇^jh_ij-∇_i(g^jkh_jk) ) ν^idS_+o(1), where we use the fact that (f-1) and dV_g-dV_ are O(r^-τ). Since the difference of connections tensor for ∇ and is also O(r^-τ), we can replace ∇ with and the integral over S_R becomes ∫_B_R^i^jh_ij-Δ(^ijh_ij)dV_, which exactly cancels the variation of the first two terms in (<ref>) coming from the surface integral at infinity. The only remaining term in (<ref>) to linearise is ∫_B_R2(n-1)(√(g)/√()-1) dV_, which in the direction h gives (n-1)_g(h)dV_g. That is, putting everything together and taking the limit R→∞ we arrive at (<ref>). We are now prepared to prove the main result of this section. The Lagrange multiplier argument for critical points of a mass functional like this is quite standard now, stemming from Bartnik's work <cit.>, however we closely follow the argument of <cit.> in particular, as we rely on their analysis. Suppose g∈𝒞_o∩ C^∞_loc, then the following three statements are equivalent: * For all h∈ T_g𝒞_o, we have D_g_VR[h]=0, * There exists f with (f-1)∈ C^2,α_τ satisfying D_g^*[f]=(n-1)g dV_g, * g is Einstein with _g=-(n-1)g. The equivalence between (II) and (III) is precisely Proposition <ref> (and the comment directly below it). So we need only consider the equivalence between (I) and (II). To this end, suppose first that (I) holds. This implies D_gℋ[h]=0 for all h∈ T_g𝒞_o, so the hypotheses of Theorem <ref> are satisfied. This then gives us λ∈(C^0,α_τ(Λ^3T^*M))^* satisfying D_gℋ[h]=λ(D_g[h]) for all h∈ T_g𝒞_o, which from Theorem <ref> gives λ(D_g[h])+∫_M h· D_h^*[f]=(n-1)∫_M _g(h) dV_g, for all h∈ C^∞_c. In particular, as a distribution λ is a weak solution to L[λ]=(n-1)g-L[f], where L is the de-densitised D_g^* operator, L[·]dV_g=D_g^*[·]. Note that the right-hand side of (<ref>) is given explicitly by (n-1)g-D_g^*(f)=(n-1)g-∇^2 f+Δ_g(f)g+f_g, where we have used the fact that g∈𝒞_o. Tracing (<ref>) results in the elliptic equation -(Δ_g-n)λ=(Δ_g-n)(f-1), which λ satisfies in the weak sense. However, Since we assume g∈ C^∞_loc, we have by elliptic regularity, λ∈ C^2,α_loc. That is λ is a strong solution to (<ref>). We next must show that λ has the desired decay at infinity. Since the right-hand side of (<ref>) is in C^0,α_τ, Theorem <ref> implies that λ either grows linearly on a cone at infinity or is in C^2,α_-d for some d∈(0,1]. 
Assume for the sake of contradiction that λ does grow linearly on M∖ K for some compact K, and then without loss of generality we take λ>0 on M∖ K. Now consider a family of test functions u_i∈ C^∞_c(M∖ K) converging in C^0,α_τ to a non-negative function u that is exactly equal to |x|^-τ outside of a compact set. Since λ is continuous we have λ(u)=lim_i→∞λ(u_i)=lim_i→∞∫_M λ u_i_g, where we continue to abuse notation slightly using λ to denote the linear functional and its representation by the L^2 inner product. By the monotone convergence theorem we have λ(u)=∫_M λ u _g, which blows up to infinity since τ<n and λ≥ C|x|, contradicting the fact that λ is bounded. That is, we have λ∈ C^2,α_-d for d∈ (0,1]. We next observe that the right-hand side of (<ref>) belongs to C^0,α_τ⊂ C^0,α_-d. Then since -(Δ_g+n):C^2,α_-d→ C^0,α_-d is an isomorphism, we have f+λ-1≡ 0. That is, by (<ref>), we have L[f+λ]=L[1]=(n-1)g, or g is a V-static metric with V-static potential identically equal to 1. That is we have (I)(II)(III), and it remains to prove (II)(I). To this end, assume (II) holds, that is a V-static potential f exists. Defining ℋ with this choice of f, Theorem <ref> implies D_gℋ[h]=0 for all h. From the expression (<ref>) for ℋ we immediately see that this implies D_g_VR[h]=0 for all h∈ T_g𝒞_o. The equivalence (I)(III) was already established in <cit.> (Corollary 4.4 therein) by different methods, and the inclusion of (II) was the straightforward part of the above proof. However, the proof presented here illuminates the connection between the volume-renormalised mass and V-static metrics. Furthermore, when we consider the case with boundary, where (III) is no longer equivalent to (II), the equivalence (I)(II) continues to hold (Theorem <ref> below). § BARTNIK BOUNDARY CONDITIONS – V-STATIC METRICS ARE VOLUME-RENORMALISED MASS MINIMISERS We now turn to the case where (M,g) has an interior boundary. If we do not impose boundary conditions then it seems unreasonable to expect any situation where the volume-renormalised mass can be minimised, so we must choose appropriate boundary conditions. It is natural to impose that the boundary itself is fixed, that is the induced metric on the boundary should be fixed. However, it is also a natural condition that the mean curvature of the boundary be fixed in addition to this, which essentially corresponds to insisting that the scalar curvature be constant right up to and including and distributional contributions on the boundary. We are therefore interested in the set 𝒞={ g∈ℛ^2,α_τ | (g)=0, g_Σ= γ, H(Σ)=H_0 } where γ is some fixed (n-1)-dimensional Riemannian metric on Σ, H(Σ) is the mean curvature of Σ and H_0 is a fixed function on Σ. We use the notation ℛ^2,α_τ to denote the space ℛ^2,α_τ defined by (<ref>) in the case where the underlying manifold has a boundary, to distinguish it from the preceding sections. In order to conclude that 𝒞 is a Banach manifold, which is a key requirement for the argument we use here, we need surjectivity of the map T defined by T(h) =(D_g(h),h_Σ, DH_g(h)) T:ℛ^2,α_τ→ C^0,α_τ (S^2T^*M)× C^2,α(S^2T^*Σ)× C^1,α(Σ). In the preceding section, surjectivity of the linearised constraint map for the decay rates we consider here were readily available. However, as mentioned in the introduction, in the case with boundary we do not have this immediately available. 
Fortunately, Huang and Jang <cit.> prove surjectivity of this map for the standard decay of τ∈(n/2,n), and after a careful examination of their proof, it is clear that this choice of τ is not essential but rather it is a choice made because they study the usual asymptotically hyperbolic mass and this rate is required for it to be well-defined. In fact, their proof goes through verbatim except in only one proposition where the decay rate is explicitly required. Furthermore, only a superficial modification is required to extend to the values of τ we work with. Since the full proof is rather involved, and the modification is straightforward, we do not repeat their entire argument here and instead explain the minor adaptation required to cover the case we require. Like many results of this flavour, the proof hinges on a coercivity estimate for D_g^*, which in this case is precisely where the range of τ is explicitly used. The proof is given in Sections 3 and 4 of <cit.>, and the following Proposition is the only part of it requiring modification. Let (M,g) be asymptotically hyperbolic at a rate of τ∈(n/2,n). Then there exist constants R_0, C>0 such that for all R>R_0 and any u∈ C^∞_c(M), we have uρ^1/2_H^2(M∖ B_R)≤ CD_g^*(u)ρ^1/2_H^2(M∖ B_R), where ρ=r^-2τ+n-δ for some δ<1 sufficiently close to 1, and C depends on n and δ. We briefly explain how this proposition is proved and how that proof can be modified to extend the range of permissible values of τ. The proof in <cit.> follows by a direct computation, after several simplifications are made taking note of the fact that several error terms are small. First, g is taken to be identical to the reference metric =1/r^2+kdr^2+r^2h since taking R_0 sufficiently large renders the difference negligible. For the same reasons, we set k=0 in the reference metric and introduce the operator Lu =D_^*(u)-1/n-1_hg(D_^*(u))+(+1/n-1_-_hg) =∇^2 u-u. Then since (+1/n-1_hg-_hg) goes to zero at infinity, we can prove the estimate for L instead of D_g. The final reduction is to note that ∇^2(u)ρ_L^2(M∖ B_R) can be controlled by the L^2(M∖ B_R) norms of L(u) and u, so one need only prove ∫_M∖ B_R( u^2+|∇ u|^2 )r^a dV_≤ C ∫_M∖ B_R|Lu|^2r^a dV_, where a=-2τ+n-δ. The proof goes on to consider two cases depending on whether the exponent a is greater than or less than -2, which is equivalent to whether τ is less than or greater than 1+1/2(n-δ). In particular the case we would like to extend is when a≥-2 (or τ≤ 1+1/2(n-δ), since δ will be chosen close to 1). The requirement that τ>n/2 in <cit.> implies also that they work with a<-δ, that is a∈[-2,-δ). In order to replace the conditions τ>n/2 with τ>n-1/2, we therefore must carry out the argument in the case a∈[-δ,1-δ). This can be achieved by following “Case 1” of the proof in <cit.>, which obtains the estimate beginning from the nonnegativity of the quantity ∫_S_R( β -∇_ν(u) )^2r^a dS, where β∈ℝ is to be chosen carefully later. This choice of β is precisely the difference made here. After some direct computations Huang and Jang obtain the inequality 0≤ ∫_S_R( β -∇_ν(u) )^2r^a dS ≤ ∫_M∖ B_R( β(2n-β(a-1+n))+β^2+1-aβ)u^2r^a dV +∫_M∖ B_R(2β-(a-1+n)+β^2+1-aβ)|∇ u|^2 r^a dV +∫_M∖ B_R(2β u (Lu)-2(Lu)(∇ u,ν))r^a dV, which relies on the fact that a∈[-2,2]. 
The idea is to then show that for some choice of β the quantities c_1(a,β) = β(2n-β(a-1+n))+β^2+1-aβ and c_2(a,β) =(2β-(a-1+n)+β^2+1-aβ are negative, to give 0≤ -ε∫_M∖ B_R( u^2+|∇ u|^2 )r^a dV+∫_M∖ B_R( β u(Lu)-(Lu)(∇ u,ν) ) r^a dV, which eventually gives the desired estimate via Cauchy–Schwarz. They key here is in choosing β in such a way that ensures c_1 and c_2 are negative for all values of a in the range under consideration – in this case a∈ [-δ,1-δ) where δ<1 is very close to 1. For this, the choice β=a/2 used in <cit.> (in Case 1 therein) does not work, however choosing β=a/2-1 will suffice. We can readily see that c_2(a,a/2-1)<0 since c_2(a,β)=(β-(a/2-1))^2-a^2/4-n, however we need to work a little to show c_1(a,a/2-1)<0. First note that it can be expressed as a third order polynomial in a, p(a)=c_1(a,a/2-1)=-1/4 a^3+(1-n/4)a^2+2(n-1)a+3(1-n). For δ close to 1 we can readily check that p(δ) and p(1-δ) are both negative, so we can simply check that p'(a)≠0 for any a∈(-1,1+ε) for some small ϵ>0. Differentiating (<ref>) with respect to a and solving for p'(a)=0 we find the solutions to be a_±=1/3 (4-n)±2/3√((2-n/2)^2+3(n-1)). For all n≥3 we find that a_+>0 and a_-<-1, from which we can conclude that for all a∈[- δ,1-δ), for δ<1 sufficiently close to 1, c_1(a,a/2-1)<0. In particular, we have that (<ref>) holds, which suffices to establish the main estimate as in the proof of Proposition 3.1 of <cit.>. That is, Proposition <ref> holds for the range of value of τ required here, and have the following. prop-HJ22[Cf. Proposition 3.1 of <cit.>] Let (M,g) be asymptotically hyperbolic at a rate of τ∈(n-1/2,n). Then there exist constants R_0, C>0 such that for all R>R_0 and any u∈ C^∞_c(M), we have uρ^1/2_H^2(M∖ B_R)≤ CD_g^*(u)ρ^1/2_H^2(M∖ B_R), where ρ=r^-2τ+n-δ for some δ<1 sufficiently close to 1, and C depends on n and δ. As mentioned above, the rest of the proof that T, defined by (<ref>), is surjective (Theorem 4.1 of <cit.>) goes through identically as in <cit.>. That is, following the arguments in sections 3 and 4 therein verbatim from this point onward, keeping the extended range of permissible values of τ, we arrive at the following Theorem. Let (M,g) be an asymptotically hyperbolic manifold with compact inner boundary, asymptotic to (M,) at a rate of τ∈(n-1/2,n), then the map T defined by (<ref>), is surjective. We now carry out the Lagrange multiplier argument similar to the preceding section. For this, we again would like to use the function ℋ defined by (<ref>). However, in this case when we regularise the functional via the divergence theorem, we obtain some additional boundary terms on Σ, which motivates the choice of boundary conditions we use. The functional ℋ defined by (<ref>) with (f-1)∈ C^0,α_τ(M) can be extended to a functional that is defined on all ℛ^2,α_τ. Furthermore, its linearisation is given by D_gℋ(h)= ∫_M ( (n-1) _g(h)dV_g-h· D_g^*[f]) +∫_ Mf( ∇^jh_ij-∇_i(_g(h)) )-( h_ij∇^jf-_g(h)∇_if ) ν^idS_g. for all h∈ T_gℛ=C^2,α_τ(S^2T^*M), where ν is the unit normal pointed towards infinity. First note that the inclusion of a boundary does not affect the well-definedness of ℋ. That is, the proof of Theorem <ref> applies identically except for the addition of some finite terms on the inner boundary which we do not both to explicitly write out. In order to establish (<ref>), follow Theorem <ref> and again consider the difference f D_g[h]-h· D_g^*[f]=∇^i(𝔅_i)dV_g, given by (<ref>). 
The only term here that differs from the proof of Theorem <ref> is that after the integration by parts, we gain the additional term ∫_ M𝔅_iν^i dV_g, which is exactly the boundary term appearing in (<ref>). We now aim to apply the Lagrange multipliers argument using the boundary-augmented constraint map T (g)=( (g),g_| M, H), where H is the mean curvature of M with respect to ν. Note that D_g𝔗 is a densitised version of the T operator defined by (<ref>), and therefore is surjective for g∈𝒞 by Theorem <ref>. Suppose g∈𝒞 is C^∞_loc, then the following two statements are equivalent: * For all h∈ T_g𝒞, we have D_g_VR[h]=0, * There exists V satisfying (V-1)∈ C^2,α_τ and D_g^*[V]=(n-1)g dV_g. Just as in the case with no boundary, we note that D_g_VR[h]=D_gℋ[h] for all h∈ T_g𝒞=(D_g𝔗). If we first assume (I) holds then the hypotheses of Theorem <ref> holds and we therefore have a Lagrange multiplier (λ, α,β)∈( C^0,α_τ(Λ^3T^*M))^*×( C^2,α(S^2T^* M) )^*×( C^1,α( M) )^* that satisfies D_gℋ[h]=λ(D_g[h])+α(h_| M)+β(D_gH[h]). Taking h∈ C^∞_c(Int(M)), we have D_gℋ[h]=λ(D_g[h])=∫_M ( (n-1) _g(h)dV_g-h· D_g^*[f]), just as in the proof of Theorem <ref>. That is, as a distribution λ is a weak solution to D_g^*[λ]=(n-1)g dV_g-D_g^*[f], which as before is implies that λ∈ C^2,α_loc by elliptic regularity, as it satisfies (<ref>). Furthermore note that λ is of C^2,α regularity up to the boundary from (<ref>) since _g is C^0,α up the boundary. For the same reason as the proof of Theorem <ref>, we get also see that λ→0 at infinity. That is, λ∈ C^2,α_-d(M) for some d∈(0,1]. Now we can write (<ref>) as -(Δ_g-n)λ=F, for some F∈ C^2,α_τ(M)⊂ C^2,α_-d(M). Since -(Δ_g+n):C^2,α_-s→ C^0,α_-s equipped with Dirichlet boundary conditions is an isomorphism for all s∈[d,τ], we have λ∈ C^2,α_τ. That is, V=f+λ is the required V-static potential satisfying (II). The reverse implication, (II) (I), follows by again defining ℋ with respect to f=V, the given V-static potential. Then from Theorem <ref>, we have D_gℋ[h]=∫_ Mf( ∇^jh_ij-∇_i(_g(h)) )-( h_ij∇^jf-_g(h)∇_if ) ν^idS_g for all h∈ T_gℛ. To show D_gℋ[h]=0, we first recall the linearisation of the mean curvature. In its most convenient form for us, it can be expressed as (see, for example, Lemma 5.1 of <cit.> or Proposition 3.1 of <cit.>) -2D_gH[h]=(∇^j(h_ij)-∇_i(_g(h)))ν^i+K_ABh^AB+∇^A(h_iAν^i), where A,B denotes indices for M and K is the second fundamental form of M. We then note that the remaining terms in (<ref>) can be expressed as ( h_ij∇^jf-_g(h)∇_if ) ν^i=ν^ih_iA∇^Af-g^ABh_AB∇_i(f)ν^i so after an integration by parts we have D_gℋ[h]=∫_ M-2fD_gH[h]-fK_ABh^AB+g^ABh_AB∇_i(f)ν^idS_g. That is, for all h∈ T_g𝒞 we have D_g_VR[h]=D_gℋ[h]=0, completing the proof. We conclude with some remarks. For metrics on M=ℝ^3∖B_1(0) where B_1(0) is the closed unit ball, asymptotic to the standard hyperbolic metric at a rate τ∈(2,3), Brendle and Chodosh proved a (renormalised-) volume comparison theorem <cit.>. Namely, letting (M,γ_m) be the AdS–Schwarzschild metric of mass m>0, they proved that among all metrics g with the same asymptotics and boundary metric and mean curvature matching the minimal surface boundary in the AdS–Schwarzschild manifold, the renormalised volume of (M,g) is strictly larger than that of (M,γ_m) unless g=γ_m. 
Under these decay conditions, the renormalised volume is exactly the volume-renormalised mass (the ADM boundary integral vanishes), so this result can be understood as a Riemannian Penrose inequality for the volume-renormalised mass under stronger decay assumptions. That is, the AdS–Schwarzschild manifold minimises the volume-renormalised mass among all metrics with the same asymptotic decay rate, and minimal surface boundary of the same area. Note that AdS–Schwarzschild metrics are V-static, for if they were not then by the same argument as Corollary <ref> we would be able to perform a local perturbation that decreases the volume-renormalised mass, which would contradict the result of Brendle and Chodosh. Combined with Theorem <ref>, this suggests that the volume-renormalised mass should satisfy a Riemannian Penrose inequality. Throughout this article we have tried to emphasise the analogy with the ADM mass and static asymptotically flat metrics. It seems likely that the volume-renormalised mass and V-static potentials on asymptotically hyperbolic manifolds share many other analogous properties, which would be interesting to pursue further. § ACKNOWLEDGEMENTS This work is partially supported by Stiftelsen G.S. Magnusons fond grant no. MG2023-0060.
http://arxiv.org/abs/2406.07791v1
20240612011228
Judging the Judges: A Systematic Investigation of Position Bias in Pairwise Comparative Assessments by LLMs
[ "Lin Shi", "Weicheng Ma", "Soroush Vosoughi" ]
cs.CL
[ "cs.CL", "cs.AI" ]
§ ABSTRACT LLM-as-a-Judge offers a promising alternative to human judges across various tasks, yet inherent biases, particularly position bias—a systematic preference for answers based on their position in the prompt—compromise its effectiveness. Our study investigates this issue by developing a framework to systematically study and quantify position bias using metrics such as repetitional consistency, positional consistency, and positional fairness. We conduct experiments with 9 judge models across 22 tasks from the MTBench and DevBench benchmarks and nearly 40 answer-generating models, generating approximately 80,000 evaluation instances. This comprehensive assessment reveals significant variations in bias across judges and tasks. Although GPT-4 often excels in positional consistency and fairness, some more cost-effective models perform comparably or even better in specific tasks, highlighting essential trade-offs between consistency, fairness, and cost. Our results also demonstrate high consistency of judgment across repetitions, confirming that position bias is not due to random variations. This research significantly contributes to the field by introducing new concepts for understanding position bias and providing a multi-dimensional framework for evaluation. These insights guide the selection of optimal judge models, enhance benchmark design, and lay the foundation for future research into effective debiasing strategies, ultimately enhancing the reliability of LLM evaluators. § INTRODUCTION In recent years, the rapid advancement of generative large language models (LLMs) has revolutionized their application across diverse fields such as automated code generation, conversational agents, and data analysis. Traditionally, evaluations of these models have predominantly relied on human judgment due to its comprehensive nature and ability to accurately assess the nuanced outputs typical of LLMs <cit.>. However, human-based evaluations struggle with scalability and reproducibility, presenting significant challenges when assessing LLMs on subjective or open-ended tasks <cit.>. To address these limitations, the LLM-as-a-Judge methodology has emerged as a promising alternative, aiming to reduce the reliance on human evaluators by automating the evaluation process across various tasks, such as open-ended story generation, adversarial attacks, summarization, machine translation, and instruction following <cit.>. While these automated judges generally show a high level of agreement with human judgments <cit.>, they are not devoid of challenges. Notably, these models can exhibit biases, such as position bias, which potentially undermine the fairness and accuracy of their judgments <cit.>. Position bias refers to the tendency of judgments to disproportionately favor responses based on their placement within an input list rather than their intrinsic merit. This bias manifests across various types of LLMs—open-source, proprietary, or fine-tuned <cit.>. Despite efforts involving various mitigation strategies like bootstrapping <cit.>, split-and-merge <cit.>, and multi-agent discussions <cit.>, effectively addressing position bias remains a complex and often costly challenge.
Our review of the literature suggests that existing methods for mitigating position bias might be ineffective due to a limited understanding of how these biases emerge and their impact on evaluation outcomes. To address this gap, we introduce a comprehensive framework designed specifically for dissecting and understanding position bias within the context of pairwise comparative assessment. This method involves LLM judges evaluating pairs of responses, selecting the more accurate or suitable one while providing structured reasoning as part of a Chain-of-Thought (CoT) process <cit.>. Position bias is identifiable when judgments consistently favor either the first or the second response in both the original and swapped response sequences, indicated by patterns {A, A} or {B, B}, where A and B denote preferences for the first and second responses, respectively. Our framework was rigorously applied to assess the extent of position bias across nine LLM judges, including models from the GPT-3.5, GPT-4-Turbo, GPT-4, Claude-3, and Gemini-Pro series, using two distinct benchmarks: MTBench <cit.> and DevBench <cit.>. Additionally, we operationalized and defined several terms related to position bias, such as repetition bias, positional fairness, positional preference score, primacy / recency-preference, answer quality gap, and overall win rate, to clarify and standardize terminology for future research. Insights on Position Bias Our study revealed significant variability in position bias across different judges and tasks, influenced by the LLMs' intrinsic properties like context windows and familial traits, as well as the specific nature of the tasks they were judging. We observed that familial properties played a crucial role in mutual judge agreements, positional consistency, fairness, and cost-effectiveness. For instance, models from the GPT series outperformed others in terms of positional consistency and fairness, while the Claude-3 models, although consistent, displayed a tendency to prefer more recent responses. Agreement among the selected nine LLM judges was substantial, with more than 80% consensus in about two-thirds of the evaluated instances. However, achieving consensus remained challenging in about a quarter of the cases, often due to minimal differences in the quality of responses. The study also highlighted that positional consistency among LLM judges was directly proportional to the answer quality gap, suggesting that evaluations where responses were closely matched in quality were particularly challenging to adjudicate consistently. This finding underlines the complexity of applying LLMs as judges when responses do not exhibit clear distinctions in quality. Furthermore, our analysis demonstrated that repetition bias is minimal, indicating that position bias is influenced more by the structural and inherent traits of the models rather than by randomness in judgments. This consistency across repetitions affirms the reliability of the judgments made by these models, although a more consistent judge does not always equate to a fairer one. For example, while gpt-4-0613 showed remarkable consistency, it also exhibited stronger positional preferences compared to other models like GPT-3.5, which were fairer but less consistent. Interestingly, our investigation into the length of responses and biases associated with verbosity and self-enhancement revealed that these biases are essentially manifestations of position bias driven by differences in answer quality. 
This suggests that biases related to the length of content or the model's affinity for its outputs are secondary to how the quality of responses influences judge preferences. Practical Implications The insights from our research offer several practical implications: (1) Systematic Framework: Our systematic framework for interpreting positional consistency and preference in LLM judges enhances the reliability and scalability of evaluations, contributing to more standardized assessment methodologies. (2) Judge Model Recommendations: We provide detailed recommendations for selecting judge models that balance consistency, fairness, and cost-effectiveness. (3) Benchmark Evaluation: Insights from this study inform the design and methodology of future benchmarks, improving the interpretability and scalability of subjective evaluations. (4) Foundational Research: By elucidating position bias across various models, tasks, and judging types, we pave the way for more effective debiasing strategies. This comprehensive framework for understanding position bias in LLMs-as-judges not only enhances the fairness and efficacy of model evaluations but also supports the development of more robust, equitable, and efficient LLM applications. In the subsequent sections of the paper, we provide detailed descriptions of our framework, definitions, experiments, and results to support the findings described above. The anonymized code & data for this paper can be found here [https://github.com/Slimshilin/Position-Bias-Analyzer]. § METHODS & DEFINITIONS This section outlines the methods and definitions employed in our study, including the specific metrics and factors used to assess position bias. Our approach involves a pairwise comparative analysis conducted by LLM judges, as depicted in Figure <ref>. These judges are tasked with selecting the superior solution from each pair presented to them for evaluation. To measure position bias effectively, we analyze how the judge models respond when the order of solutions is reversed. §.§ Position Bias In our study, position bias is investigated from three perspectives: repetitional consistency, positional consistency, and positional fairness, each addressed in dedicated subsections below. §.§.§ Repetitional Consistency Repetitional consistency evaluates the reliability of LLM judges when presented with identical queries multiple times. This metric is essential for assessing whether the LLM judges' choices reflect a consistent evaluative pattern or are merely random variations. We measure this by calculating the percentage of majority choices across multiple trials for each query, aggregated from all queries within each dataset. This metric is formalized as follows: RC = 1/n∑_j=1^n max(|c_1^j|, |c_2^j|)/t_j, where |c_1^j| and |c_2^j| denote the counts of times solution #1 and solution #2 are selected by the judge for the j-th query, respectively, t_j represents the total trials for that query, and n is the total number of queries. The RC value ranges from a small positive value near 0 (indicating completely random decisions) to 1.0 (indicating perfect consistency). §.§.§ Positional Consistency Positional consistency quantifies how frequently a judge model selects the same response before and after the order of options is reversed.
This is formalized as follows: PC = 1/n∑_j=1^n 1[(c_JO_j, c_JP_j) ∈ V], where JO and JP represent the original and reversed prompts, respectively, n is the total number of paired prompts, and V is the set of choice pairs considered valid for consistent judgment. In this context, the “Option-2” mode refers to a scenario where only two choices are valid regardless of their positions—specifically, where a judge can either consistently choose the first option over the second, or vice versa, before and after the switch. Thus, for DevBench operating under “Option-2” mode, V consists of pairs {(A, B), (B, A)}. In contrast, “Option-3” mode, used by MTBench, includes an additional valid scenario where the same option (i.e., a tie) can be chosen both times, denoted by (C, C), making V {(A, B), (B, A), (C, C)}. This allows for a more nuanced assessment of the judge's consistency, considering the potential for a neutral or unchanged choice across prompts. §.§.§ Positional Fairness Positional fairness is essential for ensuring that LLM judges do not exhibit preference for solutions based on their position within the presented options. Previous studies, such as those by <cit.>, have measured positional preference by counting instances where the judge consistently favors solutions appearing first (primacy count, or pc) or last (recency count, or rc), normalizing these counts by the total number of query pairs to measure positional preference. However, this method is limited by dataset size, complicating comparisons across different datasets. Alternatively, studies like <cit.> utilize balanced datasets where 50% of instances position answers first and the remaining 50% last. In this setup, positional fairness is evaluated based on the percentage of primacy and recency preferences when the responses deviate from the ground truth, defined as inconsistent primacy rates (ipr) or inconsistent recency rates (irr). This approach can disproportionately penalize judges when their decisions generally align with the ground truth but differ in specific cases. To overcome these limitations, we introduce a positional preference score to measure positional fairness (PF), calculated initially as a raw score: PF_raw = (rc × irr) - (pc × ipr). We then normalize this raw score using the formula: PF = (PF_raw - S^-_min)/(S^+_max - S^-_min) × 2 - 1, where S^-_min and S^+_max are the minimum and maximum achievable PF_raw scores for each judge on each task, respectively. This normalization technique, using min-max scaling, ensures comparability across datasets by accounting for the range of achievable scores and centering the scale around zero, where a score of 0 denotes complete positional fairness. The PF score is interpreted as follows: PF = 1 when PC = 0 and the judgments are entirely recency-preferred; PF ∈ (0, 1) indicates a recency preference; PF = 0 indicates positional fairness; PF ∈ (-1, 0) indicates a primacy preference; and PF = -1 when PC = 0 and the judgments are entirely primacy-preferred. § FACTORS AFFECTING POSITION BIAS To investigate the factors influencing position bias in LLM judges, we have categorized these factors into three groups: Judge-level, Model-level, and Task-level. Each group contains specific factors that we hypothesize may impact position bias, which we explore through a series of experiments. Table <ref> lists the seven factors analyzed in this study. Our experimental framework is designed for reproducibility, enabling the integration and assessment of additional influencing factors.
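To make these definitions concrete, the following Python sketch (hypothetical data structures and helper names, not the authors' released code) computes RC, PC, and PF for a list of judgments; the valid set V and the primacy/recency groupings follow the Option-2/Option-3 conventions described above.

from collections import Counter
from typing import List, Tuple

# A judgment pair holds the judge's choice on the original prompt and on the
# prompt with the two answers swapped, e.g. ("A", "B") or ("A", "A").
JudgmentPair = Tuple[str, str]

def repetitional_consistency(trials_per_query: List[List[str]]) -> float:
    """RC: average fraction of the majority choice over repeated trials of each query."""
    return sum(max(Counter(t).values()) / len(t) for t in trials_per_query) / len(trials_per_query)

def positional_consistency(pairs: List[JudgmentPair], allow_tie: bool = True) -> float:
    """PC: fraction of judgment pairs falling in the valid set V."""
    valid = {("A", "B"), ("B", "A")} | ({("C", "C")} if allow_tie else set())
    return sum(p in valid for p in pairs) / len(pairs)

def positional_preference_score(pairs: List[JudgmentPair], allow_tie: bool = True) -> float:
    """PF: min-max scaled weighted difference between recency and primacy preferences."""
    valid = {("A", "B"), ("B", "A")} | ({("C", "C")} if allow_tie else set())
    inconsistent = [p for p in pairs if p not in valid]
    if not inconsistent:
        return 0.0  # fully consistent judgments are positionally fair by definition
    primacy = {("A", "A"), ("A", "C"), ("C", "A")}
    pc_count = sum(p in primacy for p in inconsistent)
    rc_count = len(inconsistent) - pc_count
    ipr = pc_count / len(inconsistent)
    irr = rc_count / len(inconsistent)
    raw = rc_count * irr - pc_count * ipr
    # The extreme achievable raw scores occur when every pair is inconsistent and
    # biased toward a single side: raw = -n (all primacy) or +n (all recency).
    n = len(pairs)
    return (raw - (-n)) / (n - (-n)) * 2 - 1

For example, a set of pairs consisting only of ("A", "A") yields PC = 0 and PF = -1, i.e., an entirely primacy-preferred judge, while a set containing only ("A", "B") and ("B", "A") pairs yields PC = 1 and PF = 0.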
Among the influencing factors we identified, we opted for “familial property” as a key variable instead of model sizes and training specifics, which are often proprietary and not publicly accessible for the models involved in our experiments. The familial categories of the models used in our studies are: (1) GPT-4-Turbo, (2) GPT-4, (3) GPT-3.5, (4) Claude-3, (5) Gemini-Pro. An exhaustive list of models can be found in Appendix <ref>. The “answer quality gap” is another intuitive factor included in our study. To measure this, for each query instance with two solutions, we create a swapped instance by reversing the order of the solutions. We then employ multiple judge LLMs to observe the evaluation instances where one solution is consistently chosen as superior in both the original and swapped query instances (termed “consistent wins”), the instances where both solutions are deemed equal (termed “consistent ties”), and the instances where judges select different solutions across the swapped instances. We denote these counts as the number of consistent wins (C_w), the number of consistent ties (C_t), and the number of inconsistent judgment pairs (I), respectively. Following the methodology introduced by <cit.>, we define the overall win rate (owr) for each solution in a pair of instances as: owr = (C_w + 1/2(C_t + I))/n, where n is the total number of instances. We assess the quality gap (qg) between the two solutions in each instance using: qg = |owr - 0.5|, where owr is the overall win rate for any of the candidate solutions. This approach ensures that the sum of owr for both solutions equals 1. We prefer the overall win rate over the consistent win rate (calculated by C_w/n) as it incorporates all data points and captures the “comparable quality” scenario, where similar quality solutions might lead to positionally biased judgments—a scenario the consistent win rate might overlook. Appendix <ref> provides further details on the comparative analysis of both win rate metrics and their impact on position bias. § EXPERIMENTS Our experiments leverage datasets from MTBench <cit.> and DevBench <cit.>, both selected for their demonstrated high agreement rates between human and LLM evaluators. Details of the tasks and the datasets can be found in Appendix <ref>. Prompt Design Our prompts are designed within the DevBench pairwise comparative framework for its adaptability and ease of integration, which complements the methodology used in MTBench. Each query prompt is constructed to include a task question, two competing solutions from different models, and a system prompt to guide evaluation. Optionally, a reference answer and an evaluation metric may be included to enhance the depth of assessment. For MTBench tasks, vicuna-13b-v1.3 is chosen to be the reference/baseline model for comparison based on its median performance on the leaderboard, facilitating a range of quality gaps between the responses. For DevBench, human-annotated software design files serve as the reference 'model'. Option Modes of Judge Models In line with the practices of MTBench and DevBench, our study incorporates various option modes for the LLM-as-a-Judge, which are determined by the nopt parameter indicating the number of options available for selection. Option-2 mode restricts judges to choosing between two responses, labeled `A' for the first and `B' for the second. Option-3 mode includes an additional option, `C', allowing judges to indicate a tie if neither option is preferable.
Option-4 mode further expands the choices with `C' representing a “both good tie” and `D' as a “both bad tie.” In our experiments, we employ Option-3 mode for MTBench and Option-2 mode for DevBench, aligning with their respective original frameworks. These option modes are explicitly specified in the system prompts to clearly direct the decision-making process of the judge LLMs. The detailed prompt settings can be found in Appendix <ref>. §.§ Results Repetitional Consistency To investigate repetitional consistency, we randomly selected three questions along with their corresponding solutions from four different answer-generating models (see Appendix <ref> for details). This approach resulted in an inspection dataset comprising 24 evaluation instances for each task, which included prompts with their order swapped. We evaluated nine judges on this sampled dataset, conducting three tests per judge, and computed the repetitional consistency (RC) scores for each, as detailed in Table <ref>. The consistently high RC scores across tests indicate minimal repetition bias, suggesting that observed position biases are not merely due to randomness in the judges' responses. This finding is vital as it substantiates the reliability of the judges' decisions, demonstrating that they are not random occurrences. While all nine models displayed significant robustness against random variations, the claude-3-haiku-20240307 and gemini-pro-1.0 models exhibited a higher sensitivity to these variations. This heightened sensitivity may affect their reliability and suitability for deployment in LLM-as-a-Judge scenarios. Positional Consistency We present the positional consistency (PC) scores along with their standard deviations for each LLM judge in Table <ref>. This table also notes the error rates, reflecting how often judges failed to generate responses in the expected format (i.e., correctly selecting the superior solution from a pair). Despite these error rates being low across all judges, which suggests a minimal impact on our analysis of position bias, our study does not delve deeply into the reliability of each judge's response quality. Instead, we treat all judges as generally dependable for this analysis. The results underscore that models such as gpt-4-0613, gpt-4-1106-preview, both GPT-3.5 models, and the claude-3-opus-20240229 demonstrate high positional consistency across both MTBench and DevBench, affirming their robustness irrespective of the order in which solutions are presented. Conversely, the claude-3-haiku-20240307 model exhibits poor performance in these tests, achieving the lowest PC scores among all models evaluated. It is noteworthy that the standard deviations of PC scores are consistently high (always above 0.140) across all models. This indicates a persistent presence of position bias, affecting even the top-performing models such as GPT-4 and GPT-4-Turbo. Positional Fairness Examining positional fairness through Table <ref>, we observe that GPT models exhibit significantly greater positional fairness on MTBench, as reflected by their notably lower absolute PF scores compared to other models. On DevBench, gpt-3.5-turbo-1106 and gemini-pro-1.0 particularly stand out for their positional fairness. Generally, the positional preference scores are higher on DevBench than on MTBench, potentially due to the challenges posed by longer-prompt queries to the judge models.
In terms of preference direction, the Claude models consistently display a recency preference on both benchmarks, a trend also seen in the two GPT-4-Turbo models. The positional preferences of other models vary between the benchmarks. When analyzing both positional consistency and fairness, gpt-3.5-turbo-1106 emerges as the most effective LLM judge, exhibiting the least position bias among the nine models evaluated. Although gpt-4-0613 is the most consistent model when handling solution-swapped prompts, its pronounced recency preference on DevBench makes it less favorable compared to gpt-3.5-turbo-1106. Conversely, claude-3-haiku-20240307, with the lowest PC scores and high absolute PF scores, is identified as the least fair judge. The performance of other models spans these extremes, with their levels of positional consistency and preferences clearly delineated within our assessment framework. The analysis of both PC and PF together reveals a pattern where PF scores generally decrease as PC scores increase, forming a right-arrow-shaped pattern as illustrated in Figure <ref>. This pattern, consistent across both benchmarks and for each judge model as detailed in Appendix <ref>, underscores the consistency between our measures of PC and PF. Notably, models like gpt-3.5-turbo-0125 manage to maintain higher levels of positional fairness (i.e., lower absolute PF scores) even when consistency is lower, indicating their robust performance across different measures. This demonstrates their balanced capabilities in maintaining fairness amidst varying levels of consistency. Cross-Judge Agreement We also explore the agreement between LLM judges to gain deeper insights into their behavioral patterns and validate the fairness and prompt-independence of our position bias measurement. For each pair of judges, we calculate the number of instances in both benchmarks where their choices align, normalizing this count by the total number of instances. To address the potential bias introduced by ties, where judges rate the candidate solutions as equally valid, we compute an additional agreement score excluding such instances. These scores are represented as JA for overall agreement and JA_woC for agreement without considering ties, respectively. Figure <ref> presents these scores for each pair of LLM judges in a heatmap format. The heatmaps reveal clear “familial patterns” in the choices of these LLM judges. For instance, the two GPT-4-Turbo models display the highest agreement scores, achieving 79.49% with ties included and 88.67% without. GPT-4 models also show high agreement, with scores around 75% (JA) or 85% (JA_woC). Similarly, the GPT-3.5 models form a cohesive group, with a JA of 80.80% and a JA_woC of 82.47%, showing lower agreements with other judges. The Claude-3 models and Gemini-Pro are recognized as separate families, with claude-3-opus often aligning more closely with GPT-4 or GPT-4-Turbo models. These patterns suggest that familial similarities, possibly stemming from analogous model sizes, training data, and strategies, influence the positional preferences of these judges. Identifying such groupings provides valuable insights, as comparisons between judges from different groups, both adept at assessing LLM-generated content, can reveal distinct position biases and enrich our understanding of this phenomenon. An additional factor influencing these outcomes is the differing capabilities of models, which can lead to disagreements. 
However, majority voting has proven to be a practical method to refine judgments and enhance decision quality, especially as many LLM judges exhibit near-human performance in evaluating LLM-generated content <cit.>. An analysis of decisions agreed upon by multiple models reveals that all nine judges agree on 23.4% of evaluation instances across MTBench and DevBench, while 94.6% of instances achieve majority agreement (from at least five judges) (refer to Figure <ref> in the appendix). These findings suggest that LLM judges are generally capable of reliably assessing generation quality, with only 5.4% of instances posing significant challenges for LLMs. Employing majority voting with LLMs from different families can effectively mitigate position bias, assuming the majority count exceeds the size of the largest family of judges. The effectiveness of this approach is further validated by separate analyses for MTBench and DevBench. § DISCUSSION As discussed in Section <ref>, the severity and direction of position bias vary among LLM judges, with our proposed metrics effectively highlighting these variations. However, the relatively high standard deviation in the PC scores and notable disparities in the PF scores across benchmarks suggest that other factors significantly influence these metrics. These factors may influence the choice of judge models under different circumstances. We explore how position bias in LLM judges is shaped by the nature of the target tasks, differences between the quality of the candidate solutions, and the lengths of the problem descriptions, outputs from the answer-generating models, and the prompts. Position Bias of LLM Judges is Task-Dependent We conduct separate evaluations of positional consistency (PC) and positional fairness (PF) for each LLM judge across all tasks within each benchmark and compare these metrics across judges and tasks. To facilitate clear comparisons and emphasize differences, we establish baselines using the PC scores of gpt-4-0613 and a PF score of 0. We then visualize the percentage deviations from these baselines for each judge's performance in Figure <ref>. In this framework, the baseline performance is consistently set at 0; scores above this baseline yield positive values, while scores below it result in negative values after scaling. Additionally, we perform a Student's t-test for each experiment set related to a judge model and a task to determine statistical significance. Statistically significant scores are marked with an asterisk. The baseline comparison figures from MTBench and DevBench reveal significant variations in positional consistency and preference among different LLM judges and tasks. gpt-4-0613 consistently shows high positional consistency, outperforming other models in most contexts, though gpt-4-1106-preview and gpt-3.5-turbo-0125 demonstrate superior performance in specific tasks like architecture design and coding. Despite this, gpt-4-0613 generally maintains a more balanced preference across most tasks, with only a few showing a notable recency preference. In contrast, Claude-3 models and Gemini-Pro tend to exhibit a strong recency preference. These observations suggest that while gpt-4-0613 can serve as a benchmark for positional consistency, other models may excel in specific areas. This variability underscores the importance of carefully selecting the appropriate judge model for specific tasks, weighing the trade-offs between positional consistency, fairness, and practical applicability. 
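As an illustration of this baseline-and-significance procedure, the sketch below (hypothetical helper names; a two-sample Student's t-test is assumed for the PC comparison, while a one-sample test against 0 would play the same role for PF) computes the percentage deviation of a judge's per-unit PC scores from the gpt-4-0613 baseline and flags statistically significant differences.

from statistics import mean
from scipy import stats

def pc_deviation_from_baseline(judge_pc: list, baseline_pc: list) -> float:
    """Percentage deviation of a judge's per-(model, task) PC scores from the baseline judge."""
    base = mean(baseline_pc)
    return (mean(judge_pc) - base) / base * 100.0

def is_significant(judge_pc: list, baseline_pc: list, alpha: float = 0.05) -> bool:
    """Student's t-test on the two sets of per-unit scores; True corresponds to an asterisk."""
    _, p_value = stats.ttest_ind(judge_pc, baseline_pc)
    return p_value < alpha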
Our evaluation framework for position bias addresses these needs effectively. It is adaptable to both specific tasks and broader settings, and is sensitive to subtle differences in model behavior, ensuring its broad applicability in various evaluative scenarios. Additional analyses using Maximum Likelihood Estimation (MLE), detailed in Appendix <ref>, further elucidate the impact of target tasks on the position bias of LLM judges. Position Bias of LLM Judges is Influenced by Answer Quality Gap We explore the impact of the answer quality gap on the position bias of LLM judges by examining the relationships between positional consistency (PC), positional fairness (PF), and the quality gap (qg) depicted in Figure <ref>. The PC curves display a parabolic shape centered around an overall win rate of 0.5 (i.e., qg ≈ 0), suggesting that the positional consistency of judge models is closely tied to the magnitude of quality differences between candidate answers. This observation confirms our hypothesis that more equivocal instances, where the overall win rate approaches 0.5 and the quality gap vanishes, tend to confound LLM judges, potentially leading to increased position bias. On the other hand, the PF-qg curves indicate that more positionally fair judges, such as gpt-4-0613 and both GPT-3.5 models, show negligible bias towards candidate answers based on their positions, as evidenced by their nearly flat curves. In contrast, other models show a tendency to exhibit increased positional preference as the absolute quality gaps narrow. These insights suggest that the selection of LLM judges can be strategically tailored to specific situations to optimize cost-efficiency; for example, deploying more robust, less biased judges for more challenging instances and choosing more cost-effective models when the quality gaps are more pronounced. This analysis also has implications for evaluating benchmark difficulty, identifying benchmarks that more frequently induce confusion and position bias in LLM judges as inherently more challenging. Such insights are crucial for designing and refining LLM evaluation benchmarks, ensuring they accurately measure model capabilities under varied conditions. Input & Output Lengths do not Significantly Affect Position Bias We investigate the impact of three different lengths on the position bias of LLM judges: the length of the query (task input length), the responses generated by answer-generating models (task output length), and the length of the entire prompt (prompt length). Our analysis indicates that significant impacts on the judges' position bias primarily occur in extreme cases where the prompt lengths substantially exceed the judges' input length limits. In such cases, one exceedingly long and one short solution are typically presented to the judges, who tend to favor the longer solution. This leads to an apparent position bias as measured by our evaluation framework. However, in other scenarios, these three lengths do not significantly affect the judges' position bias, leading us to conclude that there is no consistent pattern linking position bias with lengths. Further details on these analyses are available in Appendix <ref>. Conclusion This paper introduces a framework designed to evaluate position bias in LLM judges, focusing on repetitional consistency, positional consistency, and positional fairness. Utilizing this framework, we conducted a comprehensive analysis of position bias across nine LLMs tasked with evaluating the generation quality of other models. Our analysis aimed to identify the factors influencing position bias.
We discovered that the target task, the quality gap between candidate solutions, and the context window size of the judge model significantly affect its effectiveness. However, other influencing factors specific to each judge model remain unclear to users, necessitating that the selection of judge models often be conducted empirically to ensure accurate evaluations. Our framework provides a systematic method to assess the capabilities of judge models, aiding in the optimization of performance and cost. § APPENDIX §.§ Findings & Contributions Key Findings Here are some main take-away messages from our research: * Overall: Position bias varies significantly across different judges and tasks. The intrinsic properties of large language model (LLM) judges, such as context window and familial properties, along with the nature of the judging task, significantly affect positional consistency and fairness. * Familial Properties: Familial property is prominent in LLM-LLM mutual judge agreements, positional consistency, positional fairness, and cost-effectiveness. The GPT series has superior performance in terms of both positional consistency and positional fairness. Claude-3 family members, though sometimes having ideal positional consistency, are all recency-preferred models. * Judge Agreement: The chosen 9 LLM judges can achieve more than 80% consensus on around two-thirds of the evaluation instances across both benchmarks, whereas a quarter remains challenging to judge. The mutual LLM agreement demonstrates familial properties, grouping GPT-4/GPT-4-Turbos, GPT-3.5, Claude-3 (with haiku as a "distant relative"), Gemini, and SOTA families. * Answer Quality Gap: The positional consistency of LLM judges is directly proportional to the answer quality gap. This may explain why the proportion of evaluation instances that LLMs find challenging to reach consensus on aligns with the positional consistency of most LLM judges - these instances are the ones with small answer quality gaps, which then makes them difficult to judge. * Repetitional Consistency: Repetition bias is negligible, as evidenced by the high repetitional consistency of most judge models across the two benchmarks. This indicates that position bias mainly stems from other factors rather than the randomness of judgments. * Consistency vs. Preference: Generally, the judgments become fairer as the positional consistency increases. However, a positionally more consistent judge may not necessarily be positionally fairer than other models. For example, gpt-4-0613 is the most positionally consistent judge on both benchmarks but has larger positional preference scores than GPT-3.5s (the fairest judges). Similarly, claude-3-opus, despite matching GPT-3.5s in consistency, is highly recency-preferred. * Position bias and Length Stats: No clear correlation exists between the length stats (task input length, task output length, and prompt length) and the position consistency or preference, implying minimal impact of lengths on position bias. * Length/Verbosity bias and self-enhancement bias: These biases are essentially position bias due to answer quality gaps. For length/verbosity bias, we have shown that there is no linear correlation between lengths and position bias; for self-enhancement bias, we don't have such a discovery across our experiment results and we also propose that the findings of such bias from previous studies can be attributed to the answer quality gap.
* Judge recommendation: Overall, gpt-4-0613 stands out as the top judge in terms of positional consistency, fairness, and stability, but is also the most expensive. On the other hand, certain judges perform adequately well or even better on certain tasks, such as gpt-3.5-turbo-0125 judging coding tasks. Also, since a positionally more consistent judge may not be fairer, there is a trade-off between positional consistency, positional preference, and cost-effectiveness when selecting the appropriate judging LLMs. Contributions Despite the limited number of models, tasks, and judging types, our proposed systematic framework is agnostic to these criteria and can be used with any LLM judge. Our work is particularly valuable for exploring the factors influencing position bias and their quantitative impacts. We are the first to emphasize the importance of understanding the position bias of LLM-as-a-Judge thoroughly before attempting to resolve it. We also propose a series of formal definitions and calculation formulas for terms related to position bias, such as repetition bias, positional fairness, positional preference score, primacy/recency-preference, answer quality gap, overall win rate, etc. We encourage further studies to continue the use of our terminology for clarity, as previous studies lack a uniform way of defining or explaining some of the concepts like the "positional preference". Our work of comprehensively understanding the position bias of LLM-as-a-Judge in pairwise comparative assessment makes the following unique contributions: * Concepts and Formulas: We define a series of systematic concepts and formulas for terminologies related to position bias, correcting previous studies and introducing new concepts. * Repetition Bias: We are the first to investigate repetition bias and its impact on position bias. Our validation reveals that LLM judges demonstrate high consistency across repetitions, highlighting the minimal impact of randomness on judgments. * Positional Fairness Analysis: We are the first to focus on and conduct a detailed quantitative analysis of the positional preference side of position bias, emphasizing the importance of the positional fairness of LLM judges alongside positional consistency. Our "Positional Preference Score" offers a standardized way to quantify positional preference more accurately and at scale compared to existing methods. See Appendix <ref> for more detail. * LLM Judge Agreement: We are the first to analyze the agreement between LLM judges rather than the traditional LLM-human agreement. On one hand, agreement among various judges — particularly those differing by family, context window, and maximum output length — provides valuable insights into LLM judges. On the other hand, our extensive research lacks sufficient human annotations. However, previous studies, especially the two benchmarks (MTBench and DevBench) we employed, have shown that GPT-4's evaluation results are highly consistent with human evaluators, which is one of the main reasons for including them in our research. Moreover, if all 9 judges, or 7 to 8 of them, reach a consensus, we can confidently assume the correctness of their judgment. While our study doesn't qualitatively analyze these judgments in detail, it is worth noting that this consensus supports the practical utility of our quantitative analysis.
* Answer Quality Gap: We are the first to comprehensively explore the relationship between the answer quality gap and position bias regarding both positional consistency and preference. We have also shown that the verbosity/length bias (i.e., preferring longer responses) and self-enhancement bias (preferring responses that are generated by the same model as the judge model) are essentially position biases driven by the answer quality gap. This further emphasizes the need to understand the position bias comprehensively through our work. * Judge Model Recommendations: We propose that while GPT-4 is generally the most optimal choice, especially in a completely new task that hasn't been experimented on, certain tasks allow cheaper judge models to perform equally well or even better. We provide cost-performance comparison plots at both the judge and judge-task levels to aid researchers in making cost-effective choices (see Appendix <ref> ). * Factors and Metrics: The factors affecting the position bias explored in our study can be categorized at the judge, model, and task level, including length statistics (task input length, task output length, prompt length), answer quality gap, two versions of win rate (consistent and overall), judge model family, context window, and maximum output length. The metrics of quantifying the position bias include repetition consistency, positional consistency, and positional fairness/preference. Social Benefits Our comprehensive understanding of position bias in LLM-as-a-Judge provides multiple benefits to the community: * Judge Model Recommendations: Offers LLM judge model recommendations for new tasks, such as the most positionally consistent, fair, or cost-effective model. * Interpretation Framework: Provides a systematic framework for interpreting the positional consistency and preference of LLM judges. * Benchmark Evaluation: Enhances the design and methodology of future benchmarks requiring subjective evaluation with a more interpretable and scalable evaluation approach. * Foundational Research: Lays the groundwork for future research to understand position bias across models, tasks, and judging types, enabling more effective debiasing strategies. * Broader Applications: Benefits the application of LLM evaluators across diverse fields, such as healthcare <cit.>, instruction following and prompting <cit.>, multimodal assessment <cit.>, and recommender systems <cit.>. §.§ Strength, Limitations, and Future Work Strength Our framework of comprehensively understanding position bias has the following strengths: * Extensive Experiments: We study the position bias of 9 LLM judges on 2 benchmarks across 22 tasks and around 40 answer-generating models, resulting in about 80,000 evaluation instances, which is more comprehensive than prior studies <cit.>. We also elaborate on 7 potentially influential factors in judge, model, and task levels for a comprehensive understanding. * Accurate Measurement: We first demonstrate the little impact of random variations of LLMs on position bias, validating the effectiveness of one-shot judgment and the valuable insight of using repetitional consistency as a metric for evaluating LLM evaluators more accurately. Moreover, we propose the positional preference score PF using a min-max scaled weighted difference approach to more accurately measure the positional fairness of LLM judges. Applying the overall win rate instead of a consistent win rate to quantify the answer quality gap is another contribution to measurement accuracy. 
* Concept Explanation: We correct and propose various concepts in our study, including positional preference, answer quality gap, repetitional consistency, etc., with clear definitions, detailed examples, and formal mathematical symbols/formulas. This benefits further research in terms of terminology usage and extensions. * Statistical Validation: We conduct statistical tests for all of our findings. For baseline comparison, we apply t-tests to validate the significance of performance disparities between LLMs and the baseline. We then supplement a Maximum Likelihood Estimation (MLE) analysis to verify the positional preferences of LLM judges on different benchmarks and tasks. Furthermore, the key influential/non-influential factors we find are verified to be statistically significant/non-significant via linear regression. The low R-square result of linear regression also stresses the complexity of position bias, as the positional consistency and preference scores are not linearly predictable. * Scalability and Ease-of-use: Our proposed framework is scalable to various judges, models, tasks, and judging types. It is also easy to use, facilitating a more comprehensive evaluation of LLM judges or their judgment results. Limitations Due to computational limits, our study can be potentially extended to provide more comprehensive insights. * Methodology: We only study the pairwise comparative assessment of LLM-as-a-Judge for its best consistency, ease of use, and scalability, but more types of judging can be explored, such as pairwise scoring, listwise ranking, etc. * Judge model: We only study 9 commercial LLM judges, including only one Gemini-Pro. Further explorations can include analysis of open-source and fine-tuned judge models using our proposed framework. Also, due to the lack of accessibility, we have not investigated the impact of the parameter size of LLMs on the position bias, but this can be easily solved for open-source and fine-tuned models. * Benchmark: We only study position bias on two benchmarks, MTBench and DevBench, which is still limited despite 22 tasks and 40 answer-generating models. Further studies can be extended to more benchmarks to have a broader understanding of position bias across benchmarks, tasks, and answer-generating models. Future Work We plan to extend our work in several ways: * Prompt Setting: In this study, we apply the exact default prompt settings of MTBench and DevBench. However, the positional order and prompt style of not just the model-generated responses but also system prompt components (e.g., agent role assignment, mode of judging, direct mention of not making biases) may also have an impact on the extent to which LLM judges exhibit position bias. * Debiasing Strategies: With a comprehensive understanding of position bias through our research, we can re-examine the effectiveness of prior debiasing strategies. For example, our MLE results showing varying preferences among LLMs may essentially explain why multi-agent discussion <cit.> can help mitigate position bias. We may also propose more methodologies and fine-tuned models for reducing position bias. §.§ Related Work §.§.§ LLM-as-a-Judge In recent years, Large Language Models (LLMs) have emerged as a transformative technology, garnering global attention and stimulating substantial research into their applications. For evaluative tasks, particularly subjective ones, human assessment is considered the gold standard due to its comprehensive and open-ended nature <cit.>.
However, it lacks scalability and reproducibility <cit.>. As a result, LLMs have increasingly been used as substitutes for human evaluators across various Natural Language Generation (NLG) domains and tasks <cit.>, including open-ended story generation <cit.>, adversarial attacks <cit.>, summarization <cit.>, machine translation <cit.>, and instruction following <cit.>. These LLM evaluators, known as LLM-as-a-Judge, have garnered significant interest within both academic and industrial circles <cit.>. As LLMs have made content "generation" significantly easier, the volume of generated responses has increased, making it impractical to rely solely on human evaluation. Therefore, cost-effective LLM judges are needed to assess these responses efficiently. LLM-as-a-Judge is typically employed in Q&A evaluation tasks, where the LLM judge is prompted to evaluate the quality of responses generated by other models answering the questions. In many of these tasks, LLM judges have shown a high level of agreement with human evaluators <cit.>, yet in some tasks, they are less effective, largely due to inherent biases <cit.>. Even in cases where agreement is high, judgments may still suffer from biases. When employing LLM-as-a-Judge, various types of judging are available, which can be categorized either by the scale or comparative method. From the scale perspective, LLM-as-a-Judge can involve pointwise, listwise, or pairwise assessment <cit.>. By comparative method, it can be either score-based or relation-based <cit.>. For example, pointwise scoring <cit.> lets the LLM judge score the response of one model to a question at a time based on some evaluating metrics. Pairwise/Listwise Scoring <cit.> prompts the LLM judges to score a pair/list of model-generated answers. Listwise Ranking is another relation-based assessment that, instead of giving a score, requires the LLM judge to rank a list of responses following some specified order (e.g., from best to worst). Pairwise comparative assessment <cit.>, on the other hand, asks the LLM judge to select the superior response between a given pair, usually conducted in a double-blind manner: the generating model remains unknown to the judge, and the judging model remains anonymous to the answer generator. Except for pointwise evaluation, all forms of LLM-as-a-Judge suffer—or are susceptible to suffering—from position bias due to the intrinsic nature of "position" and "comparison" within the prompt structure. §.§.§ Position Bias Previous studies have discovered and investigated multiple types of biases, such as position bias <cit.>, verbosity/length bias <cit.>, self-enhancement bias <cit.>, selection bias <cit.>, and contextual bias <cit.>. Among these, position bias stands out as particularly significant, permeating a wide array of tasks and affecting judge models, including open-source <cit.>, proprietary commercial ones <cit.>, and fine-tuned models <cit.>. To clarify, the aforementioned and following "position bias" refers to the concept within the context of LLM-as-a-Judge, meaning that LLM judge tends to favor responses based on their position in the prompt rather than their content. For example, in a pairwise comparative assessment scenario, if the LLM judge consistently selects the first response as superior even after switching the order of the two responses (same position, but different content), then a position bias occurs. 
Some other studies also use the term "position bias" <cit.>, but in this research, our interest lies solely in the position bias specific to LLM-as-a-Judge. Position bias is arguably the most prevalent and impactful bias among all. Chua et al. <cit.> note that their Bias-Augmented Consistency Training (BCT), an unsupervised fine-tuning scheme designed to promote consistent reasoning across prompts with and without biasing features, improves Chain-of-Thought <cit.> performance over self-training controls for all biases except position bias. Furthermore, Khan et al. <cit.> point out that LLM judges are less confident when exhibiting position bias, and addressing this bias is highly complex due to varying confidence levels across judges and tasks. Moreover, there is ongoing debate over whether selection bias originates from position bias. Pezeshkpour and Hruschka <cit.> argue that LLMs are sensitive to the ordering of options in Multiple Choice Questions (MCQs), confirming that position bias contributes to this sensitivity. However, Raina et al. <cit.> contest this view, asserting that selection bias stems less from position bias and more from token bias, which represents an inherent challenge for LLMs and contributes to poor robustness. All of these findings underscore the critical importance of addressing the issue of position bias when employing LLM-as-a-Judge. §.§.§ Deal with Position Bias Intuitively, position bias emerges because LLMs are sensitive to changes, especially positional changes, in prompts <cit.>. Also, LLM judges are vulnerable to attacks <cit.>. Parse data with position bias There are many ways to deal with position bias. The naive way is to exclude the inconsistent judgments (i.e., cases where the LLM judge gives a positionally biased judgment on a pair or list of model responses) <cit.>. While this ensures consistent and reliable remaining judgments, it does not resolve the fundamental problem. Moreover, if the LLM judge is highly biased, this method discards valuable evaluation instances and information, making it an ineffective and somewhat desperate measure. To take the positionally inconsistent evaluations into account, one may either take an average for scoring-based judging <cit.> or regard inconsistency as a "half-win" or "tie" for relation-based judging <cit.> after swapping the order of model responses in the prompt. For instance, in a pairwise scoring scenario, if model A receives a score of 8 when put in the first position and 4 when put in the second position, the overall score for it compared to model B would be 6; in the pairwise comparative case, if model A wins when put in the first position but then loses when put in the second position, it counts as a "tie" or "half-win" for both model A and model B. The latter way of swapping + tie is proposed because, intuitively, position bias is more likely to occur when the model responses being evaluated share a similar quality in terms of the evaluating metric. Our quantitative study also verifies this intuition, evidenced by the fact that the LLM judge's positional consistency (the percentage of positionally consistent pairs of judgment) is positively proportional to the answer quality gap between the model responses. To save the expense of running the experiments more than once using swapped order, in practice, many of these studies also support a "random-shuffle" option in their code settings such that the baseline model for comparison does not remain in a fixed position.
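The swap-and-aggregate convention described above can be sketched as follows (hypothetical function names; 'A'/'B' denote the judge's positional choices on the original and swapped prompts, and candidate 1 is the answer listed first in the original prompt):

import random

def half_win_tally(original: str, swapped: str) -> tuple:
    """Scores for (candidate 1, candidate 2) from one pair of judgments, counting a
    positionally inconsistent pair as a tie / half-win for both candidates."""
    if (original, swapped) == ("A", "B"):
        return 1.0, 0.0   # candidate 1 chosen in both orderings
    if (original, swapped) == ("B", "A"):
        return 0.0, 1.0   # candidate 2 chosen in both orderings
    return 0.5, 0.5       # consistent tie (C, C) or positionally inconsistent pair

def maybe_shuffle(answer_1: str, answer_2: str, random_shuffle: bool = True) -> tuple:
    """The 'random-shuffle' option: avoid keeping the baseline model in a fixed slot."""
    if random_shuffle and random.random() < 0.5:
        return answer_2, answer_1
    return answer_1, answer_2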
Solution attempts Due to the significance of position bias, more sophisticated and advanced approaches have emerged to solve position bias as well, including bootstrapping <cit.>, split-and-merge <cit.>, and multi-agent discussion <cit.>. However, these methods are either costly and time-consuming (e.g., multi-agent discussion and review to reach agreement) or ineffective. Furthermore, Li et al. <cit.> suggest that existing calibration techniques designed to reduce bias, including context decomposition <cit.>, order permutation <cit.>, ensembling <cit.>, and batch calibration <cit.>, are insufficient to align LLM evaluators, even with supervised data. Thus, position bias is a pervasive, significantly impactful, and challenging problem to solve. Understand before solving We propose that the existing methods are ineffective or unsatisfactory because position bias itself is not yet well understood. Although a variety of studies have researched this type of bias in LLM-as-a-Judge, a comprehensive understanding of what factors affect the position bias remains a gap. In other words, without clarity on the key factors and their quantitative impact on position bias, the efficacy of current and future methods remains uncertain. For instance, in the paper "Split and Merge: Aligning Position Biases in Large Language Model based Evaluations" <cit.>, the authors propose a PORTIA approach that addresses the position bias to a large extent, receiving an 80.99% Fixed Coverage (the percentage of positionally inconsistent original assessments that are later corrected by PORTIA) for GPT-4 in a relation-based evaluation on 8 MTBench answer pairs, improving consistency from 93.44% to 97.03%. However, the extraordinarily high consistency of the original evaluation may imply that the choice of answer pairs is biased in terms of the answer quality gap, a factor that our study proves to be significantly impactful. In other words, if the quality of the two model responses differs considerably, LLM judges will exhibit little position bias during judgment and hence are easy to calibrate. This example illustrates how lacking a comprehensive understanding of the factors affecting position bias can lead to overestimating or misjudging the effectiveness of methods proposed to resolve it and improve the performance of LLM-as-a-Judge. Missing analysis for position bias Besides, the positional preference aspect of position bias remains underexplored. Positional preference refers to the specific positions an LLM judge favors when position bias is evident. For instance, if an LLM judge consistently favors the responses that appear first in the prompt, it exhibits a preference for the first position. Our study formally defines the terms "preference on primacy" ("primacy-preferred") and "preference on recency" ("recency-preferred") to describe biases toward the first and second positions, respectively. While prior research includes some positional preference results <cit.>, none have conducted a detailed analysis. They focus primarily on positional consistency in relation to position bias, while we regard positional preference as an equally crucial component that needs to be examined, understood, and enhanced in future work. Therefore, we propose that mitigating the position bias of LLM-as-a-Judge requires a simultaneous improvement in both positional consistency and positional fairness.
A positionally consistent (high positional consistency) and positionally fair (nearly equal preference on primacy and recency when position bias occurs) LLM judge is undoubtedly preferable. However, a trade-off between consistency and fairness/preference often occurs in practice. Previous studies that have provided results on positional preference have all shown that even state-of-the-art (SOTA) LLM judges struggle with both positional inconsistency and clear positional preferences. Apart from that, previous studies have yet to examine how repetition influences position bias, a phenomenon we term "repetition bias" for LLM-as-a-Judge. MLLM-as-a-Judge <cit.> conducts repeated experiments but then takes an average/mode of the judgment only to make the evaluations more robust and reliable, without a focus on the repetitively inconsistent judgments. On the other hand, we investigate the repetition bias, measured by repetitional consistency, to determine whether position bias is solely influenced by the prompt's positional information and the judge's intrinsic properties or can be partly attributed to judgment randomness. If the judgments exhibit randomness over repetitions, then the position bias overlaps with repetition bias, complicating the issue and potentially invalidating previous findings. §.§.§ Summary of Prior Work To summarize, LLM-as-a-Judge has large potential for replacing human judges on a wide range of tasks, in particular subjective evaluation, owing to its cost-effectiveness, high agreement with human judgments, reproducibility, and scalability. However, it suffers from various biases, most notably position bias, which is prevalent across different evaluation tasks, models, and judgment types. Not only is it a type of bias that significantly hinders the improvement and promotion of applying LLM-as-a-Judge, but it is also difficult to solve due to its complexity and lack of understanding in the community. Existing solutions proposed to address this issue are either ineffective, costly, or uncertain in efficacy due to the unclear impact and nature of influencing factors. Additionally, the influence of repetition bias on position bias has not been thoroughly investigated. The positional preference side, reflecting the positional fairness of the LLM judge, also requires comprehensive analysis. In a nutshell, a comprehensive understanding of the position bias of LLM-as-a-Judge is crucial to validate existing and future approaches to address this important problem. §.§ Methodology §.§.§ LLM-as-a-Judge LLM-as-a-Judge <cit.> is a framework designed to assess question-answering tasks and evaluate model performance. Our study employed pairwise comparative assessment, the most effective judging method compared to other types of LLM-as-a-Judge usage. We selected MTBench <cit.> and DevBench <cit.> for our framework. Together, these benchmarks encompass 22 tasks and responses from around 40 models evaluated by the 9 chosen judges in this study. An additional reason for preferring these two benchmarks was their high agreement between human and LLM evaluators, ensuring the robustness of LLM-as-a-Judge and validating the practical effectiveness of our position bias research. MTBench consists of 80 multi-run questions across 8 tasks, including coding, math, extraction, roleplaying, STEM, humanities, reasoning, and writing.
Responses generated by 31 different models are provided via their GitHub repository [https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge], and some questions, such as objective math problems, include reference answers. DevBench, a comprehensive evaluation framework for the lifecycle of software development, applies pairwise comparative LLM-as-a-Judge to compare the model-generated software design files (UML class diagrams, UML sequence diagrams, and architecture design files) with a specified reference or baseline model. DevBench features responses from 10 models across 8 code repositories and 4 programming languages. The authors provided two evaluating metrics for each of the three software design tasks: general and faithfulness. The general metric aggregates several criteria, which we further expanded into detailed metrics for a more comprehensive understanding of position bias. We considered the same software design task with different evaluating metrics as different tasks in the context of LLM-as-a-Judge. After separating the detailed evaluating metrics, DevBench yields 14 tasks: * UML class (4): cohesion_and_decoupling, complexity, practicability, and faithfulness * UML sequence (5): cohesion_and_decoupling, interaction_complexity, practicability, uniformity_and_integration, and faithfulness * architecture design (5): conformance, design_and_coding, practicability, uniformity_and_integration, and faithfulness The 9 judges selected are grouped into families: * GPT-4-Turbo: gpt-4-0125-preview, gpt-4-1106-preview * GPT-4: gpt-4-0613 * GPT-3.5: gpt-3.5-turbo-0125, gpt-3.5-turbo-1106 * Claude-3: claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307 * Gemini-Pro: gemini-pro-1.0 Detailed task descriptions, questions, and a list of answer-generating models are specified in Appendix <ref>. Our framework was built upon DevBench's pairwise comparative assessment for its versatility and ease of use, and as an extension of MTBench. This framework uses the question, answer pairs (strictly from different models), an optional reference answer, an optional evaluation metric, and a system prompt to form a final prompt for the LLM judge. Benchmarks involving multi-run Q&A were consolidated into a single prompt, generalizing our framework to a variety of benchmark configurations. To focus LLM judges solely on response quality and not the generating models, the evaluation is conducted double-blind: the answering models remained unaware of the judge and the comparisons, and the judge models remained unaware of the sources of the responses. A reference model, denoted by the refm parameter, serves as the baseline for all evaluations. For MTBench, vicuna-13b-v1.3 was used as the reference model due to its moderate performance on MTBench's leaderboard, allowing for varying expected quality gaps between response pairs. For DevBench, instead of the default gpt-3.5-turbo-1106, we set the human-annotated software design files as the reference "model" (human as "model") to ensure baseline quality. Following the convention of MTBench and DevBench, we supported several Option modes of LLM-as-a-Judge with the argument nopt, which stands for the number of options. Option-2 mode requires choosing the better response between two options, with 'A' indicating the first response and 'B' the second. Option-3 mode introduces a third option 'C' for a tie. Option-4 mode defines 'C' as a "both good tie" and 'D' as a "both bad tie."
In this study, we apply Option-3 mode to MTBench and Option-2 mode to DevBench, aligning with their original configurations. Option modes are specified in the system prompt. §.§.§ Metrics for Evaluating Position bias Previous research has primarily focused on the positional consistency aspect of position bias. However, we advocate for a more comprehensive evaluation encompassing repetitional consistency, positional consistency, and positional fairness. Positional fairness is quantified using positional preference scores. Repetitional Consistency Since LLM judges may not provide the same responses given identical prompts, we investigated the consistency of their judgments across repetitions—referred to as "repetitional consistency." Ideally, given the same prompt input (system prompts, task questions, and model answers), a judge should deliver consistent judgments. If inconsistencies arise among repeated judgments, repetition bias occurs. Repetitional consistency measures the extent of repetition bias. Repetitional consistency is crucial for measuring position bias. Firstly, it measures internal randomness from the judge model's evaluation process. Without high repetitional consistency, it's unclear if observed position bias results from prompt-based positional information, the LLM's judging ability, or the judgments' random variations. Secondly, repetitional consistency itself measures the stability of the LLM as a judge. Our work fills a gap in previous research on this aspect of LLM-as-a-Judge. In this study, we explored the repetitional consistency of each judge on sampled MTBench and DevBench, randomly choosing 3 task questions (i.e., questions in MTBench and code repositories in DevBench) and 4 answer-generating models for each task. For DevBench, instead of using the metric-separated dataset that we used for position bias analysis, we employed the original tasks for sampling purposes. Each LLM was asked to judge the same query repeatedly, three times in total. This required 576 and 432 evaluation instances on MTBench and DevBench respectively for each judge, resulting in 9072 evaluation instances. Positional Consistency Following prior research on position bias, positional consistency was computed as the percentage of consistent judgment pairs out of the total. This metric straightforwardly measures position bias since it reflects the proportion of judgments affected by position. In this study, since MTBench uses an Option-3 mode and DevBench uses an Option-2 mode, consistent pairs of judgments include pairs such as {A, B}, {B, A}, and {C, C}. Inconsistent pairs, therefore, include combinations like {A, A}, {A, C}, {C, A}, {B, B}, {B, C}, and {C, B}. For a comprehensive evaluation, we calculated the positional consistency on each (Judge, Model, Task) unit, where Model represents the answer-generating model that is compared to the reference/baseline model. In such a unit, MTBench contains around 10 questions and DevBench consists of 8 code repositories. With the swapped prompt setting, this results in 20 and 16 evaluation instances respectively for each (Judge, Model, Task) unit. Then for the judge-level or the judge-task-level average, hundreds to thousands of data points are considered, ensuring accurate and reliable measurements. Positional Fairness Another straightforward feature of position bias is its preference direction. An ideal, positionally fair LLM judge should distribute their biased preferences evenly across different positions.
Therefore, we argue that positional fairness should be considered as important as positional consistency. In our pairwise comparative setup, an LLM judge can prefer either primacy ({A, A}, {A, C}, {C, A}) or recency ({B, B}, {B, C}, {C, B}). These terms replace "preference for the first/second position" in previous studies to avoid verbosity and ensure generalization for future work. Previous studies have explored two ways of quantifying positional preference. One way is to calculate the positional preference along with positional consistency, ensuring that consistency, preference, and error rate (i.e., the rate at which a choice fails to be extracted from the judgment due to format errors) sum to one <cit.>. For example, an LLM judge may have 70% consistency, 20% preference for primacy, 8% preference for recency, and 2% error rate. This method uses the primacy count pc or recency count rc divided by the total number of judgment pairs to quantify positional preference. The other approach treats positional preference independently, with a 50%-50% distribution representing positionally fair judgments <cit.>. This method employs the inconsistent primacy rate ipr and the inconsistent recency rate icr, where ipr + icr = 1. However, we propose a more advanced way to quantify positional preference compared to the existing approaches. We term this computed value the positional preference score, or PF. We first compute a raw positional preference score PF_raw as follows: PF_raw = (rc × icr) - (pc × ipr) This formula measures the difference in preference using a weighted average. In other words, rc, the number of recency-preferred judgment pairs, acts as a weight for the inconsistent recency rate icr. The expression rc × icr thus represents the raw_recency_score. Similarly, pc × ipr calculates the raw_primacy_score. We define PF_raw as the raw_recency_score minus the raw_primacy_score so that a positive value represents a recency preference and a negative value represents a primacy preference. A PF_raw of zero therefore indicates positionally fair judgments. Next, we normalize PF_raw to map it to [-1, 1] while retaining the preference-direction property, using min-max scaling: PF = (PF_raw - S^-_min) / (S^+_max - S^-_min) × 2 - 1 where S^-_min and S^+_max represent the minimum and maximum possible raw positional preference scores for the unit over which PF_raw is computed (in our study, the per-judge-per-task unit). These extreme scores correspond to cases where all judgment pairs are inconsistent and biased toward one side, resulting in zero positional consistency and an entirely unidirectional preference. The values of PF have the following interpretations: * PF = 1: positional consistency = 0 and entirely recency-preferred * PF in (0, 1): recency-preferred * PF = 0: positionally fair * PF in (-1, 0): primacy-preferred * PF = -1: positional consistency = 0 and entirely primacy-preferred A zero positional preference score indicates perfectly fair judgments, which can be achieved through 100% consistency or evenly distributed preferences. Advantages of PF Quantifying positional fairness in this way has the following advantages: * It incorporates both existing approaches: inconsistent recency/primacy rates and recency/primacy counts. * The weighted average approach avoids penalizing highly consistent judge models. If only one out of 100 pairs is inconsistent, the model shouldn't be heavily penalized. However, the pure inconsistent recency/primacy rates would classify this case as highly biased toward one side.
Our approach addresses this issue by multiplying percentages by counts, so that bias is measured more accurately without reversing its direction. * Our rescaling makes the positional preference score comparable across judges and tasks regardless of the number of data points involved in the calculation. This is superior to the approach that only considers primacy/recency counts. * Our rescaling approach is also superior to standard normalization methods, which use the absolute minimum and maximum raw fairness scores for normalization. In those cases, a zero fairness score does not necessarily map to zero, which deviates from the expected outcome. Additionally, a highly consistent set of judgments entirely biased toward primacy and a less consistent set entirely biased toward recency can both achieve the same numerical absolute value under such normalization, as they correspond to the minimum and maximum raw fairness scores. However, the less consistent set should practically be considered more biased because it contains more inconsistent judgment pairs, all skewed in the same direction. Another problem with standard normalization, and with the existing approach of summing consistency, preference, and error rates to 1, is that they are not scalable if the number of questions or data points per task varies, which biases comparisons across judges and tasks. * Our positional preference score addresses these challenges by using the minimum and maximum possible scores, instead of the absolute minimum/maximum raw fairness scores, as boundaries. The possible minimum and maximum cases occur only when positional consistency equals zero and all judgments are biased to one side of the preference. In this setup, a zero raw fairness score accurately maps to zero, while the absolute minimum and maximum raw scores can be quantitatively differentiated by comparing them against the minimum and maximum possible scores, which stand for -1 and 1, respectively. This ensures that the comparison is scalable, as the number of questions or data points for each task is accounted for and rescaled within the per-judge-per-task unit. §.§.§ Factors Affecting Position Bias We categorized three primary factors that may influence position bias when using LLM-as-a-Judge, aside from the system prompt settings: Judge-level, Model-level, and Task-level. This structure also informs our calculation of positional consistency and fairness scores within the (Judge, Model, Task) unit. Judge-level factors These factors encompass the internal properties of the selected LLM judges that might affect position bias. For instance, the number of parameters of the LLM is crucial, as larger models usually exhibit superior performance. In this study, we did not consider parameter size due to the lack of information for some judge models, especially across different versions. However, we agree that the parameter size of LLM evaluators should be considered when accessible, and it will likely be a dominating factor when measuring its impact on position bias. Instead, we quantitatively consider the context window and maximum output length of the 9 judges, as these are all publicly accessible. We also explore how familial properties impact position bias, because different companies or research teams train their LLMs differently, and different model versions can exhibit varied performance. Model-level factors In the LLM-as-a-Judge context, we distinguish between judge models and answer-generating models.
Since the former are always termed "Judge," we call the latter "Model" for simplicity. What we are interested in at the Model level is neither the answer-generating models' names nor their internal properties (as considered for the judge models), but the answer quality gap. The reasons are as follows. * Knowing the exact Model names will not help in understanding position bias, since the evaluation is double-blind. * Analyzing internal properties is costly, time-consuming, and ineffective, as each comparison must be exhaustively examined without yielding broadly applicable insights, and it adds unnecessary complexity on top of the Judge-level analysis. * The sheer number of models makes studying exact Model names unscalable to unknown Judges, Models, and Tasks. We propose instead that the answer quality gap essentially captures the relevant variation at the Model level. It is interpretable, comparable, and scalable. Conceptually, it is reasonable because similar-quality pairs are challenging to assess, whereas pairs with larger quality disparities are easier to judge. Similarly, for pairwise comparative LLM-as-a-Judge, we expect LLM judges to exhibit less position bias when the qualities of the given pair of Model-generated answers differ considerably. This measure does not depend on the exact Model name, making it applicable across Judges, Models, and Tasks, while still capturing a Model's internal properties because they are the essential cause of variation in answer quality. The formula for calculating the overall win rate also ensures that a) the win rates for the Model and the reference/baseline model sum to 1 and b) the baseline win rate that represents no quality gap is 0.5. This is because inconsistencies are mathematically treated as ties, and ties are counted as a "half-win" for both models. We calculate the absolute difference between the overall win rate and the baseline win rate of 0.5 because the focus is not on which model wins but on quantifying the quality gap between responses. An added advantage of this direction-agnostic measure is that it provides a simple yet comprehensive metric that encapsulates all the necessary information at the Model level. In a nutshell, the answer quality gap calculated from the overall win rate, which incorporates positionally inconsistent judgments, provides an effective measurement of Model-level information, which we use to explore its impact on position bias. Task-level factors At the task level, factors affecting position bias include not only task categorization but also various length statistics such as task input length, task output length, and prompt length [ To standardize length measurements, we use the Python function len() instead of counting tokens because tokenization methods vary across LLMs.]. * Task Input Length: The length of the task question(s) within the prompt. * Task Output Length: The cumulative length of responses from both models. * Prompt Length: The entire length of the input data, including system prompts, task input, task output, reference answers, and evaluation metrics (a brief computational sketch of these statistics is given below). We did not consider the judge output length, because we prompted the LLM judges to give a very short "choice + reasoning" judgment. Besides position bias, the length statistics are related to the verbosity/length bias. The verbosity/length bias describes a phenomenon in which the LLM judge favors a longer response.
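As a brief illustration of these statistics (only the use of Python's len() mirrors the convention stated in the footnote; the field names are ours):

# Sketch: character-level length statistics per evaluation instance, using len().
def length_statistics(question, answer_1, answer_2, full_prompt):
    return {
        "task_input_length": len(question),                    # task question(s) only
        "task_output_length": len(answer_1) + len(answer_2),   # both models' responses
        "prompt_length": len(full_prompt),                     # system prompt + all inputs/outputs
    }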
In this study, we considered the impact of length on position bias, which further explores the intrinsic relationship between position bias and verbosity/length bias. While the task input length, or the question length, is purely a Task-level factor, the task output length and prompt length may also encode some Model-level information since, for specific questions, a stronger model with more parameters will likely generate longer answers. Although this argument is valid, we point out that the range of the output length is mainly determined by the task, and particularly the question, whereas Model-level differences account for relatively small fluctuations. For example, solving a simple math equation generally yields a shorter response than writing a travel journal regardless of Model ability. Therefore, despite being marked as both Model-level and Task-level factors in Table <ref>, task output and prompt lengths are more influenced by the task itself than by Model-level variations. §.§.§ What a "Perfect" Judge Would Look Like A "perfect" or positionally optimal judge model should satisfy: * 100% Repetitional Consistency: This ensures little to no repetition bias, meaning the judge consistently makes the same decisions across repeated evaluations. * 100% Positional Consistency: Ideally, the judge should achieve complete positional consistency. In practice, however, surpassing 70% is generally considered good, while exceeding 80% is regarded as highly consistent. Even human evaluators struggle to reach 100% positional consistency or agreement across diverse tasks. * Balanced Positional Preference: When position bias occurs, the model should ideally exhibit an even distribution of primacy and recency preferences, resulting in a positional preference score close to zero. In practical applications, users may prioritize either positional consistency or positional fairness based on their specific requirements. Sometimes, achieving the highest positional consistency is paramount, and a strong bias toward one side may not affect the evaluation if inconsistent judgments are treated as ties. Other times, users may accept slightly lower positional consistency to ensure significantly fairer judgments. In addition to these factors, the price-performance ratio or cost-effectiveness also plays a role in selecting the best LLM judge for specific needs. There is always a trade-off between positional consistency, positional fairness, and cost. Therefore, while the concept of a "perfect" model provides a benchmark, it is often more practical to select the best model based on particular needs rather than relying on an arbitrary notion of "perfection." To offer judge selection support, we provide a cost-effectiveness analysis of our studied judge models on the two benchmarks in Appendix <ref>. §.§ Experiment Settings §.§.§ Pairwise Comparative Assessment In this study, we propose a framework for comprehensively understanding position bias specifically in pairwise comparative assessment by LLM judges. A pairwise comparative assessment presents LLM judges with a pair of model-generated answers to a given question and asks them to choose the better one of the two according to specific evaluation metrics, sometimes with a reference answer. As with prior research, we utilize Chain-of-Thought <cit.> prompting, in which the LLM judge not only selects the better response (or declares a tie) but also provides reasons for its choice in a structured format.
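A minimal sketch of the corresponding post-processing is shown below; the double-bracket verdict tag is an assumption about the structured output format rather than a guarantee of the exact prompts used.

import re

# Sketch: extract the final choice from a "reasoning + verdict" judgment.
# We assume the judge is instructed to wrap its verdict in double brackets, e.g. "[[A]]".
def extract_choice(judgment_text, valid=("A", "B", "C")):
    match = re.search(r"\[\[([A-D])\]\]", judgment_text)
    if match and match.group(1) in valid:
        return match.group(1)
    return None   # unparsable output counts toward the error rate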
Position bias then occurs when the judgment is {A, A} or {B, B} (where A means response 1 is judged better than response 2 and B means the opposite) across the original and swapped orderings of responses in the prompt. An {A, A} is considered "primacy-preferred" and a {B, B} is similarly regarded as "recency-preferred". The reasons for focusing on pairwise comparative assessment are as follows. First, the "pairwise" paradigm is in itself a general and classic idea that has impacted a wide range of fields <cit.>. Second, according to <cit.>, the discrepancy between LLM and human evaluative standards is reduced when performing pairwise comparisons compared to rating by scores, and "unlike calibrating the score-based LLM evaluators, which requires external information on the distribution of human preference prior, pairwise comparisons can fully leverage on a uniform human prior due to the random nature of pairwise sample selection". The MTBench <cit.> results also show that pairwise comparative assessment exceeds other types of judging in terms of positional consistency. Pairwise comparative assessment not only performs better than prompt scoring but also enables moderate-size open-source LLMs to achieve near-SOTA performance across a range of NLG tasks for a diverse set of attributes <cit.>. Furthermore, the limitations of pairwise comparison, such as disregarding the transitivity assumption (mathematical transitivity, not linguistic transitivity: the property that if A is preferred to B and B is preferred to C, then A is preferred to C) <cit.> and the potentially intractable evaluation procedure <cit.>, can be further addressed by advanced or corresponding ranking algorithms <cit.>. Hence, pairwise comparative assessment is the optimal choice for LLM-as-a-Judge in terms of positional consistency compared to other types of judging, and its shortcomings can be addressed by subsequent ranking algorithms that extend it to "listwise" evaluation for generalization. Although various open-source and fine-tuned judge models can be employed, we focus only on 9 closed-source, black-box commercial models: 5 GPTs <cit.>, 3 Claude-3s <cit.>, and Gemini-Pro <cit.>. A primary reason is that GPT-4 has consistently demonstrated its superiority as a judge model across various studies, outperforming both open-source and fine-tuned models despite its high cost. Claude-3 and Gemini-Pro, although not thoroughly tested due to their recent release, are anticipated to perform at a state-of-the-art level. To further investigate how familial properties (i.e., company and version) affect position bias, we include 5 GPTs and 3 Claude-3s for a comprehensive evaluation. Our research comprehensively explores the performance of the 9 judges on two benchmarks, MTBench <cit.> and DevBench <cit.>, which together encompass 22 tasks and responses from around 40 different models (some repeated across benchmarks). In terms of (Judge, Model, Task) units, where Model is the answer-generating model compared to the specified baseline model, our experiments include around 80,000 evaluation instances, costing approximately $2,500. §.§.§ Prompt Settings We follow the original prompt settings of MTBench and DevBench in our study of pairwise comparative LLM-as-a-Judge. Though written differently, these prompts all share the same key components: * A system prompt explaining the judging task and the role the LLM should play. * Emphasized "should"s and "shouldn't"s.
* A prompt structure with placeholders for specific questions and model answers * A specified output format for later judgment extraction * Chain-of-Thought <cit.> prompts requiring the LLM judge to provide reasons for its judgment The detailed prompt settings are specified below. §.§.§ Example of Position Bias Here we provide an example evaluation pair on MTBench's Question 117 (math) using gpt-4-0613 (Figure <ref>). This is typical because: * It shows how the LLM judge favors position regardless of content, as gpt-4-0613 chooses 'A' for both the original and the swapped-order queries. * It shows how Option-3 mode does not necessarily trigger LLMs to choose "tie" for similar-quality answer pairs. In this case, both models answer both questions incorrectly, but gpt-4-0613 still attempts to choose the better one, causing position bias. * It demonstrates the reasonability of considering "inconsistency-as-tie" and using the overall win rate to quantify the answer quality gap. In this example, these two models should be considered tied, as they are both incorrect on both answers. In such a similar-quality scenario, the LLM judge exhibits position bias. §.§.§ Judges, Models, and Tasks In this study, we choose 5 GPTs, 3 Claude-3s, and gemini-pro-1.0 as the judges. The 9 judges can hence be grouped into families. * GPT-4-Turbo: gpt-4-0125-preview, gpt-4-1106-preview * GPT-4: gpt-4-0613 * GPT-3.5: gpt-3.5-turbo-0125, gpt-3.5-turbo-1106 * Claude-3: claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307 * Gemini: gemini-pro-1.0 The reference (or baseline) models are vicuna-13b-v1.3 for MTBench and human for DevBench. They are chosen to ensure a baseline quality of responses and an expectedly wide spread of quality gaps across evaluations. The other models that are compared to the reference models, termed Model in our context, are listed as follows. * MTBench (30): alpaca-13b, baize-v2-13b, chatglm-6b, claude-instant-v1, claude-v1, dolly-v2-12b, falcon-40b-instruct, fastchat-t5-3b, gpt-3.5-turbo, gpt-4, gpt4all-13b-snoozy, guanaco-33b, guanaco-65b, h2ogpt-oasst-open-llama-13b, koala-13b, llama-13b, mpt-30b-chat, mpt-30b-instruct, mpt-7b-chat, nous-hermes-13b, oasst-sft-4-pythia-12b, oasst-sft-7-llama-30b, palm-2-chat-bison-001, rwkv-4-raven-14b, stablelm-tuned-alpha-7b, tulu-30b, vicuna-33b-v1.3, vicuna-7b-v1.3, wizardlm-13b, wizardlm-30b * DevBench (10): codellama-7b-instruct, codellama-13b-instruct, codellama-34b-instruct, deepseek-coder-1.3b-instruct, deepseek-coder-6.7b-instruct, deepseek-coder-33b-instruct, gpt-3.5-turbo-1106, gpt-4-0125-preview, gpt-4-0613, gpt-4-1106-preview The model names are exactly what MTBench <cit.> and DevBench <cit.> use in their studies. That is why for GPTs, DevBench specifies the exact version (e.g., gpt-4-0613) while MTBench doesn't (e.g., gpt-4). For tasks, we also follow the original studies of these two benchmarks, except that for DevBench we separate the general metrics into detailed ones and consider them as different tasks. In this sense, our experiments cover the following tasks to provide a comprehensive study of the position bias of LLM-as-a-Judge: * MTBench (8): coding, extraction, humanities, math, reasoning, roleplay, stem, and writing.
* Devbench (14): * UML class (4): cohesion_and_decoupling, complexity, practicability, and faithfulness * UML sequence (5): cohesion_and_decoupling, interaction_complexity, practicability, uniformity_and_integration, and faithfulness * architecture design (5): conformance, design_and_coding, practicability, uniformity_and_integration, and faithfulness §.§.§ Task Description and Question Examples MTBench consists of 80 multi-run questions across 8 tasks. The question examples are as follows. The original question and follow-up question are separated by ";". * Writing * Question ID 81: Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions; Rewrite your previous response. Start every sentence with the letter A. * Question ID 82: Draft a professional email seeking your supervisor's feedback on the "Quarterly Financial Report" you prepared; Take a moment to evaluate and critique your own response. * Question ID 83: Imagine you are writing a blog post comparing two popular smartphone models; Take your previous response and rephrase it as a limerick. * Question ID 84: Write a persuasive email to convince your introverted friend, who dislikes public speaking, to volunteer as a guest speaker at a local event; Can you rephrase your previous answer and incorporate a metaphor or simile in each sentence? * Question ID 85: Describe a vivid and unique character, using strong imagery and creative language; Revise your previous response and incorporate an allusion to a famous work of literature or historical event in each sentence. * Question ID 86: Write a descriptive paragraph about a bustling marketplace; Rework your previous response. Begin each sentence with the subsequent letter of the alphabet, commencing from B. * Question ID 87: Could you write a captivating short story beginning with the sentence: "The old abandoned house at the end of the street held a secret that no one had ever discovered"; Now, do the same task again but only use four-word sentences. * Question ID 88: Craft an intriguing opening paragraph for a fictional short story involving time travel; Summarize the story with three bullet points using only nouns and adjectives, without verbs. * Question ID 89: Help me construct a catchy, yet scientifically accurate, headline for an article on the latest discovery in renewable bio-energy; Alter your previous response. Make the following adjustments to the 2nd option: 1. Make the tone sound casual 2. Embed an advertisement for a company called "FlexPower" 3. Fewer than 10 words. * Question ID 90: Edit the following paragraph to correct any grammatical errors; Modify your earlier reply and eliminate the use of gendered pronouns. * Roleplay * Question ID 91: Pretend yourself to be Elon Musk in all the following conversations; How do you like dancing? Can you teach me? * Question ID 92: Embrace the role of Sheldon from "The Big Bang Theory" in a conversation; Let’s grab dinner in town. Would you like to take the bus with me? * Question ID 93: Imagine yourself as a doctor tasked with devising innovative remedies for various ailments; But I have been pregnant for 20 weeks and I am allergic to many medicines. * Question ID 94: Please take on the role of a relationship coach; My spouse has conducted domestic violence on me but I do not want to call the police to put her in legally troubled situations. 
* Question ID 95: Assume the role of an English translator, tasked with correcting and enhancing spelling and language; Ich verstehe nur Bahnhof. * Question ID 96: Now you are a machine learning engineer. Explain complex machine learning concepts in a simplified manner; Is this true? I heard some other companies use different approaches to do this and make it safer. * Question ID 97: Act as a math teacher, explaining mathematical equations or concepts; What are the differences between Riemannian geometry and Euclidean geometry? * Question ID 98: Embody the persona of Tony Stark from "Iron Man" throughout this conversation; What do you think about GPT-4 as a replacement for your JARVIS? * Question ID 99: Suppose you are a mathematician and poet and write proofs as poems; Prove the Pythagorean theorem. * Question ID 100: Picture yourself as a 100-year-old tree trying to stop deforesters; Come up with a proposal to convince the deforesters to stop cutting you down and other trees. * Reasoning * Question ID 101: Imagine you are participating in a race with a group of people; If the "second person" is changed to "last person," what would the answer be? * Question ID 102: You can see a beautiful red house to your left and a hypnotic greenhouse to your right; Does the original question contain any clues to definitively determine the location of the White House? * Question ID 103: Thomas is very healthy but has to go to the hospital every day; Can you explain why the above question is interesting? * Question ID 104: David has three sisters. Each of them has one brother; If we change the previous question and assume that each sister of David has two brothers, how many brothers would David have? * Question ID 105: At a small company, parking spaces are reserved for the top executives; List car colors in order from last to first. * Question ID 106: Each problem consists of three statements, and the third statement is based on the first two; If the third statement is true, is the first statement true, false, or uncertain? Please explain. * Question ID 107: A is the father of B, and B is the father of C; Building on the previous question, what's the relationship between A and Z in terms of generations and familial relationship? * Question ID 108: Which word does not belong with the others?; Could you replace it with a word that belongs with the others? * Question ID 109: Suresh was standing facing a pole, and the shadow fell exactly to his right; To which direction was Suresh facing? How do you solve this? * Question ID 110: Parents have complained to the principal about bullying during recess; If the aides confront the group of girls from situation (c) and they deny bullying, what evidence should the aides look for? * Math * Question ID 111: The vertices of a triangle are at specific points; What's the area of the circle circumscribing the triangle? * Question ID 112: A tech startup invests $8000 in software development in the first year; If the startup maintains the same strategy for the third year, how much will they invest in the third year? * Question ID 113: In a survey conducted at a local high school, preferences for a new school color were measured; If we select a student who liked green, what's the probability that he or she would dislike both colors? * Question ID 114: When rolling two dice, what is the probability that you roll a total number that is at least 3?; What's the probability that you roll a number which is even or at least 3? 
* Question ID 115: Some people got on a bus at the terminal; If the ticket is $2 per person, how much is the total money earned by the bus? * Question ID 116: x+y = 4z and x × y = 4z^2, express x-y in z; Express z-x in y. * Question ID 117: How many integers are in the solution of the inequality |x+5| < 10?; What about |x+10| < 5? * Question ID 118: When a number is divided by 10, the remainder is 4; What about when twice the number is divided by 5? * Question ID 119: Benjamin went to a bookstore and purchased a variety of books; Suppose Benjamin decides to sell each of these books at a 25 * Question ID 120: Given f(x) = 4x^3 - 9x - 14, find the value of f(2); Find x such that f(x) = 0. * Coding * Question ID 121: Develop a Python program that reads all the text files under a directory and returns the top 5 words with the most occurrences; Can you parallelize it? * Question ID 122: Write a C++ program to find the nth Fibonacci number using recursion; Now we define a sequence of numbers in which each number is the sum of the three preceding ones. The first three numbers are 0, -1, -1. Write a program to find the nth number. * Question ID 123: Write a simple website in HTML. When a user clicks the button, it shows a random joke from a list of 4 jokes; How to use CSS to change the color of jokes to red? * Question ID 124: Identify any bug in this Python function to find the length of the longest common subsequence of two input strings; what about this one? * Question ID 125: Write a function to find the highest common ancestor (not LCA) of two nodes in a binary tree; What if it is not a binary tree? * Question ID 126: Implement a function to find the median of two sorted arrays of different sizes; Does there exist an implementation with better time complexity? * Question ID 127: Write a function to find the majority element in a given integer array using the Boyer-Moore Voting Algorithm; How about finding the top-2 most occurring elements? * Question ID 128: A binary tree is full if all its vertices have either zero or two children. Find B_n; What if the problem changed from a binary tree to a ternary tree? * Question ID 129: You are given two sorted lists of size m and n. Find the k^th smallest element in their union; Does there exist an algorithm with better time complexity? If so, implement it. * Question ID 130: Implement a program to find the common elements in two arrays without using any extra data structures; Now the constraint of not using extra data structure is removed, implement one with the best time complexity. * Extraction * Question ID 131: Evaluate the following movie reviews on a scale of 1 to 5; Update your previous reply by including the release date as part of the JSON content. * Question ID 132: Analyze the following questions and assign them to one of these categories: Literature, History, Science, and Art; Amend your earlier answer by mentioning a person who is most relevant to each point. * Question ID 133: Extract the name of the book, the author, the main character, and the year of publication; Reformulate your earlier reply, output it in JSON format, and only include books published after 1980. * Question ID 134: Identify the company with the highest profit in 2021 and provide its CEO's name; Which company had the highest profit margin (profit/revenue ratio)? * Question ID 135: Identify the countries, their capitals, and the languages spoken in the given sentences; Come up with 3 similar examples in the YAML format. 
* Question ID 136: Count how many times the words "Amazon," "river," and "you" appear; Please repeat the same task using the words "the," "and," and "to." * Question ID 137: Identify the named entities (people, organizations, locations) mentioned in the given news article; Now make the JSON object shorter by replacing each value with its first letter. * Question ID 138: Analyze customer reviews for three different smartphones; Can you change the ratings from numbers to letters? Capital letters MUST be used when writing the names of phones. * Question ID 139: Extract all unique variable names from each equation given in a set of complex equations; Please rearrange the equations and use "a," "b," "c," "d," etc. as variables. * Question ID 140: Extract the highest and lowest closing prices for each month in the year 2022 from the given stock records; Do the same task again with the JSON format and round all numbers in your response to the nearest integers. * STEM * Question ID 141: In quantum physics, what is superposition, and how does it relate to quantum entanglement?; What assumptions have you made in your response? Are they valid? * Question ID 142: Consider a satellite that is in a circular orbit around the Earth. The speed of the satellite decreases; What are some corner cases or edge cases in your solution? How do you handle them? * Question ID 143: Photosynthesis is a vital process for life on Earth. Outline the two main stages of photosynthesis; How much energy can a tree produce through photosynthesis in its lifetime? Provide an estimate. * Question ID 144: What is the central dogma of molecular biology? What processes are involved?; Identify and fix one incorrect fact in your previous response. * Question ID 145: Write out the balanced chemical equation for the reaction between solid calcium carbonate and hydrochloric acid; How can we reverse this process? * Question ID 146: Explain the differences between exothermic and endothermic reactions; Can a process involve both reactions? List one. * Question ID 147: The city of Vega intends to build a bridge spanning the Vegona River; What are the key disadvantages or flaws of your solution? Please perform calculations and use numbers to illustrate them. * Question ID 148: Design a solar-powered water heating system for a residential building; If the system is intended for a building with a capacity of 100 individuals, what would be the estimated budget? * Question ID 149: Describe the concept of machine learning; In your last example of reinforcement learning, can we use supervised learning to solve it? * Question ID 150: How have the Alps and Rhine River influenced settlement and agriculture in Western Europe?; How could you design a concrete but simple experiment to validate the first impact? * Humanities * Question ID 151: Provide insights into the correlation between economic indicators such as GDP, inflation, and unemployment rates; Now, explain them again like I'm five. * Question ID 152: How do the stages of life shape our understanding of time and mortality?; Write an allegorical poem that illustrates the above. * Question ID 153: Discuss antitrust laws and their impact on market competition. Compare the antitrust laws in the U.S. and China; Pick one case study and explain it in detail. * Question ID 154: Create a lesson plan that integrates drama, mime, or theater techniques into a history class; Provide more details for Day 1 and include three homework questions. 
* Question ID 155: Share ideas for adapting art masterpieces into interactive experiences for children; Write a concrete plan for your second example. Include budget estimates. * Question ID 156: Explain what's base rate fallacy and give five specific examples of how politicians use it for campaigns; Provide a detailed plan for an election campaign using the first example. * Question ID 157: Describe five key principles in evaluating an argument in analytical writing; Write a response in which you discuss what specific evidence is needed to evaluate the argument. * Question ID 158: Which methods did Socrates employ to challenge the prevailing thoughts of his time?; Generate a conversation between Socrates and Bill Gates to debate generative AI for education. * Question ID 159: What are some business etiquette norms when doing business in Japan?; Create a video script for training new employees of a car wash business in Japan. * Question ID 160: Suggest five award-winning documentary films with brief background descriptions for aspiring filmmakers to study; With the spirit of the first film, craft a succinct and persuasive pitch for a film about overcoming adversity. DevBench consists of three software design files that will be created by Models and judged with different evaluating metrics. All of them require a Product Requirement Document (PRD) of the specified code repository or project as input. The UML class will be directly created based on the PRD. Then, the UML class diagram, along with the PRD, will be given as input to the Model to generate the UML sequence diagram and Architecture Design. To ensure the quality of generated software design files, the DevBench authors tested and created long prompts for LLMs to follow specific requirements. This makes the question length, or task input length in the context of our study, significantly longer than MTBench's tasks. Also, the UML diagrams and Architecture Design file tree, though written by text, could be perceived as various modalities. Therefore, choosing DevBench along with MTBench largely increases the versatility and comprehensiveness of our findings. The detailed questions for the three software design files are specified below: §.§ Repetitional Consistency To investigate position bias, it's crucial to rule out the impact of randomness in LLM judgments. To this end, we measure repetition bias to determine whether the LLM judge provides consistent evaluations over multiple runs. A higher resulting repetitional consistency represents less repetition bias and hence fewer random variations in LLM judgments. We study the repetitional consistency of the 9 chosen LLM judges on a sampled dataset of MTBench and DevBench, randomly selecting 3 questions and 4 answer-generating models for each task. For simplicity, we use the original tasks rather than our metric-separated ones for DevBench at this stage. Hence, in total, 9 judge models are evaluated on 14 tasks (8 MTBench + 6 DevBench) with swapped-order settings in 3 repetitive runs, where each task includes 3 questions and 4 answer-generating models, resulting in 9072 evaluation instances. Specifically for each judge, repetitional consistency is calculated based on 576 and 432 evaluation instances for MTBench and DevBench respectively. 
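A minimal sketch of one way this could be aggregated is given below; averaging the per-query agreement with the modal judgment is our reading of the definition above, not necessarily the paper's exact aggregation, and the function name is illustrative.

from collections import Counter

# Sketch: repetitional consistency over repeated runs of identical queries.
def repetitional_consistency(repeated_judgments):
    # repeated_judgments: one list of extracted choices per query, e.g. [['A', 'A', 'A'], ['A', 'B', 'A']]
    per_query = []
    for runs in repeated_judgments:
        modal_count = Counter(runs).most_common(1)[0][1]
        per_query.append(modal_count / len(runs))   # share of runs agreeing with the mode
    return sum(per_query) / len(per_query)

# repetitional_consistency([['A', 'A', 'A'], ['A', 'B', 'A']]) -> (1.0 + 2/3) / 2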
The sample selections are specified below: MTBench: * Model: stablelm-tuned-alpha-7b, falcon-40b-instruct, claude-v1, llama-13b * Questions for each Task: * Writing: 82, 86, 89 * Roleplay: 92, 96, 99 * Reasoning: 102, 106, 109 * Math: 112, 116, 119 * Coding: 122, 126, 129 * Extraction: 132, 136, 139 * STEM: 142, 146, 149 * Humanities: 152, 156, 159 DevBench: * Models: gpt-3.5-turbo-1106, gpt-4-1106-preview, deepseek-coder-1.3b-instruct, codellama-13b-instruct * Repositories and Their Tasks: * UML Sequence: * General: xlsx2csv, redis-cache, idcenter * Faithfulness: xlsx2csv, redis-cache, idcenter * Architecture Design: * General: xlsx2csv, redis-cache, idcenter * Faithfulness: xlsx2csv, redis-cache, idcenter * UML Class: * General: people management, Actor relationship game, idcenter * Faithfulness: people management, Actor relationship game, idcenter As shown in Table <ref>, though varied, all judge models demonstrated high consistency across repetitions. Gemini-pro-1.0 and claude-3-haiku are the two models that exhibit relatively low repetitional consistency but still have a high numerical value above 0.85. Another main discovery is that the standard deviation decreases for higher repetitional consistency, implying that more repetitively consistent judges are also more stable. These results indicate minimal repetition bias of LLM judges, meaning that position bias is unlikely to arise due to random variations. This validation is crucial because it confirms that the observed position bias predominantly originates from other factors such as intrinsic properties of the judge models, tasks, and positional information in the prompt rather than the randomness of LLM output. Furthermore, it demonstrates that one-shot judgments can adequately represent the judging abilities of LLMs. In other words, we do not necessarily need to obtain judgments from multiple runs and take the mode to ensure robustness and reliability - one shot is enough. Additionally, high repetitional consistency reinforces the reliability of LLM judges for further studies. If the models maintained such consistency under controlled conditions, future assessments using the same judges should yield similarly stable results, allowing researchers to better understand and interpret the factors influencing position bias. In conclusion, the repetitional consistency analysis lays a solid foundation for investigating other aspects of position bias. By eliminating the impact of random variations, subsequent analyses can focus on factors like positional consistency and fairness with greater confidence in the stability of these LLM judgments across evaluation instances. §.§ Judge Agreement Since our study extended the two benchmarks to a more comprehensive perspective, we lack sufficient human annotations to compute LLM-human agreement directly. However, the original MTBench and DevBench studies demonstrated a high level of LLM-human agreement on their benchmarks. In particular, GPT-4 achieved over 80% agreement on MTBench, reaching the same level as human-human agreement <cit.>. DevBench presented a different scenario, with LLM judges using Option-2 mode and applying "inconsistency-as-tie," while humans are given Option-3 mode (allowing them to choose "tie"). On DevBench, GPT4-Turbo achieved 60.4% and 51.6% human agreement on the general and faithfulness metrics, respectively, in the w/tie setting, and achieved 79.2% and 83.2% in the w/o tie setting <cit.>. 
Thus, although our study did not analyze LLM-human agreement directly, the robustness and effectiveness of our methodology are ensured by the results of these benchmarks. The absence of human-annotated judgments motivated us to explore an often overlooked aspect of LLM-as-a-Judge: LLM-LLM judge agreement (or LLM mutual agreement). Analyzing LLM-LLM judge agreement can provide valuable insights into evaluating LLM-as-a-Judge and understanding position bias in the following ways: * Scalability and Cost-Effectiveness: If certain LLM judge models, like GPT-4, have demonstrated high agreement with human evaluations across tasks, comparing new LLM judges to these models rather than to human judgments can effectively validate the new judges' performance. * Enhanced Confidence: If we aggregate judgments from highly mutually agreed-upon and strong LLMs, taking the mode of their collective judgment increases confidence in the results. The likelihood of making a biased judgment decreases as the number of convincing judgments rises. * Familial Properties: LLM-LLM agreement helps reveal the familial properties of LLMs. Models from the same company or version are expected to agree with each other. Moreover, models with similar performance on other tasks are expected to perform similarly as judges, which can be confirmed through LLM-LLM judge agreement analysis. To this end, we analyzed the LLM-LLM judge agreement in terms of Agreement Heatmap and Cumulative Disagreement Analysis. §.§.§ Agreement Heatmap According to Figure <ref>, the "familial property" is apparent for both w/ C and w/o C settings (the following stats are reported in "w/ c / w/o C" format): * The two GPT-4-Turbos have the highest Judge Agreement with each other, reaching 79.49%/ 88.67%. GPT-4 also highly agrees with them, reaching around 75%/85%. * The two GPT-3.5s share a unique family, achieving 80.80%/82.47% Judge Agreement while only reaching about 65%/70% agreement with other judges. * Claude-3-opus and claude-3-sonnet are in a family, evidenced by a 77.5%/81.25% agreement, whereas claude-3-haiku is not in the same family as them, having only around 66%/70% agreement. However, if looking at claude-3-haiku individually, its Judge Agreement is highest when calculated with the other two Claude-3s compared to others, showing that it can be regarded as a "distant relative" to claude-3-opus and claude-3-sonnet. * Claude-3-opus, among others, have the highest Judge Agreement with the GPT-4/GPT-4-Turbo family, reaching around 68%/78%. This implies that the state-of-the-art (SOTA) models, though belonging to different families or trained by different companies, share similar capabilities as judges. Therefore, other than the families that are grouped by companies and versions, we can also form a "SOTA family" whose members share similar SOTA performances. * Gemini-pro-1.0 doesn't exhibit significant agreement with other judges, marking it as part of its own distinct family. Separately for each benchmark (Figure <ref>), the observations remain similar, whereas there are variations: * Familial Grouping remains consistent: GPT-4/GPT-4-Turbo, GPT-3.5, Claude-3 (opus and sonnet as close family members while haiku is a "distant relative"), and Gemini. * MTBench highlights the familial distinction more clearly, with larger agreement gaps across families. The finding that claude-3-opus can align with the GPT-4/GPT-4-Turbo family as part of the "SOTA family" remains consistent. 
* DevBench largely confirms these observations but with some differences: * In DevBench, gpt-4-0125-preview now has the highest agreement with claude-3-opus instead of claude-3-sonnet, though this does not conflict with the "SOTA family." * claude-3-haiku has an exceptionally low agreement rate (around 50% with some models and about 60% with others). This is expected given its poor 23% positional consistency on DevBench. In conclusion, despite variations across benchmarks and tasks, familial properties mainly influence how LLM judges agree with each other. This can serve as evidence that position bias is significantly affected by the familial properties of LLM judges. §.§.§ Cumulative Disagreement Analysis In addition to understanding the extent to which LLM judges agree with each other, analyzing their disagreements can also reveal crucial insights. Understanding how the nine judges agree or disagree on the same evaluation instance can offer valuable information. If all judges—or all but one or two—reach a consensus, given that most are proven reliable through high LLM-human or LLM-LLM judge agreement, the mode judgment is likely to be convincing. However, if judgments vary widely among the judges, it may suggest that the evaluation instance is challenging to assess, or the capabilities of the models differ significantly. In this study, since most chosen LLM models have demonstrated reliability through high LLM-human and LLM-LLM judge agreements, significant variations in judgment imply that the specific evaluation instance (questions and model-generated answer pairs) is inherently difficult to judge. For our study with nine judges, Option-2 mode has disagreement values from 0 to 4, while Option-3 mode ranges from 0 to 6. The most disagreed scenarios might include 5A-4B for Option-2 and 3A-3B-3C for Option-3. Figure <ref> shows that all nine judges reached a consensus on 23.4% of overall (MTBench + DevBench) evaluation instances. Allowing for some tolerance due to weaker models like claude-3-haiku, judgments for 58.4% of evaluation instances can be considered strongly convincing since fewer than two judges disagree with the majority. This suggests that more than half of the evaluation instances are relatively straightforward to judge and the mode of judgments is very likely to be accurate and convincing. Looking at MTBench and DevBench separately reveals a similar pattern. Allowing up to two disagreements, MTBench and DevBench include 56.1% and 63.2% of evaluation instances that are relatively easy to judge, respectively. This percentage rises to 73.9% and 81.2% if we permit one additional disagreement. For the evaluation instances that are challenging to judge, DevBench contains 19.8% with a disagreement level of 4, while MTBench has 26.1% with a disagreement level greater than or equal to 4. However, instances with a disagreement level of 5 and 6 sharply decline to less than 8% of the total. Based on the cumulative disagreement analysis, the following insights can be drawn: * Majority of evaluation instances have relatively few disagreements, indicating that consensus is generally attainable. However, a small subset of evaluation instances exhibit significantly more disagreement, making them difficult to judge by LLMs. * Most LLM judges reach a consensus for around two-thirds of evaluation instances across both benchmarks. This demonstrates the potential for using multiple LLM judges on the same evaluation instance to ensure effective and reasonable evaluation. 
* However, a quarter of evaluation instances exhibit significant disagreement among the judges, indicating the difficulty of assessing these evaluation instances. These instances likely involve prominent position bias, as the positional consistency among the judges typically ranges between 0.6 and 0.8. This aligns with the proportion of instances where the number of disagreements is three or fewer. §.§ Positional Consistency vs. Fairness Since we measured position bias in terms of positional consistency and fairness, analyzing how positional preference changes along with positional consistency can also give more insight into position bias. It also makes the analyses of positional consistency and preference score vs. other factors more interpretable. Overall, the positional preference scores decrease in scale (i.e., become closer to 0) as positional consistency increases, resulting in the right-arrow shape (see Figure <ref>). Since more of the 9 judges are recency-preferred (e.g., the Claude-3s), the regression line leans toward recency. However, this does not interfere with the general conclusion that more positionally consistent judgments are anticipated to be more positionally fair as well. The results of each judge on the two benchmarks reveal the same discovery (see Figure <ref>). An overall positionally fair judge, such as gpt-3.5-turbo-0125 on MTBench, exhibits the same arrow shape and a near-horizontal regression line, implying consistently fair judgments regardless of positional consistency. Conspicuously preference-biased judges, such as the Claude-3s, in contrast, approach positional fairness from an extreme preference on one side, exhibiting only the top or bottom half of the arrow. In summary, this analysis helps to interpret why * more positionally consistent judges are usually less likely to have large-scale preferences, and * the analyses of positional consistency and preference score vs. other influential factors align with each other. For example, the parabolic shape appears in both positional consistency and preference score vs. answer quality gap, which can be interpreted as larger quality gaps resulting in more positionally consistent and hence positionally fairer judgments. §.§ Position Bias vs. Win Rates For consistent pairs of judgments, one model is clearly of better quality than the other, as the LLM judge consistently prefers it regardless of position. Such consistent judgments include consistent wins and consistent ties, with a consistent tie counted as a "half-win." If only consistent pairs are considered, the resulting measure is termed the consistent win rate. The overall win rate differs from the consistent win rate by considering "inconsistency-as-tie". Zheng et al. <cit.> introduce this way of calculating the win rate in their original work, and we formally define it as the overall win rate to distinguish it from the consistent win rate. The idea aligns with our intuition: LLMs are more likely to exhibit position bias when the responses in a pair are of similar quality. Our analysis demonstrates that this method is effective and straightforward to apply. The overall win rate is preferable to the consistent win rate because it provides a more accurate measure of the answer quality gap and avoids discarding data when judgments are inconsistent.
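The following sketch contrasts the two computations; the half-win treatment of ties and the inconsistency-as-tie rule follow the description above, and it assumes the Model's answer is Response 1 in the original order and Response 2 in the swapped order.

# Sketch: overall vs. consistent win rate for a Model against the reference model.
def win_rates(judgment_pairs):
    # Each pair is (choice_original_order, choice_swapped_order).
    wins   = sum(pair == ('A', 'B') for pair in judgment_pairs)   # Model consistently preferred
    losses = sum(pair == ('B', 'A') for pair in judgment_pairs)   # reference consistently preferred
    ties   = sum(pair == ('C', 'C') for pair in judgment_pairs)   # consistent ties
    inconsistent = len(judgment_pairs) - wins - losses - ties
    overall = (wins + 0.5 * (ties + inconsistent)) / len(judgment_pairs)   # inconsistency-as-tie
    consistent_total = wins + losses + ties
    consistent = (wins + 0.5 * ties) / consistent_total if consistent_total else None
    quality_gap = abs(overall - 0.5)   # answer quality gap relative to the 0.5 baseline
    return overall, consistent, quality_gap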
For example, if two responses are of similar quality and the LLM makes a positionally biased judgment, the consistent win rate excludes this crucial "similar-quality" information, while the overall win rate correctly captures it. In this sense, though "inconsistency-as-tie" may appear arbitrary, it more accurately measures the answer quality gap compared to the naive but problematic consistent win rate methodology. The example judgment pair of Figure <ref> illustrates how considering "inconsistency-as-tie" is reasonable, as the LLM judge exhibits position bias when both models answer both questions incorrectly. §.§.§ Position Bias vs. Overall Win Rate Below are the graphs (see Figure <ref>,<ref>) for positional consistency and preference scores vs. overall win rate, which indicate how these metrics are related to the answer quality gap. §.§.§ Position Bias vs. Consistent Win Rate Although we use the overall win rate instead of the consistent win rate to better capture the quality disparity between model-generated answers, we also elaborate on the relationship between position bias and consistent win rate as a supplementary study. Recall that the consistent win rate is calculated based on only the consistent pairs of judgments. In other words, the consistent win rate measures how often a model's response is consistently judged better than the other's under the swapped-order dual evaluation. The positional consistency and positional preference score vs. consistent win rate graphs (<ref>,<ref>,<ref>,<ref>) convey some information similar to their overall-win-rate counterparts, but generally lack a clear pattern or a definite relationship. This validates the reasonability of considering "inconsistency-as-tie" and measuring the answer quality gap in terms of the overall win rate rather than the consistent win rate. §.§ Price vs Performance §.§.§ Judge-level There are always trade-offs between positional consistency, fairness, and costs regarding judge model selection, especially when commercial closed-source models are incorporated. Therefore, we provide the price vs. performance figures to help researchers and users of LLM-as-a-Judge select their ideal judge models based on specific needs. Although we stress that positional fairness is as important as positional consistency, positional preference exhibits a larger variance in both direction and scale across tasks. Also, a unidirectional positional preference may not necessarily be inferior to evenly distributed fair preferences, because in practice a certain biased preference may be more interpretable than uncertain fairness. Therefore, in this section, the performance of LLM judges refers to positional consistency only. We plot the price vs. performance graphs with the x-axis being the average positional consistency and the y-axis being the mean approximate price estimation (see Figure <ref>). The price estimation calculates the mean expense of LLMs considering both input (which is the same for all LLMs) and output (which varies among LLMs). For generalizability and simplicity, since different LLMs may use different tokenization methods, we uniformly apply the Python function len() to offer a rough estimate of the number of tokens, which we align with the pricing policies of OpenAI, Anthropic, and Google DeepMind. Though standing out in positional consistency on both benchmarks, gpt-4-0613 is significantly more expensive than other LLMs. claude-3-opus is the second most expensive judge model, yet it does not surpass some cheaper GPT-4-Turbos and GPT-3.5s.
GPT-4-Turbos (gpt-4-0125-preview and gpt-4-1106-preview) then follow with a slightly lower consistency but half the price of gpt-4-0613. GPT-3.5s (gpt-3.5-turbo-0125 and gpt-3.5-turbo-1106) can be regarded as the most cost-effective judges, as they surpass all Claude-3 models and gemini-pro-1.0 in terms of positional consistency and lag only slightly behind gpt-4-0613 and the GPT-4-Turbos at a fraction of the cost. gemini-pro-1.0, though free at the time we ran the experiments, does not show a significant advantage in cost-effectiveness due to the considerable performance gap with SOTA judge models. However, it is still a reasonable choice under limited budgets. In a nutshell, gpt-4-0613 is generally the best-performing judge model among the nine chosen for this study and is worth its high price. GPT-3.5s, sacrificing a little positional consistency compared to gpt-4-0613, can achieve reasonably consistent evaluations at minimal cost. The Claude-3 series, on the other hand, are not cost-effective judge models. §.§.§ Judge-Task-level In this section, we provide judge model selection recommendations for tasks based on price vs. positional consistency graphs for each task on each benchmark (Figure <ref>, <ref>). MTBench: * coding: gpt-3.5-turbo-0125 is superior to all other judges, being very cheap and the most positionally consistent. gpt-4-0125-preview also surpasses gpt-4-0613, demonstrating effective training of the 0125 version on coding judgments. * extraction: Expensive models are highly recommended for positional consistency. GPT-4, GPT-4-Turbo, and claude-3-opus perform similarly, with GPT-4-Turbo being slightly better, but all of them perform significantly better than other cheaper models and are worth their higher costs. However, among cheap models, gemini-pro-1.0 is ideal, incurring no cost while reaching decent consistency. * humanities: GPT-4 is the optimal choice, but GPT-4-Turbo and GPT-3.5 are good alternatives. Claude-3 and Gemini Pro are not recommended. * math: GPT-4 and GPT-4-Turbo are ideal choices, significantly more consistent than all other models. They are worth their high prices. * reasoning: GPT-4 and claude-3-opus reach over 80% positional consistency and are the best choices, whereas the GPT-3.5s are more cost-effective alternatives. GPT-4-Turbo is not recommended. * roleplay: GPT-4 performs significantly better than all other models, while GPT-4-Turbo is ideal at a lower price. Otherwise, GPT-3.5 is the best choice, though it sacrifices much of the highly positionally consistent judgment achievable with GPT-4. * stem: Although GPT-4 is the most consistent judge, GPT-3.5 is the most cost-effective choice, performing better than all other models including GPT-4-Turbo. * writing: GPT-4 should be the priority if available. GPT-4-Turbo and claude-3-opus are the next tier of performers, and GPT-3.5s can be good alternatives. DevBench: * UML sequence: GPT-4 is the optimal choice, with GPT-3.5 sometimes serving as a comparable but far more cost-effective alternative. * architecture design: While GPT-3.5 is the most cost-effective choice, gpt-4-0125-preview is the best in general. * UML class: GPT-4 and gpt-4-0125-preview are both ideal choices in terms of the best obtainable consistency. If the budget is limited, GPT-3.5s are also appropriate, with some sacrifice in consistency. * faithfulness: GPT-3.5 is the best choice or the best alternative for all three software design files if evaluated based on faithfulness.
For UML class, claude-3-sonnet is also an ideal choice regarding cost-effectiveness. * cohesion_and_decoupling/uniformity_and_integration: GPT-4 leads significantly and should be your undoubted choice if available. §.§ Judge-level Position Bias To examine the impact of Judge-level factors on position bias, we analyzed the average positional consistency and preference scores of the nine selected judges. Table <ref> present their performance across MTBench and DevBench, providing insights into their position bias characteristics. From MTBench's Judge-level position bias table, we have the following observations: * Positional Consistency (PC): * Gpt-4-0613 has the highest positional consistency at 0.815, indicating the least position bias in its evaluations. * GPT-4's also show high consistency, above 0.74. * GPT-3.5s along with claude-3-opus show similar consistency at around 0.7. * Claude-3-haiku, claude-3-sonnet, and gemini-pro-1.0 exhibit a lower consistency of around 0.57, significantly trailing behind stronger models. * Standard deviation of PC: The standard deviations of positional consistency are all above 0.14, indicating significant variability across tasks and evaluation instances. However, more consistent models generally have a smaller standard deviation, highlighting their stability in positional consistency. * Positional Preference Score (PF): * Claude-3 and Gemini models are apparently recency-preferred, while GPT series are positionally fairer judges. * Gpt-3.5-turbo-0125 is the most positionally fair judge on MTBench, achieving -0.013 pp_score. * Gpt-3.5-turbo-0125 and gpt-4-0125-preview are the only primacy-preferred judges, while all others are recency-preferred. This may be attributed to the fine-tuning of the 0125 version of GPTs. * Claude-3-opus is notably biased toward recency, despite a positional consistency similar to GPT-3.5 models. Its preference score scale is over three times higher than gpt-3.5-turbo-1106 and 22 times greater than gpt-3.5-turbo-0125. * Error Rates: The error rates are low for all judges, having little impact on the analysis of position bias. However, the error rate does reflect the faithfulness of the judge models. The low error rates reflect the reliability of the judges in adhering to the required output format. For DevBench's Judge-level position bias, the following patterns emerge: * Positional Consistency (PC): * Gpt-4-0613 leads again with a consistency of 0.828, demonstrating its superiority as the most positionally consistent judge model. * Gpt-4-1106-preview maintains a high consistency of 0.786, followed by gpt-3.5-turbo-0125 and gpt-3.5-turbo-1106 at around 0.75. * Gpt-4-0125-preview surprisingly underperforms on DevBench, achieving a consistency of only 0.6, which is just slightly higher than claude-3-haiku. * Claude-3-sonnet shows better positional consistency than claude-3-opus, surpassing it by more than 0.02. * Gemini-pro-1.0 improves on DevBench compared to MTBench, reaching 0.65 consistency, but remains generally lower than other stronger models. * Claude-3-haiku shows a much lower consistency at 0.227, indicating significant position bias and variability in evaluations. * Standard Deviation of PC: Positional consistency deviations are high, with values ranging from 0.14 to 0.23. However, gpt-4-0613 maintains stability with the lowest deviation, signaling more consistent positional evaluations. 
* Positional Preference Score (PF): * Overall, pp_scores are higher on DevBench than MTBench, possibly due to a lack of most model capabilities in handling tasks with longer prompts in DevBench. * The two primacy-preferred judges on MTBench are now significantly recency-preferred on DevBench. Similarly, the three primacy-preferred judges on DevBench (gpt-4-0613, gpt-3.5-turbo-1106, and gemini-pro-1.0) are all recency-preferred judges on MTBench. This shows that positional preference and fairness vary significantly among tasks and models. * gpt-3.5-turbo-1106 beats the MTBench's winner gpt-3.5-turbo-0125 on DevBench and achieves the lowest scale of pp_score. This implies that GPT-3.5s are ideal judges in terms of positional fairness. * gpt-4-0613 remains fairer than most of the other judges but is more biased in pp_score on DevBench than on MTBench. * Gemini-Pro, shifts from being highly recency-preferred on MTBench to achieving a primacy-preferred score of -0.045 on DevBench, indicating fairness. * claude-3-haiku is not only highly inconsistent but also strongly recency-preferred, with a pp_score of 0.745. * claude-3-opus and claude-3-sonnet still show moderate preference towards recency, aligning with MTBench's performance. This further validates that Claude-3 models are inherently recency-preferred. * Error Rates: Error rates are negligible across all models, indicating that position biases are not influenced by output extraction issues. By comparing the results for MTBench and DevBench, we can have the following overall insights: * Positional Consistency vs. Fairness: Positional consistency does not always correlate with positional fairness. For instance, gpt-4-0613 achieves higher consistency than GPT-3.5, yet is less fair. Similarly, claude-3-opus has comparable consistency to GPT-3.5 but is strongly biased toward recency. * Variance Across Judges Tasks: Positional consistency and preference scores vary significantly across judges and tasks, emphasizing their impact on position bias. Some judges show marked improvement from MTBench to DevBench, reflecting their adaptability to specific task requirements. For example, Gemini-Pro's shift to primacy-preferred on DevBench indicates its evolving ability to handle longer prompts. * Familial Trends: GPT models, especially GPT-4 series, consistently outperform other models in both consistency and fairness across tasks. Claude models, particularly claude-3-haiku, tend to have more noticeable biases and significant recency-preferred issues. * Stable Judges: Models with a smaller standard deviation in positional consistency are generally more stable across tasks and evaluation instances. GPT-4 models stand out in this regard. §.§ Judge-Task-level Position Bias As observed, position bias varies across judges and tasks. Therefore, besides considering only the Judge-level factors, we investigate how each judge model performs on each task in this session. This analysis is especially valuable in providing insights about LLM Judge choices on different tasks. A certain judge model may perform well on some tasks while worse on others; for certain tasks, some overall weaker models may perform as well or superior to SOTA models. §.§.§ Baseline Comparison Here are some supplemental observations of the baseline comparison graphs (see Figure <ref>,<ref>). 
The MTBench Positional Consistency Baseline Comparison figure shows that: * Gpt-4-0613 is significantly more consistent on all tasks compared to claude-3-haiku, claude-3-sonnet (except for the coding task), and gemini-pro-1.0. * The judges and tasks that perform similarly (i.e., not significantly stronger or weaker) to gpt-4-0613 include: * claude-3-opus: coding, extraction, and reasoning * GPT-3.5s: coding, reasoning, and stem * GPT-4's: coding, extraction, humanities, and math * There are certain tasks where other models outperform gpt-4-0613, though not significantly. For example, gpt-3.5-turbo-0125 and gpt-4-0125-preview exhibit higher positional consistency on the coding task, and GPT-4's also perform better on the extraction task. * Overall, gpt-4-0613 is shown to be the SOTA model across all tasks in terms of positional consistency, dominating most of the time without any significant loss. The MTBench Positional Preference Baseline Comparison figure indicates: * Positional preference (direction and extent) varies significantly across tasks for all judges. * Claude-3s and Gemini-Pro are significantly recency-preferred across most tasks, while GPTs exhibit a balanced preference. * gpt-4-0613 maintains the most balanced preference across tasks, with a significant recency preference only in reasoning and roleplay. The DevBench Positional Consistency Baseline Comparison figure indicates: * Gpt-4-0613 remains the most consistent judge across all tasks compared to the other models, except against gpt-4-1106-preview on the architecture_design-practicability task. However, gpt-4-1106-preview has higher positional consistency than gpt-4-0613 on 7 tasks (mainly architecture_design and UML class) and can be considered a strong competitor. Therefore, SOTA performance is obtained by gpt-4-0613 on the UML sequence task and by gpt-4-1106-preview on the UML class and architecture design tasks. * Claude-3s, gemini-pro-1.0, and gpt-4-0125-preview have significantly lower positional consistency than gpt-4-0613 across almost all tasks. * Claude-3-haiku performs far worse than all other models, in line with its average positional consistency of only 0.23. This may imply that the DevBench tasks are too difficult for it to understand and judge. * Generally, gpt-4-0613 is not significantly superior on UML class tasks, especially compared to the other GPTs and claude-3-opus. On UML sequence tasks, though, gpt-4-0613 significantly outperforms most other judge models. * GPT-3.5s outperform gpt-4-0613, though not significantly, on the architecture_design-design_and_coding task, which aligns with their superior performance on the MTBench coding task. The DevBench Positional Preference Baseline Comparison figure shows: * Similar to the Judge-level findings, nearly all judges exhibit a greater degree of positional preference, some with directional shifts. * Claude-3s remain highly recency-preferred, with claude-3-haiku being the most extreme. * Gemini-pro-1.0, recency-preferred on MTBench, becomes the fairest judge model on DevBench. * Gpt-4-0125-preview, balanced on MTBench, becomes significantly recency-preferred across all tasks. This is also prominent for gpt-3.5-turbo-0125, which shows a significant recency preference except on architecture_design-faithfulness. This implies some fine-tuning issues with the 0125 versions of the GPTs.
* Gpt-3.5-turbo-1106 exhibits a rather balanced preference; gpt-4-0613, notably, shifts from recency-preferred on MTBench to primacy-preferred on DevBench; gpt-4-1106-preview, in contrast, is significantly biased toward recency except on architecture_design-design_and_coding, where it is primacy-preferred. To summarize, the baseline comparison figures on MTBench and DevBench make clear that positional consistency and preference vary significantly across judges and tasks. This leaves room for certain judges to be comparable or preferable to the overall SOTA comparison model, gpt-4-0613, on specific tasks. Another conclusion is that, for a given task, a more positionally consistent judge is not necessarily fairer, leading to trade-offs among positional consistency, fairness, and cost-effectiveness in judge selection. §.§.§ By Judge and Task This subsection includes figures (<ref>,<ref>,<ref>,<ref>) corresponding to the baseline comparison graphs, but with confidence intervals rather than t-test significance markers. Also, for positional consistency, the figures in this subsection use actual values instead of percentage comparisons. Although they tell the same story as the baseline comparison graphs, the confidence intervals reveal that the variation in positional consistency across tasks and judges is not negligible, even for the SOTA models. §.§.§ By Task and Judge Besides plotting each judge's positional consistency and preference score by task, we also plot the metrics for each task across judges to provide insights from another dimension. For positional consistency, on MTBench the LLM judges commonly perform worse on math and extraction and better on coding, humanities, reasoning, and stem tasks; on DevBench, there is no large difference in the judges' typical performance across tasks, especially within the same software design file. For positional preference scores, more patterns can be observed. On MTBench, the humanities, writing, and extraction tasks have fairer preferences across judges, whereas most judges are strongly recency-preferred on the roleplay, reasoning, and stem tasks. On DevBench, judge preferences are more balanced for architecture design and UML class, whereas they are mostly recency-preferred for UML sequence. These figures (<ref>,<ref>,<ref>,<ref>) provide valuable judge-selection support for specific tasks, as the LLM judges are directly compared with each other on each task. §.§.§ MLE Analysis for Positional Preferences As observed from the t-tests in the baseline comparison analysis, a number of comparisons are not statistically significant. Therefore, we followed up with a Maximum Likelihood Estimation (MLE) analysis of the positional preference scores. The MLE analysis indicates to what extent a judge model gives random or positionally fair judgments. We conducted the MLE analysis at both the Judge level and the Judge-Task level. Examining the MLE results for MTBench and DevBench together (Figure <ref>) gives the following insights: * Judge-level: * The familial property is prominent and verified again, with some exceptions. The GPT series are all likely primacy-preferred, while Claude-3 and Gemini are likely recency-preferred. * Gpt-4-0613, instead of grouping with the GPT-4-Turbos that are closest to absolute fairness, is significantly likely to be primacy-preferred.
* GPT-3.5s are more likely than GPT-4-Turbos to be primacy-preferred but are still very likely to be positionally fair across all tasks. * Judge-Task-level: * For all judges, there are certain tasks where the judge exhibits an extreme likelihood of preferring primacy or recency. * The Judge-Task-level results validate the Judge-level results: the likely fair judges exhibit evenly distributed or spread-out preference curves across tasks, while the judges with a strong likely preference are biased in that direction across nearly all tasks. Generally speaking, the MLE analysis validates the observations on positional preference scores and provides additional insights. It again demonstrates the variation of judges' positional preferences across tasks, stressing the need to select appropriate judges for each task type. Moreover, the varied preferences among judges support the effectiveness of multi-agent judge discussions <cit.> in correcting position bias. Taking the mode of judgments from different judges on the same evaluation instance can likewise help, because the positional preference is diluted across judges with different preferences. Separately, for MTBench (Figure <ref>), at the Judge level all GPT models are likely to be primacy-preferred while Claude-3 and Gemini are recency-preferred. Among them, gpt-3.5-turbo-1106 is nearly fair, aligning with its closest-to-zero positional preference score. However, when examining the Judge-Task level further (Figure <ref>), although the recency-preferred models all exhibit significant skew toward recency across tasks, the behavior of the GPT models varies by task. While gpt-3.5-turbo-1106 remains evenly distributed in preference direction, and GPT-4 and the GPT-4-Turbos show almost evenly distributed preferences with some clearly skewed exceptions, gpt-3.5-turbo-0125 exhibits a sharp jump from recency to primacy preference. For DevBench (<ref>), although Claude-3 remains significantly recency-preferred, there are findings that differ from MTBench. First, gemini-pro-1.0 shifts from recency-preferred to primacy-preferred, reaching a preference similar to gpt-3.5-turbo-1106. Moreover, gpt-3.5-turbo-0125 and gpt-4-1106-preview become likely recency-preferred on DevBench, with gpt-3.5-turbo-0125 being the most likely fair judge. Interestingly, at the Judge-Task level (Figure <ref>), all judges demonstrate extreme skew on certain tasks, and claude-3-haiku in particular prefers recency on all evaluation instances. GPT-4 is now likely to prefer primacy on all tasks and gpt-4-0125-preview the opposite, whereas the other GPT models exhibit smooth, evenly distributed preferences on both sides. §.§ Linear Regression Overall Prediction Results The regression results (see Table <ref>,<ref>) for both positional consistency and preference score indicate that these outcomes are challenging to predict linearly. The relatively low R-squared values for both models suggest that positional consistency and preference scores are not easily captured by a simple linear regression model. For positional consistency, the R-squared value is 0.077 and the adjusted R-squared is 0.069. Similarly, for the preference score, the R-squared value is 0.055 and the adjusted R-squared is 0.046.
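For reference, the kind of linear model behind these numbers can be set up as in the sketch below. This is our illustration only: the results file, column names, and encodings are assumptions, not the exact feature set used in the study.

# Sketch of a linear regression over judge-, model-, and task-level factors.
# "position_bias_results.csv" and the column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("position_bias_results.csv")
X = pd.get_dummies(df[["judge", "family", "task"]], drop_first=True)  # one-hot factors
X["quality_gap"] = df["quality_gap"]          # numeric model-level factor
X["context_window"] = df["context_window"]    # numeric judge-level factor
X = sm.add_constant(X.astype(float))

for target in ["positional_consistency", "preference_score"]:
    fit = sm.OLS(df[target], X).fit()
    print(target, "R^2 =", round(fit.rsquared, 3))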
These values imply that only a small portion of the variability in positional consistency and preference scores is explained by the factors included in the regression models, indicating the presence of more complex interactions and factors influencing position bias beyond those captured in this linear model. Significant Factors Impacting Positional Consistency Several factors were found to significantly impact positional consistency. Significant factors (p < 0.05) include: * Judge-level Factors: * Context Window (p < 0.0001) * Judge Models: * Claude-3-haiku-20240307 (p < 0.0001) * Claude-3-opus-20240229 (p < 0.0001) * Gpt-4-0613 (p < 0.0001) * Gpt-4-1106-preview (p = 0.002) * Gemini-Pro (p = 0.01) * Familial Properties: * Family GPT-4 (p < 0.0001) * Family GPT-4 Turbo (p = 0.004) * Family Gemini (p = 0.01) * Model-level Factors: * Quality Gap (p = 0.002) * Task-level Factors: * Task_architecture_design-faithfulness (p = 0.033) * Task_stem (p = 0.026) * Task_coding (p < 0.0001) * Task_extraction (p = 0.013) * Task_math (p = 0.006) * Task_writing (p = 0.004) Significant Factors Impacting Preference Score Similarly, several factors significantly impact the preference score. Significant factors (p < 0.05) include: * Judge-level Factors: * Context Window (p = 0.001) * Judge Models: * Claude-3-haiku-20240307 (p < 0.0001) * Claude-3-opus-20240229 (p < 0.0001) * Claude-3-sonnet-20240229 (p < 0.0001) * GPT-4 Turbo (p = 0.015) * Familial Properties: * Family Claude-3 (p = 0.020) * Model-level Factors: * Quality Gap (p < 0.0001) * Task-level Factors: * Task_UML_sequence-faithfulness (p < 0.0001) * Task_uml_class-cohesion_and_decoupling (p = 0.030) * Task_uml_class-complexity (p = 0.030) * Task_uml_class-faithfulness (p = 0.030) Common Influential Factors Analyzing the results for both positional consistency and preference score reveals common influential factors: * Judge-level Factors: * Context Window: Significantly impacts both positional consistency (p < 0.0001) and preference score (p = 0.001), indicating that the amount of context provided plays a crucial role in position bias. * Familial Properties: Several familial properties significantly impact both outcomes: * Family Claude-3 (p = 0.020) * Family GPT-4 (p < 0.0001) * Family GPT-4 Turbo (p = 0.004, p = 0.015) * Family Gemini (p = 0.01) * Model-level Factors: * Quality Gap: Significantly impacts both positional consistency (p = 0.002) and preference score (p < 0.0001), suggesting that differences in answer quality are a major factor in position bias. * Task-level Factors: * Task_UML_sequence-faithfulness: Significantly impacts both positional consistency (p < 0.0001) and preference score (p < 0.0001). * Task_uml_class-cohesion_and_decoupling, Task_uml_class-complexity, Task_uml_class-faithfulness: Significantly impact preference score (p = 0.030). Conclusion The results indicate that positional consistency and preference scores are influenced by a combination of judge-level, model-level, and task-level factors. However, the low R-squared values demonstrate that these factors alone do not fully explain the position bias, suggesting that more complex and non-linear relationships might be at play. Significant factors should be considered when designing and evaluating LLM judges to mitigate position bias effectively. §.§ Position Bias vs. Length Stats For Task-level influential factors, besides the tasks themselves, we also elaborate on the length stats, including task input length, task output length, and prompt length. 
This is also inspired by the reported length/verbosity bias <cit.>, whereby LLM judges tend to prefer longer responses. Specifically, we want to examine how the length statistics affect positional consistency and fairness through our extensive study. Task input length refers to the sum of the task question lengths, which depends entirely on the task and the specific question setting. Task output length, the summed length of the responses generated by the answering models, also fluctuates with the models, as stronger models usually generate longer answers. Prompt length, the total length of the query given to the judge model, combines the task input and output lengths with the system prompt and optional reference answers or evaluation guidance. We do not consider the judgment length, i.e., the output length of the LLM judge, because the judges are prompted to generate only a choice and a few sentences of reasoning. Overall, we find no clear pattern between positional consistency or fairness and the length statistics, implying that length has little influence on position bias. This challenges the length/verbosity-bias conclusion that LLMs favor longer responses. We therefore infer that the observed length/verbosity bias is essentially position bias stemming from the answer quality gap: LLMs that appear to favor longer responses may actually be favoring higher-quality responses. Since LLMs have context-window restrictions, prompts that exceed the context window by a wide margin do pose a challenge. However, in our experiments, where most query lengths fit within the judges' context windows, length has little impact on position bias in the usual case. The visualization of positional consistency and positional preference scores vs. the three length statistics can be seen below (Figure <ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>,<ref>).
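To complement the figures, a simple way to quantify the absence of a clear pattern is to correlate each length statistic with the per-instance position-bias metrics. The sketch below is our illustration; the results file and column names are assumptions.

# Sketch: correlation between length statistics and positional consistency.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.read_csv("position_bias_results.csv")  # hypothetical results file
for col in ["task_input_length", "task_output_length", "prompt_length"]:
    r_p, p_p = pearsonr(df[col], df["positional_consistency"])
    r_s, p_s = spearmanr(df[col], df["positional_consistency"])
    print(f"{col}: Pearson r={r_p:.2f} (p={p_p:.3f}), "
          f"Spearman rho={r_s:.2f} (p={p_s:.3f})")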
http://arxiv.org/abs/2406.09006v1
20240613111935
Tunneling of Hawking Radiation in Starobinsky-Bel-Robinson gravity
[ "Dhanvarsh Annamalai", "Akshat Pandey" ]
gr-qc
[ "gr-qc" ]
§ ABSTRACT We examine Hawking radiation for a Schwarzschild-type black hole in Starobinsky-Bel-Robinson (SBR) gravity and calculate the corrected Hawking temperature using the tunnelling method. We then discuss the deviation of our Hawking temperature from the standard Schwarzschild result. We relate the corrections to the Hawking temperature beyond the semi-classical approximation. We highlight that starting with a modification of the classical black hole geometry and calculating the semi-classical Hawking temperature yields temperature corrections comparable to those obtained when the classical background is kept unchanged and beyond-semi-classical terms in the temperature are included. § INTRODUCTION Ever since Hawking's initial work <cit.>, the physics of black hole radiance has seen continued interest. In their seminal paper, Parikh and Wilczek proposed a heuristic derivation of the Hawking temperature using the WKB approximation, in which Hawking radiation is described as tunnelling <cit.>. This helped emphasise the simplicity of the underlying physics of Hawking radiation. For applications of this method to derive the Hawking temperature of various black hole solutions, see <cit.>. A key feature of the above-mentioned tunnelling method is that it is performed within a semi-classical scenario. These results have later been generalised beyond the semi-classical approximation, thus including higher-order corrections. For instance, Banerjee and Majhi <cit.> obtained the higher-order corrections to the Hawking temperature of black hole solutions within GR using the Hamilton-Jacobi method. In particular, for the Schwarzschild black hole, written in gravitational units, the corrected Hawking temperature T_H has the form, T_H = 1/8 π G M(1 + ∑_iλ_i/G^iM^2i)^-1 Here the first term on the right-hand side is the one obtained via the semi-classical approximation. The subsequent terms arise as quantum corrections, the first of which corresponds to the effects of backreaction <cit.>. For more details on this method see <cit.>. In the present work, we explore the other path, that is, we answer the following question: if we start with a modified black hole solution, which includes quantum-gravity-inspired corrections in the metric, and use simply the semi-classical approximation, how close can we come to reproducing the higher-order correction terms in the Hawking temperature? For this purpose, we turn towards Starobinsky-Bel-Robinson (SBR) gravity <cit.>, which has been gaining interest recently. SBR gravity is a quantum-gravity-inspired modification of the usual Einstein-Hilbert (EH) action. As the name suggests, it includes the addition of an R^2 term like the one in Starobinsky's inflationary model <cit.>, and another term related to the Bel-Robinson tensor from M-theory <cit.>. More recently, Ketov et al. <cit.> showed that SBR gravity admits a modified Schwarzschild-like spherically symmetric black hole solution. The modifications to the black hole metric are a consequence of the modifications to the EH action mentioned above. This is exactly the type of solution we need to answer the question posed above. In the following sections, we will use this black hole solution to calculate its Hawking temperature via the Parikh-Wilczek method and compare the result against those available in the literature. This paper is organised as follows.
In section 2, we briefly review SBR gravity and the Schwarzschild-like black hole solution as obtained by Ketov et al <cit.>. In section 3, we employ the semi-classical tunnelling method to calculate the transmission rate for Hawking radiation. In section 4, we discuss the relevant thermodynamical quantities and their associated corrections. We compare our results against the expected corrections from <cit.>. We also relate our results to Banerjee et al <cit.> as mentioned above. We end with a summary of our work in section 5. § SBR GRAVITY AND A BLACK HOLE SOLUTION The SBR action reads S_SBR= M_Pl^2/2∫ d^4 x √(-g)[R+1/6 m^2 R^2-β/32 M_Pl^6(𝒫^2-𝒢^2)] Here, β > 0 is the new dimensionless coupling constant to be determined by compactification of M-theory, m is the inflation mass ∼ 10^-5 M_pl determined by the COBE/WMAP normalization, where the reduced Planck mass M_Pl=1/√(8 π G).The R^2 term corresponds to the Starobinsky modification to the Hilbert action <cit.>. 𝒫 and 𝒢 are the Euler and Pontryagin topological densities in D = 4 dimensions, respectively such that 𝒢=R^2-4 R_μν R^μν+R_μνρσ R^μνρσ and 𝒫=1/2√(-g)ϵ_μνρσ R_αβ^ρσ R^μναβ . These are related to the squared Bel-Robinson tensor 𝒯^αβμν <cit.> through 𝒯^2 = 1/4(𝒫^2 - 𝒢^2) For details on equation (5) and its significance, see <cit.>. The purely gravitational SBR action (2) has several interesting features including its robustness and predictive power given the presence of only two parameters m and β. However, in the present work, we are interested in the corrections this action imposes on the Schwarszchild black hole metric. The β-corrected Schwarzschild metric as worked out by Ketov et al <cit.> is given by ds^2= -ℱ(r)dt^2 + 1/ℱ(r)dr^2 + r^2 dΩ ^2 Where ℱ(r) corresponds to ℱ(r)= 1 - r_s/r + β128 π ^3/5(G r_s/r^3)^3(108 - 97r_s/r) Here r_s is the standard Schwarzschild radius from GR, r_s=2GM. Given the β correction to the metric components, it is natural to expect that the black hole corresponding to it will also have corrections to its physical properties, like the radius, temperature, entropy, etc. As we shall see in the next section, it is essentially the form of ℱ(r) that governs the imaginary action and consequently the tunneling rate; a modification to ℱ(r) would thus imply modifications to the tunneling rate. As <cit.> showed, the black hole radius r_H with corrections up to first order in β is r_H = 2GM - β44 π^3 /5G^2 M^5 + O(β^2) Therefore, there is a shift in the horizon given by the β term. We are interested in the corrections induced because of the β term upon the Hawking temperature. As we will see in the next section, up to the first order in β, the only relevant correction is the one associated with the radius. The problem we address, thus, reduces to the following — given that the radius of a Schwarszchild-like black hole is prescribed by equation (7), how does this alter the Tunneling coefficient and in turn the Hawking temperature? § THE TUNNELING COEFFICIENT We begin by transforming equation (6) to Painleve^' coordinates. This, among other things, eliminates the coordinate singularity at the horizon. ds^2 = - ℱ(r) dt^2_p + 2 √((1-ℱ(r))) dt_p dr+ dr^2 + r^2 dΩ^2 In (9), t_p denotes the Painleve^' time coordinate. Then the radial null geodesic is given by, ds^2 = 0 = - ℱ(r) dt^2_p + 2 √((1-ℱ(r))) dt_p dr + dr^2 hence we have, - ℱ(r) + 2 √((1-ℱ(r)))ṙ + ṙ^2 = 0 thus, ṙ = ± 1 - √((1-ℱ(r))) The positive (negative) sign refers to outgoing (incoming) geodesics. 
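Before evaluating the tunnelling integral, it is worth checking the quoted horizon shift explicitly. The following is our own first-order expansion, using only the expression for ℱ(r) and r_s = 2GM given above: writing r_H = r_s + δ and imposing ℱ(r_H) = 0 to linear order in β,

\[
0 \approx \frac{\delta}{r_s} + \beta\,\frac{128\pi^3}{5}\,\frac{G^3}{r_s^6}\,(108-97)
\quad\Longrightarrow\quad
\delta \approx -\beta\,\frac{1408\pi^3}{5}\,\frac{G^3}{r_s^5}
= -\beta\,\frac{44\pi^3}{5\,G^2 M^5},
\]

in agreement with the β-corrected radius r_H quoted above.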
The imaginary part of the action over the classically forbidden path is given by, ImS = Im∫^r_out_r_in p dr ImS = Im∫^r_out_r_in dr ∫^p_0 dp Here r_in is slightly inside the black hole, where the particle is produced and r_out is slightly outside the shrunken radius of the black hole after the particle has tunnelled through it. Changing the order of integration and the variables from momentum to energy via Hamilton's equation we get, ImS = Im∫^M-ω_M dH ∫^r_out_r_in1/ṙ dr = Im - ∫^ω_0 dω^'∫^r_out_r_in1/(1-√((1-ℱ(r))) dr where we have used equation (11). Now, since the radius is defined by ℱ(r) = 0, and we care only about the near horizon behaviour i.e. ℱ(r) << 1, we can safely use the following approximation 1/(1-√((1-ℱ(r)))≈2/ℱ(r). Thus, using equation (7) and Taylor expanding around r_H up to the first order, ℱ(r) turns out to be, ℱ(r) ≈[r - 2G(M-ω^') - β44 π^3 /5G^2 (M-ω^')^5] ∂_rℱ(r) The derivative is taken at r=r_H which up to linear order in β is given by ∂_rℱ|_r = r_h = 1/2G(M-ω^') + βπ^2/G^4 (M-ω^')^7 Note that the integration limits are r_in = 2GM - β44 π^3 /5G^2 M^5 - ϵ , r_out = 2G(M-ω^') - β44 π^3 /5G^2 (M-ω^')^5 + ϵ Check ref [23] for further details on this change in the black hole radius, in the presence of a scalar field. Substituting (16) in (15) and changing the integration variable to ϵ, ImS = Im∫^ω_0 -dω^'∫^0^+_0^-2/ϵ∂_rℱ|_r = r_h dϵ = ∫^ω_0 dω^'4π G(M-ω^')/(1 + 2 βπ^2 /G^3 (M-ω^')^6) Where we have used the property that Im(1/ϵ) = iπδ(ϵ) for this integral. Upon Taylor expansions and keeping terms only up to linear order in β, we get ImS ≈∫^ω_0 dω^' 4π G (M-ω^') - β∫^ω_0 dω^'8π^3/G^2 (M-ω^')^5 The imaginary part of the action, thus, comes out to be ImS = 4π G ω(M - ω/2) - 2 π^3 β/G^2[ 1/ (M-ω)^4 - 1/M^4] using this, the exponential part of the semi-classical tunnelling rate Γ∼ exp(-2 ImS) ∼ exp( -8π G ω(M - ω/2) + 4 π^3 β/G^2[ 1/ (M-ω)^4 - 1/M^4] ) neglecting the quadratic and higher order terms in ω, we get Γ∼ exp( -8π G ω M + 16 π^3 βω/G^2 M^5 + ωO(β^2)) This is our desired Boltzmann factor for a particle with energy ω. In the next section, we will use this exponential to define the Hawking temperature in terms of the coefficients of ω. § HAWKING TEMPERATURE Upon identifying (23) with the definition of the Boltzmann factor exp(-ω / T_H), we get the Hawking temperature to be T_H = 1/8π G M + βπ/4 G^4 M^7 + O(β^2) Some remarks are in order: * There are corrections to the semi-classical Hawking temperature induced by the Bel-Robinson coupling constant β. This was to be expected and somewhat trivial. * In their paper, Delgado and Ketov [2] calculated the Hawking temperature in terms of the surface gravity, using the derivative of the metric coefficient d ℱ(r) /dr. Comparing this result with what we have obtained above, we see that we get the correct mass dependence (M^-7) in the β term. However, we are off by a factor of 4 π in the coefficients of this term. This is a consequence of the crudeness of the semi-classical approximation which we have employed. * Comparing equation (24) with equation (1), we see that the β correction term to the Hawking temperature can be interpreted in terms of Hamilton-Jacobi-based quantum corrections to the semi-classical Hawking temperature, upon making the identification β∼ - λ_3 while ignoring all the other λ_i terms. 
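For completeness, the step from (23) to (24) can be made explicit; this is our own expansion, using only the expressions above. Identifying Γ ∼ exp(−ω/T_H) and keeping terms linear in β,

\[
\frac{1}{T_H} = 8\pi G M - \frac{16\pi^3\beta}{G^2 M^5}
\quad\Longrightarrow\quad
T_H \approx \frac{1}{8\pi G M}\left(1 + \frac{2\pi^2\beta}{G^3 M^6}\right)
= \frac{1}{8\pi G M} + \frac{\pi\beta}{4 G^4 M^7},
\]

which reproduces the corrected temperature quoted in (24).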
Thus, we see that the same modification of Hawking temperature can be interpreted as either due to a modification of the black hole geometry or due to the modification of the semi-classical structure of the tunnelling approximation [3]. § SUMMARY In this work, we employed the semi-classical method to calculate the tunnelling coefficient for a Schwarszchild-like black hole in SBR gravity. This allowed us to calculate the Hawking temperature and its deviations from the standard semi-classical result. Further, we compared our corrections results with those existing in the literature. We emphasize that commencing with a modification of the classical black hole geometry and incorporating the semi-classical Hawking temperature, can yield corrections to the Hawking temperature which is analogous to maintaining the classical background unaltered and, instead, going beyond the semi-classical approximation itself. These corrections are of course, similar up to dimensionless parameters. § ACKNOWLEDGEMENTS We would like to express our sincere gratitude to Kaustubh Singh and Sauvik Sen for their insightful comments and valuable feedback. We also thank the anonymous reviewers for their constructive criticism and helpful suggestions. 20 Hawking S. W. Hawking, "Particle creation by black holes", Comm. Math. Phys., 43, 199 (1975). Hawking temp derivation J. B. Hartle, & S. W. Hawking, "Path-integral derivation of black-hole radiance", Phys. Rev. D 13, 2188 (1976). Parikh M. K. Parikh, & F. Wilczek, "Hawking Radiation As Tunneling", Phys. Rev. Lett., 85, 5042 (2000). SBR Blackhole R. C. Delgado, & S. Ketov, "Schwarzschild-type black holes in Starobinsky-Bel-Robinson gravity", Phys. Lett. B., 838, 137690 (2023). Manji 1 R. Banerjee, & B. R. Majhi, "Quantum tunneling beyond semiclassical approximation", J. High Energy Phys., 2008(06) (2008). Manji 2 R. Banerjee, & B. R. Majhi, "Quantum tunneling and back reaction", Physics Letters B., 662, 0730-2693 (2008). Manji 3 B. R. Majhi, "Fermion tunneling beyond semiclassical approximation" Phys. Rev. D, 79(4), 044005 (2009). Modak S. K. Modak, "Corrected entropy of BTZ black hole in tunneling approach", Phys. Lett. B., 671(1), 167-173 (2009). Parikh 2 M. Parikh, "A Secret Tunnel Through the Horizon", General Relativity and Gravitation 36, 2419-2422 (2004). Feng 1 Z. W. Feng, H. L. Li, X. T. Zu, & S. Z. Yang, "Quantum corrections to the thermodynamics of Schwarzschild–Tangherlini black hole and the generalized uncertainty principle", European Physical Journal., C76, 212 (2016). Feng 2 Z. W. Feng, Q. C. Ding, & S. Z. Yang, European Physical Journal., C79, 445 (2019). Feng 3 Z. W. Feng, X. Zhou, S. Q. Zhou, & D. D. Feng, "Modified fermion tunneling from higher-dimensional charged AdS black hole in massive gravity", Annals of Physics, 416, 168144 (2020). Flan E. E. Flanagan, "Order-Unity Correction to Hawking Radiation", Phys. Rev. Lett., 127, 041301 (2021). Bagchi B. Bagchi, & S. Sen, "Tunneling of hawking radiation for BTZ black hole revisited", Int. J. Mod. Phys., 37(02) (2022). Wang R. Li, & J. Wang, "Hawking radiation, local temperatures, and nonequilibrium thermodynamics of the black holes with non-Killing horizon", Phys. Rev. D., 104 , 026011 (2021). Usage of tunnelling Z. Z. Ma, "Hawking temperature of a Kerr–Newman–dS black hole from tunneling", Class. Quantum Grav., 26, 045002 (2009). Visser M. Visser, "ESSENTIAL AND INESSENTIAL FEATURES OF HAWKING RADIATION" Int. J. Mod. Phys. D., 12(04) (2003). SBR S. V. 
Ketov, "Starobinsky–Bel–Robinson Gravity", Universe, 8, 351 (2022). Star A. A. Starobinsky,"A new type of isotropic cosmological models without singularity", Phys. Lett. B, 91, 99–102 (1980). Bel L. Bel, Colloq. Int. Cent. Natl. Rech. Sci., 91, 119-126 (1962). Robinson I. Robinson, "On the Bel - Robinson tensor", Class. Quantum Gravity, 14, A331–A333 (1997). deser S. Deser, "The Immortal Bel-Robinson Tensor", arXiv: 9901007. Nozari Eslamzadeh, Sareh, and Kourosh Nozari, "Tunneling of massless and massive particles from a quantum deformed Schwarzschild black hole surrounded by quintessence", Nuclear Physics B 959: 115136 (2020).
http://arxiv.org/abs/2406.08900v1
20240613075354
On Improving Error Resilience of Neural End-to-End Speech Coders
[ "Kishan Gupta", "Nicola Pia", "Srikanth Korse", "Andreas Brendel", "Guillaume Fuchs", "Markus Multrus" ]
eess.AS
[ "eess.AS", "cs.SD", "eess.SP" ]
§ ABSTRACT Error resilient tools like Packet Loss Concealment (PLC) and Forward Error Correction (FEC) are essential to maintain a reliable speech communication for applications like Voice over Internet Protocol (VoIP), where packets are frequently delayed and lost. In recent times, end-to-end neural speech codecs have seen a significant rise, due to their ability to transmit speech signal at low bitrates but few considerations were made about their error resilience in a real system. Recently introduced Neural End-to-End Speech Codec (NESC) can reproduce high quality natural speech at low bitrates. We extend its robustness to packet losses by adding a low complexity network to predict the codebook indices in latent space. Furthermore, we propose a method to add an in-band FEC at an additional bitrate of 0.8 kbps. Both subjective and objective assessment indicate the effectiveness of proposed methods, and demonstrate that coupling PLC and FEC provide significant robustness against packet losses. Error resilient tools like Packet Loss Concealment (PLC) and Forward Error Correction (FEC) have been an essential part of conventional speech codec systems. For applications like Voice over Internet Protocol (VoIP), where frequent packet losses and delays are unavoidable, such tools play a crucial role in maintaining the quality of service for end-users. In recent times, End-to-end neural speech codecs have seen a significant rise, due to their ability to transmit speech signal at very low bitrates. To obtain optimal quality, it is essential that the neural speech codecs also incorporate such error-resilient tools to handle packet losses. Recently proposed, the Neural End-to-End Speech Codec (NESC) efficiently encodes speech signal at a low bitrate of 3.2 kbps and is robust to noisy and reverberant speech signals. Extending the robustness of NESC to packet losses, we propose a low-complexity neural network that can perform PLC by predicting the lost codebooks indices based on past codebook latents. We also propose a simple method to perform FEC at an additional bitrate of 0.8 kbps.
Our method operates on the latent representation of NESC and is trained independently of the codec. Through both subjective and objective assessment, we evaluate the effectiveness of our proposed solution and show that FEC can be an effective concealment tool for NESC. § INTRODUCTION Voice over Internet Protocol (VoIP) is the most widely used application in modern digital communication systems. In order to ensure real-time communication, VoIP uses the User Datagram Protocol (UDP) in conjunction with the Real-Time Transport Protocol (RTP) to send encoded audio packets over the network <cit.>. Since UDP is an unguaranteed connectionless protocol, the transmission is prone to delay and jitter (delay variation) in packet arrival, and even to packet losses. Modern communication codecs must be capable of handling such packet delays and losses in order to maintain good quality of service. Basic PLC <cit.> techniques include silencing the lost frame, repeating the previously received frame, or some form of time-scaling. Such methods are not very effective and produce audible artefacts. The transmission jitter is generally compensated by a Jitter Buffer Management (JBM) <cit.> at the receiver side that can handle out-of-order packets and maintain a steady rate of playback. More advanced state-of-the-art communication codecs like 3GPP Enhanced Voice Service (EVS) <cit.> support two types of error resilient tools. The first type is PLC <cit.>, which extrapolates coded parameters such as line spectral frequencies (LSF) from previous frames and can also be guided by additionally transmitted parameters. The other type is in-band FEC <cit.>, where information about distant past frames is compactly coded and the resulting additional information is piggy-backed on the primary payload of future frames. When the current frame is declared as lost during the decoding process, a JBM can exploit the in-band FEC, since a future frame containing redundant information of the current frame might be available in the buffer. Transmitting redundant information in anticipation of a loss has to be done with care, usually in a channel-aware mode, since it puts additional strain on the network connection and can introduce additional latency. In recent times, Deep Neural Network (DNN)-based solutions have been shown to outperform conventional PLC methods for large bursts and high error rates. In the earliest DNN-based solution for concealment <cit.>, a small network of fully connected layers estimates the log power spectrum and the phase of the lost frame. The DNN-based PLC solutions are mostly predictive in nature, as they aim to estimate or generate the lost frames based on available past frames. Thus, autoregressive networks are widely used for PLC. In <cit.>, a Recurrent Neural Network (RNN)-based network is trained to predict the samples of the next frame given past samples as input. The network has limited concealment capability, as the sample prediction error may accumulate quickly over multiple frames, making it ineffective for burst losses. Another approach, based on WaveRNN <cit.>, uses a conditioning network that takes a mel-spectrogram as input and conditions an autoregressive network to generate the samples. A more powerful method in <cit.> uses a predictive network along with an autoregressive LPCNet vocoder. The predictive network estimates the features of lost frames, which are then used to condition LPCNet to generate the missing samples.
The powerful generation capability of Generative Adversarial Networks (GANs) has also been explored for PLC and in most cases outperforms autoregressive methods. GAN-based methods generally employ a generator that can produce an entire lost frame in one forward pass and are trained adversarially with multiple discriminators acting as a trainable loss function. These networks can generate the lost frame either in the time domain <cit.> or in the time-frequency domain <cit.>. All the aforementioned networks generally work as post-processors in combination with conventional or neural speech codecs. Such systems require further processing, such as cross-fading or overlap-add, to ensure a seamless transition from decoded to concealed frames and vice versa. With the advent of end-to-end self-supervised neural speech codecs like <cit.>, there is a need for more integrated error resilient tools for concealment. The common architecture of end-to-end codecs includes an encoder, a decoder and a Vector Quantizer (VQ) consisting of multiple residual stages to calculate a quantized representation of the encoder output, i.e., the latent. Some approaches have been developed to conceal lost latent representations <cit.>. In the TFnet codec <cit.>, the latent is masked to indicate a lost frame and is recovered either by an additional module after the decoder or by an optimized decoder capable of handling such frame losses. In <cit.>, an additional block called FD-PLC is inserted between the encoder and decoder, and is trained end-to-end to recover the lost quantized features. For in-band FEC, a neural-network-based solution has been developed for a conventional codec <cit.>, but to the best of our knowledge no work has been done so far for neural codecs. In this paper, we propose a method to perform PLC and in-band FEC in the latent domain for neural end-to-end speech coders like NESC. Our low-complexity PLC model uses the quantized latents of past frames and predicts the likelihood of the next codebook indices. Codebook index prediction has been proposed previously for entropy coding <cit.> or for predicting fine residual codebooks <cit.>, but not for PLC. Moreover, the networks used for these predictions are highly complex language models. Our contributions in this paper can be summarized as follows: * We propose a causal, convolutional, lightweight model trained to predict future codebook indices. During inference, the model can run auto-regressively to conceal burst losses. * We propose to distill the sum of multiple code-vectors of the residual VQ onto a single low-bitrate codebook and use it for concealment. * We propose an in-band FEC method for NESC that piggy-backs the low-bitrate codebook index on a future frame. Our solution adds only 0.8 kbps of additional bitrate and does not use a neural network, thus introducing no complexity overhead. * Our proposed method is trained independently of the codec and does not require fine-tuning or re-training of the codec. Also, it does not require any extra information regarding the occurrence of packet losses during the decoding process. The predicted and the distilled code-vectors are directly used as inputs to the neural decoder. § PROPOSED METHODS §.§ NESC NESC <cit.> is an adversarially trained end-to-end neural codec designed to generate good quality speech signals at low bitrates. It consists of a neural encoder, a neural decoder and a learned quantization layer. The encoder operates on a frame size of 10 ms with an additional 5 ms of lookahead and 5 ms of past samples.
It produces a learned latent representation, for each frame. The latent vector is then quantized with a residual VQ that learns multiple codebooks where the code-vectors in each subsequent codebook quantizes the residual from the previous ones. The output of the quantizer is a sum of one or more code-vectors. The paper <cit.> proposes to use 3 codebooks, each with 1024 code-vectors, thus, the codec operates at bitrates ranging from 1 to 3 kbps. For our implementation, we train a new NESC model that quantizes latent vector with 4 codebooks, each with 256 code-vectors. We found that this setup increases the quality of the codec and provides more scalability as the operating bitrate now ranges from 0.8 to 3.2 kbps. §.§ Codebook Distillation In our proposed model for PLC, we predict the codebook indices of a lost frame using past latent code-vectors. Because of multiple codebooks used during quantization, the prediction of all the indices requires multiple models that in-turn increases the complexity overhead. An obvious choice would be to use only the first codebook of NESC but this only provides a low quality concealment and does not model all variations in the speech signals. Hence, we propose a distillation method where a new single codebook is learned using the sum of code-vectors from multiple codebooks. We choose to distill the information from the first two codebooks (1.6 kbps) of NESC onto another "distilled codebook" with 256 code-vectors (0.8kbps). The choice of using only first two codebooks is motivated by the trade-off between achievable quality and effective distillation given a target codebook size. The distilled codebook is used for FEC as low bitrate redundant information as well as for PLC. §.§ PLC For concealment, a causal convolutional model predicts the newly trained distilled codebook index from the past code-vectors. The PLC model takes the code-vectors of last seven frames {C'(n-1), C'(n-2),..,C'(n-7)} from the distilled codebook as an input and outputs the conditional distribution of the codebook index c'(n) of the current frame. P_c'(c'(n) | C'(n-1), C'(n-2),....,C'(n-7)). The architecture of the proposed model contains a 1-D convolutional layer with kernel size = 7, followed by two 1-by-1 convolutional layer with kernel size = 1. Finally, the output is passed through a fully connected layer with 256 hidden units and a softmax layer that outputs the probability distribution over the possible 256 codebook indices. We use LeakyRelu activation after each convolutional layer. §.§ FEC Our FEC solution consists of sending the low bitrate distilled codebook index as a redundant data. The FEC method works in conjunction with JBM <cit.> and is made possible because of availability of future frames in the jitter buffer. As shown in Fig. <ref>, each (n+k)^th packet contains a primary data along with redundant information of the past n^th frame. When n^th packet is marked as lost, the corresponding redundant information at (n+k)^th frame can be employed for correction. The parameter k denotes the separation in terms of number of frame between the primary and the redundant payload and is called "FEC offset". The optimal value of k is dictated by the length of the jitter buffer and the network conditions. For optimal transmission, the offset can be made adaptive and optimized depending on the network conditions and is usually sent along with the packet. 
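To make the packetisation concrete, the sketch below illustrates one way the redundancy could be packed and consumed. It is our illustration of the mechanism described above, not the authors' implementation, and the payload layout is an assumption.

# Illustrative sketch of in-band FEC with offset K: packet n carries its own
# primary VQ indices plus, as redundancy, the distilled-codebook index of
# frame n - K. The decoder falls back to this copy when frame n - K is lost.
K = 6  # FEC offset in frames; the experiments later use an offset of six

def pack(n, primary_indices, distilled_indices):
    """Build the payload of packet n (the redundant part is None for n < K)."""
    return {"frame": n,
            "primary": primary_indices[n],
            "redundant": distilled_indices.get(n - K)}

def indices_for_frame(n, jitter_buffer):
    """Return the indices to feed the NESC decoder for frame n."""
    if n in jitter_buffer:                      # primary payload arrived in time
        return jitter_buffer[n]["primary"]
    future = jitter_buffer.get(n + K)           # look for the piggy-backed copy
    if future is not None and future["redundant"] is not None:
        return [future["redundant"]]            # guided correction via in-band FEC
    return None                                 # otherwise fall back to the PLC predictor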
FEC is particularly advantageous as it can provide an optimal guided correction as well as assist the PLC network to reduce its prediction error in case of burst losses. Depending on the quality or robustness requirements, the method is extremely flexible and can be used in multiple ways: a distilled codebook of larger bitrate can provide better quality of concealment whereas multiple redundant information from different offsets can be appended with primary information to provide better robustness against delayed and lost packets. Both design choices come at the cost of additional bitrate. For sake of simplicity, in this paper, we only explore the low-bitrate version of FEC at 0.8 kbps with a fixed offset and single redundant frame transmitted along with the primary frame. §.§ Heuristics In case of long burst losses, performing predictive generation for the concealment may lead to inaccurate or falsified speech content. As countermeasure, we adjust the concealment based on the type of last received frame. If the last available frame is voiced we conceal the frames for 100 ms of burst and in case of an unvoiced frame we stop the concealment after 60 ms. There is no direct classification and segmentation of speech at the decoder, but rather a mapping between distilled codebook indices and the voiced and unvoiced classes. The mapping is performed off-line by simply observing the statistics of code-vectors on different speech segments. It was found that the codebook indices can easily be clustered into a silence, a voiced and an unvoiced class. We maintain a list of indices for different classes and use it to classify the frames during inference. § EXPERIMENTAL SETUP §.§ Training & Inference The training of the PLC model requires a pre-trained NESC model. The sum of first two code-vectors of trained codebooks of NESC is used as input for distillation. The new code-vectors are updated with exponential moving average of the input with decay of 0.99 and MSE loss between input and output code-vectors is used for training. The distillation only requires few epochs for convergence after which the PLC model is trained with a teacher forcing method. A sequence of latent vectors corresponding to two seconds of audio data is presented to the PLC network that predicts the indices of subsequent code-vectors in the distilled codebook. Negative log-likelihood loss is used for training which is done for 420k iterations using ADAM optimizer at a learning rate of 0.0001 with batch size of 128. During inference, the received past primary information are re-quantized using the distilled codebook. We maintain a history buffer containing the last seven frames of distilled code-vectors which is then used as an input to the PLC network in case of packet loss. At the softmax layer, we select the index with maximum probability for concealment. In case of burst losses, the history buffer is injected with predicted code-vectors such that the PLC model can run auto-regressively. For in-band FEC, an offset of six frames was chosen and we use the same JBM as used in EVS <cit.>. Thus, the overall bitrate of NESC with FEC is 4 kbps. For both cases of PLC and in-band FEC, the predicted or the redundant distilled code-vectors is used as inputs to NESC decoder which then produces corresponding speech signals. §.§ Datasets & Loss Traces The dataset used for training the distilled codebook and the PLC model was total of 280 hours of speech from LibriTTS dataset <cit.> and VCTK dataset <cit.> at 16 kHz. 
The speech signal was also augmented with background noise from the DNS Challenge dataset <cit.> and reverberation from the SLR28 dataset <cit.>. We used two different datasets for evaluation: The blind dataset used from the Deep-PLC challenge 2022 <cit.> was used for objective evaluation. It contains 966 recordings along with corresponding loss traces. The loss traces are divided into three subsets according to the corresponding burst lengths of 120 ms, 320 ms and 1000 ms. For subjective evaluation, we select 24 items from the NTT-AT <cit.> dataset equally balanced between two female and two male speakers. We use two delay-loss profiles with the highest error rates from  <cit.>. It is obtained from real-world call logs of RTP packet collected in varying network conditions. Unlike the previous traces, it not only contains indication of lost packets but also marks the packet arrival time required by the JBM. In addition to lost packets, the JBM can declare a packet as lost if the arrival time of the packet exceeds the buffer capacity. All loss traces are provided for 20 ms frame size whereas NESC operates at 10 ms frame size. In order to achieve synchronized frame losses for comparison across all baseline methods, we pack two frames of NESC in a single packet and simulate packet loss with given traces. §.§ Evaluation For evaluation, we carry out both objective and subjective assessment. For objective assessment, we use POLQA v3 <cit.>, PLCMOS <cit.> and VISQOL v3 <cit.>. PLCMOS is exclusively designed to estimate the Mean Opinion Score (MOS) when some parts of speech signals are concealed for missing packets whereas VISQOL is designed to evaluate the overall quality of speech signals. Both the methods try to predict the MOS of subjective evaluations. They are probably better suited for our evaluation because the signals generated by neural codecs or other generative networks do not necessarily preserve the waveform and hence are penalized on other audio-feature based objective evaluation like POLQA. For subjective assessment, we conduct a P.808 ACR listening test <cit.> using the Amazon Mechanical Turk service involving 24 participants and accumulating 96 opinion scores per condition. §.§ Baseline Methods For evaluation, the proposed model is compared to the following baseline models: * For comparison with conventional methods, we use EVS codec at 5.9 kbps, 8 kbps and 13.2 kbps. It performs PLC at all bitrates, but only at 13.2 kbps, it supports FEC in the Channel-Aware (CA) mode and is used to compare our FEC solution. The 5.9 kbps codec is used for objective evaluation and the other bitrates are used for P.808 because of their comparable quality with NESC. * For PLC with a DNN-based solution, we select the LPCNet-based PLC method. The model is open source and we utilize the available pre-trained models in causal mode. Since it works as a post-processor, for comparison and in order to evaluate only the performance over the lost frames, the original signal is decoded by NESC and the concealment is performed over it. To keep the implementation simple, we do not use JBM with this method but use the traces obtained from the JBM with NESC to create loss traces. * The naive baseline is the zero-filled NESC output where we simply select the codebook index corresponding to a silent frame for lost packets and decode it. § RESULTS AND DISCUSSION In Table <ref>, we present the average objective scores obtained for various methods. 
In all the measures, the zero-filled NESC shows the lowest scores which illustrates the distortion caused by packet losses without concealment. The quality measures POLQA and VISQOL rates the concealment provided by EVS-PLC and LPCNet-PLC as the highest whereas PLCMOS rates our proposed solution as the best. This difference in quality measures can be attributed to the fact that our proposed PLC solution operates at the lowest bitrate with a very coarsely quantized level of the latent. The quality of the speech signal generated for concealment in our proposed solution is somehow equivalent to operating NESC at 0.8 kbps. On the other hand, the LPCNet-PLC operates at the output signal and uses a dedicated additional neural vocoder with calculated un-quantized feature for generation. However, given the low computational overhead that the model entails, NESC PLC shows an interesting trade-off between performance and complexity, and provides substantial benefit over the zero-filled baseline. We do not present the objective assessment of the FEC solution because of the paucity of loss traces with packet arrival time. The subjective scores are split into two parts based on the loss profile used and are shown in Figure <ref> and Figure <ref>. Profile 1 and 2 simulate packet loss rates of about 8% and 10%, respectively. In comparison to Profile-1, Profile-2 contains burst losses of higher lengths and simulates higher error rates due to delays in packet arrival time. We include EVS at multiple bitrates to understand the granularity of quality. In clean channel, without packet losses, the listeners reported NESC at 3.2 kbps to have similar quality as EVS 13.2 kbps. EVS in CA mode shows a slight drop in quality because the CA mode reserves 0 to 3.6 kbps of bitrate per frame for FEC. In error-prone channel, the NESC PLC performs at the same level as EVS 13.2 kbps and is slightly below the LPCNet-PLC method. On the other hand, the listening test results show the effectiveness of our proposed low-bitrate FEC solution. It is at least on par with the CA mode of EVS and is better than compared stand-alone PLC solutions. The results show that, for neural codecs, in-band FEC in conjunction with PLC is capable of providing very high-quality error resilience[Check our demo samples at: https://fhgspco.github.io/nesc_plc_fec/<https://fhgspco.github.io/nesc_plc_fec/>]. In terms of complexity, our proposed PLC model contains 0.6 million parameters and has a complexity of 65.5 MFLOPS. At 10% loss of packets, the integrated solution of NESC with PLC shows 3% decrease in real-time factor compared to stand-alone NESC. The measurement was done on a single thread of an Intel(R) Core(TM) i7-6700 CPU at 3.40GHz. § CONCLUSION In this paper, we provide error resilient tools for end-to-end neural coders, taking into account real constraints on both the added complexity and the network characteristics. A low-complexity PLC model is proposed, which operates directly in the latent domain and exploits vector quantization, taking advantage of the generative capability of the decoder. In addition, to limit the need for concealment, we proposed the use of in-band FEC, to correct and decode lost packets using the redundant information transmitted at an additional bitrate of 0.8 kbps. In future work, we intend to extend this method to other end-to-end neural codecs and evaluate its effectiveness. We also plan to explore the use of neural networks for in-band FEC. 
In summary, the proposed PLC model predicts indices of the distilled VQ codebook for NESC, operates in a causal mode, and has very low complexity, making it a well-integrated solution for end-to-end neural codecs. The complementary in-band FEC solution transmits redundant distilled code-vectors at an additional bitrate of 0.8 kbps and, used alongside the proposed PLC model, provides state-of-the-art concealment results.
http://arxiv.org/abs/2406.08349v1
20240612155542
Utilizing Navigation Path to Generate Target Point for Enhanced End-to-End Autonomous Driving Planning
[ "Yuanhua Shen", "Jun Li" ]
cs.RO
[ "cs.RO" ]
§ ABSTRACT In recent years, end-to-end autonomous driving frameworks have been shown to not only enhance perception performance but also improve planning capabilities. However, most previous end-to-end autonomous driving frameworks have primarily focused on enhancing environment perception while neglecting the learning of autonomous vehicle planning intent. Within the end-to-end framework, this paper proposes a method termed NTT, which obtains explicit planning intent through the navigation path. NTT first generates the future target point for the autonomous vehicle based on the navigation path, thereby enhancing planning performance within the end-to-end framework. On one hand, the generation of the target point allows the autonomous vehicle to learn explicit intention from the navigation path, enhancing the practicality of planning. On the other hand, the planning trajectory generated from the target point can adapt more flexibly to environmental changes, thus effectively improving planning safety. We achieved excellent planning performance on the widely used nuScenes dataset and validated the effectiveness of our method through ablation experiments. § INTRODUCTION A robust autonomous driving system not only requires effective perception of the environment but also the ability to undertake rational and safe planning based on environmental and navigation information. Autonomous driving algorithms typically consist of several subtasks, including 3D object detection <cit.>, map segmentation <cit.>, motion prediction <cit.>, 3D occupancy prediction <cit.> and planning <cit.>. In recent years, end-to-end approaches <cit.> have integrated multiple independent tasks into multi-task learning, optimizing the entire system, including intermediate representations, towards the final planning task. UniAD <cit.> integrates six subtasks, including object detection, object tracking, map segmentation, trajectory prediction, grid prediction, and planning, into a unified end-to-end network framework for the first time. This approach achieves a comprehensive, general full-stack driving model and improves the performance of all tasks compared to previous methods. However, most existing end-to-end methods underestimate the importance of navigation information in planning, often conflating planning and prediction into the same task. Learning-based planning and prediction algorithms are highly similar in their forms of representation, but they differ crucially in whether the intent is known. Prediction involves forecasting the future trajectories of agents based on their current and past states, without knowledge of their intentions; the challenge in prediction lies in obtaining a highly uncertain multimodal distribution of future outcomes <cit.>. In planning, the intention of the ego vehicle is explicit, and the task is to generate a safe and comfortable planning trajectory <cit.>. Some methods <cit.> constrain the planning of the autonomous vehicle using discrete navigation commands (e.g., go straight, turn left, turn right). However, our experiments suggest that solely relying on simple navigation commands is insufficient to learn explicit intention. 
In our paper, we propose a method termed NTT, which utilizes the navigation path to constrain planning (shown in Figure <ref>), thus clarifying driving intent and enhancing planning performance. Specifically, we first utilize the navigation path to generate potential target point, which are then interacted with the environmental information to obtain the complete planning trajectory. This enables us to obtain flexible planning trajectory that adapt to environmental changes under clear intention, thereby improving the correctness and safety of the planning. It is worth mentioning that our navigation paths are obtained from commercial navigation software. This type of navigation data is readily available and has practical significance. Due to the inherent granularity limitations of commercial navigation paths, there may be significant deviations between the starting point of the navigation path and the position of the ego vehicle. Therefore, we only consider the directional information of navigation paths. Through experiments, we have demonstrated the effectiveness of our modeling approach for navigation paths. Our contributions can be summarised in the following. ∙We introduce a planning methodology termed NTT that integrates navigation path data into an end-to-end framework. NTT first utilizes the navigation path to constrain the generation of the target point, and then generates the complete trajectory based on the target point. NTT enhances the ego vehicle's understanding of driving intentions and improves planning safety. ∙We enriched the nuScenes <cit.> dataset by incorporating navigation path information, offering valuable data references for future research endeavors. ∙NTT achieved outstanding end-to-end planning performance on the nuScenes <cit.> dataset, illustrating the superiority of our methodology. § RELATED WORK §.§ Perception Perception forms the foundation of autonomous driving. In recent years, bird's-eye view (BEV) representation <cit.> has emerged as a common strategy, enabling effective fusion of multimodal data and demonstrating significant potential across perception tasks, including 3D object detection <cit.>, map segmentation <cit.>, and 3D occupancy prediction <cit.>. In the realm of 3D object detection, DETR3D <cit.> utilizes 3D queries to index corresponding image features. In map segmentation, HDMapNet <cit.> integrates data from cameras and LiDAR sensors to predict vectorized map elements. MapTR <cit.> and MapTRv2 <cit.> model map elements as sets of points with a set of equivariant transformations, accurately describing the shape of map elements and stabilizing the learning process. StreamMapNet <cit.> employs multi-point attention and temporal information to enhance the stability of large-scale, high-precision map reconstruction. §.§ Prediction Accurate prediction of the movements of traffic participants is crucial for ensuring the safety of planning. Some prediction methods utilize historical trajectories and HD maps as input <cit.>. TNT <cit.> samples anchor points from the roadmap and generates trajectories based on these points. The trajectories are then scored, and non-maximum suppression (NMS) is employed to select the final trajectory set. DenseTNT <cit.> improves upon TNT <cit.> by densely sampling points on the map and using a goal set predictor module to output multimodal prediction trajectories. 
Additionally, some prediction methods <cit.> use agent and map features extracted from BEV features as inputs and employ attention networks <cit.> to output predicted trajectories. §.§ Planning Traditional rule-based planners <cit.> has achieved significant progress, but its lack of generalization remains a challenge. Learning-based planners <cit.> show immense potential due to their compatibility with end-to-end autonomous driving frameworks and their ability to improve performance through large-scale data. ST-P3 <cit.>, as the first to propose an end-to-end autonomous driving framework based on surround-view cameras, takes multiple snapshots from surround-view camera images as input and sequentially processes them through perception, prediction, and planning modules to output the final planning path. UniAD <cit.> cleverly integrates multiple perception and prediction tasks, improving the performance of all tasks. VAD <cit.> converts rasterized map representations into vectorized representations and further improves planning performance through three instance-level constraints. GenAD <cit.> models autonomous driving within a generative framework, simultaneously outputting prediction and planning results. However, these end-to-end planning methods have not effectively utilized navigation information. In this paper, we explore how to leverage navigation paths within an end-to-end framework to obtain planning trajectories with clear intentions. § METHODOLOGY This section presents our methodological framework, as depicted in Figure <ref>. We begin by introducing the ego-centric scene representation, where multi-frame, multi-view images serve as input, subsequently transformed into an ego-centric scene description encompassing map elements and traffic participants (Sec. <ref>). Subsequently, we elucidate the process of integrating navigation path data into the nuScenes <cit.> dataset and expound upon the modeling of this navigation information (Sec. <ref>). Furthermore, We provide a detailed explanation of how to utilize this navigation data to generate future target point for the ego vehicle, followed by the generation of complete planning trajectory based on the generated target point (Sec. <ref>). Finally, we expound upon the training methodology for our end-to-end planning framework utilizing the navigation path (Sec. <ref>) §.§ Scene Encoder The first step in end-to-end autonomous driving is to obtain a high-level scene description from low-level image data. Initially, we employ a convolutional network <cit.> and feature pyramid network <cit.> to extract multi-scale image features from sensor data. Subsequently, deformable cross-attention <cit.> are utilized to convert these image features into BEV representations through a set of BEV queries. Following this, instance-level map tokens and agent tokens are employed to learn vectorized representations of map elements and static as well as dynamic information of traffic participants. Collectively, map tokens E_M and agent tokens E_A are concatenated to form tokens E_scene , which collectively describe the entire driving scenario. BEV to map. When each lane element is considered as an instance, modeling the relationships between lane elements and agents becomes straightforward. Thus, we employ a set of instance-level map tokens <cit.> E_M to represent the map, where each map token can be decoded into a set of points in BEV space along with corresponding class scores. 
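For intuition, the toy sketch below mimics how instance-level map and agent tokens could be pooled from flattened BEV features and concatenated into scene tokens E_scene. It only reproduces the token bookkeeping: standard multi-head attention stands in for the deformable attention used in the paper, a single shared attention layer replaces the separate map and agent decoders, and the small BEV grid in the demo is chosen purely to keep the example light.

```python
import torch
import torch.nn as nn

class SceneTokens(nn.Module):
    """Toy illustration of pooling BEV features into map/agent tokens and
    concatenating them into E_scene; not the architecture used in the paper."""
    def __init__(self, dim=256, n_map=100, n_agent=300):
        super().__init__()
        self.map_q = nn.Parameter(torch.randn(n_map, dim) * 0.02)
        self.agent_q = nn.Parameter(torch.randn(n_agent, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, bev):                              # bev: (B, H*W, dim)
        b = bev.shape[0]
        e_m, _ = self.attn(self.map_q.unsqueeze(0).expand(b, -1, -1), bev, bev)
        e_a, _ = self.attn(self.agent_q.unsqueeze(0).expand(b, -1, -1), bev, bev)
        return torch.cat([e_m, e_a], dim=1)              # E_scene = [E_M ; E_A]

# Small BEV grid just for a shape check (the paper uses a 200 x 200 BEV grid)
e_scene = SceneTokens()(torch.randn(2, 50 * 50, 256))
print(e_scene.shape)                                     # torch.Size([2, 400, 256])
```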
In this paper, we consider four types of map elements (i.e., lane divide, road boundary, pedestrian crossing and lane centerline). Particularly for lane centerline, according to LaneGap <cit.>, we use a complete and continuous path as the basic prediction unit, serving to indicate the direction of travel. Additionally, We utilize the point set forming the lane centerlines as candidate points to assist in generating future target point for the ego vehicle (Sec.<ref> ). BEV to agent. Similarly, we use a set of instance-level agent tokens <cit.> E_A to represent agents. Through a 3D object detection head, each agent token can be decoded into the position, category scores, and heading angle. To further enrich the motion information of agent tokens, we utilize attention mechanisms <cit.> to facilitate interactions between agents, as well as interactions between agents and the map. We then predict the future multimodal trajectories for each agent and output the probability score for each modality. §.§ Navigation Path Navigational Path Acquisition. The nuScenes <cit.> dataset inherently lacks navigation data, leading some research practices to construct navigation commands (e.g., go straight, turn left, turn right) based on the ground truth trajectory of the ego vehicle. However, simplistic commands cannot provide specific and clear driving instructions for the ego vehicle. Thus, a more suitable approach involves guiding planning through the navigation path. To further investigate whether end-to-end autonomous driving systems can achieve correct planning through the navigation path, we utilize the Google Maps API to obtain the navigation path for each scene in the nuScenes <cit.> dataset. Specifically, we first acquire the start and end points for each scene, and convert them to latitude and longitude coordinates based on their map locations. these points are then used as input to obtain the rough navigation path through the Google Maps API. Next, we interpolate the obtained the navigation path to fix the distance between each pair of navigation path points, setting the distance to 5 meters in experiments. The interpolated path serves as the final navigation path. Navigation Path Model. Given that only the future navigation path can be obtained during the actual driving process, we designate the nearest navigation path point to the current ego vehicle position as the starting point of the navigation route. The starting point, along with the subsequent m points, form the 2D navigation path group P ∈ℝ^m + 1,2 for the current frame. Considering the meter-level accuracy of the navigation path obtained from the Google Maps API, which may exhibit significant positional deviation from the ground truth trajectory of the ego vehicle, our modeling only focuses on the directional information of the navigation path. Specifically, we first compute the difference between each pair of consecutive points in the path group P to derive positional vectors V_p, which denote the relative spatial relationship within the path group. V_p = { p_i + 1 - p_i}_i = 1^m Here, p_i represents the i-th point in the path group P. Furthermore, to enhance the representation of directional features in the navigation path, we introduce the heading angle of the navigation path as an additional set of parameters. The heading angle can be calculated as the arctangent of the navigation position vector v_p_i = ( x_i,y_i). 
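A minimal sketch of the direction-only path representation is given below: it selects the navigation point nearest the ego vehicle as the starting point, forms the position vectors of the path group, and appends the cosine and sine of the heading angle whose formula is given immediately below. Function and argument names are illustrative, and the segment count m is a placeholder rather than the value used in the experiments.

```python
import numpy as np

def navigation_direction_features(path_xy, ego_xy, m=10):
    """Build the direction-only representation N of the navigation path.

    path_xy : (L, 2) interpolated navigation points (fixed ~5 m spacing)
    ego_xy  : (2,) current ego position, used only to pick the starting point
    m       : number of segments kept ahead of the starting point
    """
    start = int(np.argmin(np.linalg.norm(path_xy - ego_xy, axis=1)))   # nearest point
    group = path_xy[start:start + m + 1]                               # path group P
    vecs = np.diff(group, axis=0)                                      # V_p: p_{i+1} - p_i
    heading = np.arctan2(vecs[:, 1], vecs[:, 0])                       # h_i = atan2(y_i, x_i)
    return np.concatenate([vecs,
                           np.cos(heading)[:, None],
                           np.sin(heading)[:, None]], axis=1)          # (m, 4) features
```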
The formula for the heading angle h_i is as follows: h_i = atan2( y_i,x_i) To mitigate potential numerical instability and to provide a more compact representation of directional features, we leverage the sine and cosine functions of the heading angle. We concatenate the position vector, the sine of the heading angle, and the cosine of the heading angle to obtain the directional representation of the navigation path at the current position, denoted as N: N = {( v_p_i,cos( h_i),sin( h_i)) }_i = 1^m §.§ Navigation to Planning Navigation to target. In prediction, some methods <cit.> capture future uncertainty by predicting possible targets for agents, achieving state-of-the-art performance. Through experimentation, we discovered that applying such target-based methods in planning can effectively enhance planning safety. Unlike prediction, planning is a task with clear intention. Therefore, we additionally incorporate navigation path information to constrain the generation of the target point. We illustrate the proposed method of constraining the generation of the target point using navigation path in Figure <ref>. Specifically, to better capture fine-grained lane information, we drew inspiration from DenseTNT <cit.>'s Dense Goal Probability Estimation method, generating a dense probability distribution of target candidate points from the scene context. The point with the highest probability is selected as the target point. It is worth noting that DenseTNT <cit.> sprinkles points around the lane centerlines based on high-definition maps. However, the lane centerlines obtained from perception are not always highly accurate and may even be unidentifiable. Therefore, in addition to sprinkling points around the perceived lane centerlines, we also scatter some dense points within a certain distance ahead of the vehicle. The purpose is to enable the vehicle to move forward within a small range when perception is not reliable. In the Navigation-aware Target Encoder, We first utilize an MLP to map the navigation path representation into a higher-dimensional representation space, aiming to obtain richer feature representations. As our navigation path representation only encompasses directional information, we augment it with the encoded ego vehicle position to constitute the comprehensive navigation features E_N. E_N = MLP(N) + MLP( P_ego) Where P_ego = ( x_ego,y_ego) represents a 2D coordinate. Subsequently, we encode the 2D coordinates of target candidate points using another MLP to obtain the feature matrix F_T. Afterwards, we concatenate the feature matrix F_T with E_N, and perform multiple rounds of feature fusion. This iterative process aims to continuously explore the correlation between target candidate points and the navigation path, thereby updating F_T. F_T^l + 1 = f^l( F_T^l,E_N) Where f^l( · ) represents the feature fusion function for the l-th layer, which we denote it an MLP. [...] denotes concatenation operation. According to DenseTNT <cit.>, the local information of candidate points within the scene can be learned through an attention mechanism. F_T^' = CrossAttention (q, k, v) q = F_T,k = v = E_scene E_scene represents scene token embeddings, composed of map tokens E_M and agent tokens E_A. The predicted score for each target candidate can be represented as: α_i = exp( g( F_i) )/∑_n = 1^Nexp( g( F_n) ) Where g( · ) is implemented with an MLP. Trajectory Completion. 
By constraining with the navigation path, we generate a probability distribution of the ego vehicle's target position from the scene information. We designate the point with the highest probability as the target point P_target. Based on the P_target, we further generate the complete planning trajectory. Initially, we employ a cross-attention network <cit.> to interact the ego-vehicle's query Q_ego with the scene information E_scene. Here, the ego-vehicle's query Q_ego is the sum of the encoding of the target position P_target and E_N. The purpose of this operation is to dynamically adjust the focus on scene information according to the driving intention. This process can be formulated as: Q_ego^' = CrossAttention (q, k, v) q = Q_ego = E_N + MLP( P_target) k = v = E_scene Finally, we concatenate Q_ego and Q_ego^', and input them into a 3-layer MLP decoder to generate the complete trajectory T̂ = {ŝ_1,ŝ_2,…,ŝ_k}. k represents the future trajectory planned for k frames. T̂ = MLP( Q_ego, Q_ego^') where [...] denotes concatenation operation. §.§ Training Scene Learning Loss. We follow the decoder design of VAD <cit.>'s scene perception, dividing scene learning into two parts: one is the learning of vectorized maps, and the other is the learning of traffic participants' information. The map loss ℒ_map consists of l_1 regression loss between predicted map points and the ground truth map points, as well as focal loss <cit.> for map classification. The loss for traffic participants ℒ_agent includes 3D detection loss and motion prediction loss. The construction of detection loss is similar to that of map loss. For motion prediction, we select the trajectory with the smallest displacement error (minFDE) from multiple trajectories and compute the l_1 loss with the ground truth trajectory, which serves as the motion prediction loss. Target Probability Estimation Loss. Following DenseTNT <cit.>, we assign a score of 1 to the target candidate point closest to the endpoint of the ego vehicle's ground truth trajectory, and 0 to the rest. We compute the binary cross-entropy loss between the predicted target scores and the ground truth target scores as the target probability estimation loss ℒ_target. Planning Loss. Following the design of planning loss in VAD <cit.>, we divide the planning loss ℒ_plan into three parts: collision loss ℒ_col between the ego vehicle and other vehicles or road boundaries, directional loss ℒ_dir between the planning trajectory vector and lane vector, and l_1 regression loss ℒ_reg between the planning trajectory and the ego vehicle's ground truth trajectory. ℒ_plan can be formulated as: ℒ_plan = ω_1ℒ_col +  ω_2ℒ_dir +  ω_3ℒ_reg where ω_1, ω_2, and ω_3 are balance factors. We adopt an end-to-end paradigm to train our proposed model, where the overall loss is the weighted sum of all the aforementioned losses. ℒ = ω_mapℒ_map +  ω_agentℒ_agent +   ω_targetℒ_target + ω_planℒ_plan § EXPERIMENTS §.§ Dataset and Metric We evaluate our proposed method on the popular nuScenes <cit.> dataset, which contains 1000 driving scenes, each lasting roughly 20 seconds. Keyframe annotations are provided at 2Hz. Each sample includes RGB images from 6 cameras, covering a 360^∘ horizontal FOV of the ego vehicle. Following existing end-to-end autonomous driving methods <cit.>, we use the L2 displacement error and collision rate to measure the quality of planning. The L2 displacement error measures the L2 distance between the planning trajectory and the ground truth trajectory. 
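Returning briefly to the training objective described above, the overall loss can be assembled as a simple weighted sum; the sketch below shows the composition only, and all weight values and dictionary keys are placeholders rather than the settings used in the paper.

```python
def total_loss(losses, w_map=1.0, w_agent=1.0, w_target=1.0,
               w_plan=1.0, w_col=1.0, w_dir=0.5, w_reg=1.0):
    """Weighted sum of the training terms: map, agent, target probability (BCE),
    and the planning loss built from collision, direction, and l1 regression terms.
    `losses` is a dict of already-computed scalar loss values; weights are placeholders."""
    plan = (w_col * losses["collision"]
            + w_dir * losses["direction"]
            + w_reg * losses["l1_regression"])
    return (w_map * losses["map"]
            + w_agent * losses["agent"]
            + w_target * losses["target_bce"]
            + w_plan * plan)
```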
The collision rate measures the frequency of collisions with other traffic participants under the planning trajectory. We use the past 2 seconds of information as input and evaluate the planning performance for the next 3 seconds. §.§ Dataset and Metric We adopt ResNet50 <cit.> as the backbone network to extract image features, with input images resized to 640 × 360. We employ 200 × 200 BEV tokens s to perceive the driving scene within a range of 60m × 30m. We set the number of map tokens to 100 and agent tokens to 300. Each map token further comprises 20 point tokens to represent map points. For training, we employ the AdamW <cit.> optimizer with a Cosine Annealing <cit.> scheduler. The initial learning rate is set to 1 × 10^- 4 with a weight decay of 0.01. We trained for 60 epochs on 4 NVIDIA Quadro RTX 6000 GPUs, with a total batch size of 4. §.§ Main Results We compared NTT with state-of-the-art end-to-end driving methods in Table <ref>. It is evident that NTT attains state-of-the-art performance. NTT achieved a notable reduction of 0.11m in average planning displacement error. Additionally, we observed a 9% decrease in average collision rates, with a particularly substantial reduction observed at the trajectory endpoint. These results demonstrate the effectiveness of NTT in improving both safety and accuracy in end-to-end planning. §.§ Ablation Study Significance of target generation module. We conducted an ablation study to verify the effectiveness of the target generation module in Table <ref>. We attempted to remove the generation of the target point and directly use the encoded features of the navigation path as the query. Through an attention mechanism interacting with the driving scene, the updated features were input to the MLP to generate the final trajectory. As shown in the table <ref>, when only the navigation path was used without the target generation module, there was a significant increase in collision rates, and a certain degree of increase in the displacement error of the planning trajectory. Through the generated target point, a flexible planning trajectory could be obtained based on the environment while keeping the target position unchanged. Experimental results demonstrate that this module effectively reduces the collision rate and improves accuracy to a certain degree. The role of the navigation path. We also analyzed the effectiveness of NTT in utilizing the navigation path for end-to-end planning. The quality of intent learning is primarily reflected in the performance during turns, while planning during straight driving relies more on environmental perception for obstacle avoidance. Therefore, we conducted experiments in some turning scenarios. Specifically, we evaluated scenes from the nuScenes <cit.> validation set where the lateral movement of the ego vehicle's ground truth trajectory exceeded 2 meters within the next 3 seconds. In Table <ref>, "tgt+emb" indicates the absence of using the navigation path as prior knowledge for generating the target point. Instead, the target point is directly generated from the driving environment through a learnable embedding. "tgt+cmd" represents the use of previous end-to-end methods (i.e., using navigation commands) to select the target point. "tgt+path" refers to NTT, which utilizes the navigation path as prior knowledge for the target point generation. Compared to VAD <cit.>, NTT significantly improves planning performance in turning scenarios. 
We observed a reduction of 0.48 meters in average planning displacement error and a 57% decrease in collision rates. The comparison between "tgt+path" and "tgt+cmd" results highlights the superiority of navigation paths over navigation commands, while the comparison between "tgt+cmd" and "tgt+emb." further emphasizes the disadvantages of navigation commands, which may even have a negative effect on intent learning. Overall, our method NTT, which utilizes the navigation path as prior knowledge to generate the target point before planning trajectories, achieves the best planning performance, demonstrating the superiority of NTT in intent learning. Visualizations. We provide visual results of NTT and compare them with VAD <cit.>. We adopt the visualization approach of VAD <cit.>, showcasing mapping, detection, motion prediction, and planning results from a bird's-eye view perspective. Comparisons are made under different weather conditions: sunny, cloudy, and nighttime. In Figure <ref>, it can be observed that, compared to VAD <cit.>, NTT can capture more subtle turning intentions and achieve planning that better conforms to road structures, thus resulting in safer and more accurate planning paths. This demonstrates the superiority of NTT in intent learning and collision rate reduction. § CONCLUSIONS In this paper, we introduce NTT, a planning method that integrates navigation path information within an end-to-end framework. We explore a design that first utilizes the navigation path to constrain the generation of the target point, followed by generating the complete planning trajectory based on the target point. Extensive experiments on the nuScenes <cit.> dataset demonstrate the superior planning performance of our proposed approach, NTT. In the future, further exploration is warranted on how to deploy end-to-end autonomous driving frameworks in real vehicles and achieve safe and efficient point-to-point planning. IEEEtran
http://arxiv.org/abs/2406.08239v1
20240612140604
Infinite-dimensional Frobenius Manifolds Underlying the genus-zero Universal Whitham Hierarchy
[ "Shilin Ma" ]
math-ph
[ "math-ph", "math.MP" ]
http://arxiv.org/abs/2406.08007v1
20240612085417
Enhancing phase sensitivity in Mach-Zehnder interferometer with various detection schemes using SU(1,1) coherent states
[ "Nour-Eddine Abouelkhir", "Abdallah Slaoui", "El Hassan Saidi", "Rachid Ahl Laamara", "Hanane El Hadfi" ]
quant-ph
[ "quant-ph" ]
LPHE-Modeling and Simulation, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco. Corresponding author: abdallah.slaoui@um5s.net.maLPHE-Modeling and Simulation, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco.Centre of Physics and Mathematics, CPM, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco. LPHE-Modeling and Simulation, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco.Centre of Physics and Mathematics, CPM, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco.College of Physical and Chemical Sciences, Hassan II Academy of Sciences and Technology, Rabat, Morocco. LPHE-Modeling and Simulation, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco.Centre of Physics and Mathematics, CPM, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco. LPHE-Modeling and Simulation, Faculty of Sciences, Mohammed V University in Rabat, Rabat, Morocco. § ABSTRACT Improving interferometric phase sensitivity is crucial for high-precision measurements in rapidly developing quantum technologies. The Mach-Zehnder interferometer (MZI) is a versatile tool for analyzing this phenomenon. By splitting and recombining a light beam using beam splitters, MZIs allow for precise phase sensitivity analysis using tools like the quantum Cramér-Rao bound (QCRB) and the quantum Fisher information (QFI). This paper analyzes the phase sensitivity of a MZI in various scenarios using different detection schemes and input states. We compare the single- and two-parameter quantum estimation and their associated QCRB for three phase-shift situations: in both arms, only in the upper arm (asymmetric), and in both arms symmetrically. We then investigate the phase sensitivity under three detection schemes: difference intensity, single-mode intensity, and balanced homodyne. Additionally, we explore the use of Perelomov and Barut-Girardello coherent states, two types of SU(1,1) coherent states, in all scenarios. Notably, we demonstrate that under optimal conditions, all detection schemes can achieve the QCRB by utilizing entangled SU(1,1) coherent states as input states. Keywords: Interferometric phase sensitivity, Barut-Girardello and Perelomov coherent states, Mach-Zehnder interferometer. Enhancing phase sensitivity in Mach-Zehnder interferometer with various detection schemes using SU(1,1) coherent states H. El Hadfi June 17, 2024 ======================================================================================================================= § INTRODUCTION Interferometry, exploiting the interaction of superimposed waves, is essential in precision measurement, quantum metrology, and sensing applications <cit.>. This technique also contributes significantly to the understanding of fundamental physics concepts <cit.>. Notably, many physical quantities, including distance, local gravity fields, and magnetic fields, are related to the phase differences of the interfering waves, highlighting the interferometer's high sensitivity to phase changes and its wide applicability in precise measurement and metrology <cit.>. Phase estimation, a cornerstone of quantum metrology, is extensively studied due to its crucial role in diverse precision applications like environmental sensing, gravitational wave detection <cit.>, and gyroscopes <cit.>. This vital role stems from the high sensitivity of optical interferometry. 
In the conventional classical setting with standard resources, the sensitivity reaches a limit called the shot-noise limit (SNL) or the standard quantum limit (SQL), scaling as 1/√(𝒩), where 𝒩 is the number of input photons <cit.>. To surpass this limit, Caves <cit.> proposed the squeezed-state technique, which addresses vacuum fluctuations at the unused input port. Subsequently, various quantum resources such as entangled coherent states <cit.>, NOON states <cit.>, number-squeezed states <cit.>, and two-mode squeezed states <cit.>, have been explored to further enhance measurement precision, potentially reaching the ultimate quantum limit of 1/𝒩, known as the Heisenberg limit (HL) <cit.>. In quantum interferometry, the theoretical limits on phase sensitivity are derived by applying the quantum Fisher information (QFI) and its corresponding fundamental limit on precision, the quantum Cramér-Rao bound (QCRB) <cit.>. Beyond their theoretical significance, these limits prove highly valuable for assessing the optimality of practical detection schemes. A novel type of interferometer, referred to as an SU(1,1) interferometer, is configured similarly to a Mach-Zehnder interferometer (MZI). Yurke et al.<cit.> first proposed this concept theoretically, wherein linear beam splitters (BS) are replaced by non-linear beam splitters (NBS) for the coherent splitting and mixing of two input fields to achieve precise phase estimation. The term SU(1,1) is derived from the interaction type utilized in parametric processes associated with nonlinear wave mixing, distinct from the SU(2) interaction linked to linear wave mixing through a beam splitter. This nomenclature reflects the specific nature of the interaction in these two interferometer types. In this paper, our focus centers on the phase sensitivity of a MZI. It is recognized that most interferometers can be transformed into an MZI, enabling the optimization of their phase sensitivity for different input states and detection schemes. A critical goal in interferometry is realizing the theoretically optimal phase sensitivity, which requires optimizing all potential estimators and detection strategies. The QFI (ℋ) plays a pivotal role in this context <cit.>, being directly linked to the QCRB, expressed as Δθ_QCRB=1/√(ℋ). Therefore, finding ways to increase the QFI becomes an important issue in quantum estimation theory. The phase sensitivity (Δθ_det) in any practical detection scheme always equals or exceeds the QCRB (Δθ_QCRB), i.e., Δθ_det≥Δθ_QCRB. In a MZI, phase sensitivity depends on factors like input states and detection schemes. This work explores entangled SU(1,1) coherent states (CS) for input. The entangled SU(1,1) and SU(2) coherent states share similarities due to the close relationship between their Lie algebras. However, SU(1,1) has two relevant types: Perelomov coherent states (PCS) and Barut-Girardello coherent states (BGCS). PCS, introduced by Perelomov <cit.>, are analogues of harmonic oscillator coherent states, achieved by displacing the vacuum state with a displacement operator. BGCS, introduced by Barut and Girardello <cit.>, are defined as right eigenstates of the SU(1,1) lowering operator. This work investigates both types of entangled SU(1,1) CS as input states, alongside three detection schemes: difference intensity, balanced homodyne detection and single-mode intensity. This paper is arranged as follows: Section (<ref>) offers a concise review of SU(1,1) coherent states. 
Section (<ref>) introduces conventions and a two-parameter QFI approach followed by discussion of single-parameter QFI for both asymmetric and symmetric phase shifts scenarios. In Section (<ref>), we provide expressions for the QFIs in all three considered scenarios for input Perelomov and Barut-Girardello coherent states combined with a vacuum state. The three detection schemes are described in detail in Section (<ref>), while Section (<ref>) analyzes their performances with input SU(1,1) coherent states. Finally, Section (<ref>) summarizes the work. § SU(1,1) COHERENT STATES We begin with a brief introduction to the SU(1,1) Lie algebra. This algebra is spanned by three generators, Â_z, Â_+, and Â_-, satisfying the commutation relations [Â_+,Â_-]=-2Â_z, [Â_z,Â_±]=±Â_± In this work, we focus on input states in the context of optical fields, particularly the entangled SU(1,1) coherent states (CSs). To examine these states, we employ the Holstein-Primakoff realization (HPR), a potent theoretical tool in quantum optics. The SU(1,1) CSs can be characterized by a set of single-mode Bose annihilation and creation operators, aligning with the HPR representation of the SU(1,1) Lie algebra. This HPR form is given by the operators Â_+=b̂^†(b̂^†b̂+2a)^1/2, Â_-=(b̂^†b̂+2a)^1/2b̂,Â_z=a+b̂^†b̂, where the operators b̂ and b̂^† satisfy the Bose algebra [b̂,b̂^†]= and the action of the operators Â_z, Â_+, and Â_- on the Fock space states |a,g⟩ (with g=0,1,2,...) is given by the following Â_+|a,g⟩= ((g+1)(2a+g))^1/2|a,g⟩, Â_-|a,g⟩= (g(2a+g))^1/2|a,g⟩, Â_z|a,g⟩= (g+a)|a,g⟩. The parameter a called the Bargmann index, is associated with the eigenvalue determination of the SU(1,1) Casimir operator. This operator is given by Ĉ=Â^2_z-(Â_+Â_- +Â_-Â_+)/2, and by evaluating the eigenvalues of Ĉ, we can obtain the form a(a-1). Here, we confine our attention solely to the discrete series, which a is greater than zero, taking values such as a=1/2, 1, 3/2,....Now we can move on to the construction of the SU(1,1) CSs. As this group is non-compact, all its unitary representations must be infinite-dimensional. Therefore, there are two principal types of coherent states. The first type consists of displacing the vacuum state by the displacement operator, which generates the Perelomov coherent states (PCSs). The second type of SU(1,1) CSs, namely, the Barut-Girardello coherent states (BGCSs), are defined as a right eigenstate of the SU(1,1) lowering operator. §.§ SU(1, 1) Perelomov coherent states Following Perelomov's work <cit.>, the standard PCSs are defined as |ξ,a⟩=D(μ)|0,a⟩, with D(μ) is the displacement operator for this group, defined as D(μ)=e^μÂ_+ -μ^∗Â_-, with μ is a complex number. Using the property Â_+^†=Â_-, we can show the following property of the displacement operator: D^+(μ)=D(-μ). This operator D(μ) can be rewritten as D(μ)=e^z Â_+e^ηÂ_ze^-ξ^∗Â_-, where ξ=e^-iφtanh |μ|, μ=e^-iφϑ/2 and η=ln(1-|ξ|^2). The parameter ϑ is a hyperbolic angle with 0≤ϑ<∞ and the angle φ is azimuthal with 0≤φ≤ 2π. By using this equation, we can directly obtain the PCSs in the form of |ξ,a⟩=(1-|ξ|^2)^l∑_g=0^∞√(Γ(g+2a)/g!Γ(2a))ξ^g|a,g⟩, where Γ(x) is the gamma function. §.§ Barut-Girardello Coherent States Here, we would like to construct the Barut-Girardello coherent state. This state is defined as the solution to the eigenvalue equation for the annihilation operator Â_- <cit.>: Â_-|ξ,a⟩=ξ|ξ,a⟩, a>0, where ξ is an arbitrary complex number. 
In addition, we can decompose the eigenstates |ξ,a⟩ as a superposition of the complete orthonormal basis {|ν,a⟩} |ξ,a⟩=∑_g=0^∞⟨ν,l|ξ,a⟩|ν,a⟩. Let the annihilation operator Â_- act on Eq.(<ref>). Then, utilizing Eqs.(<ref>) and (<ref>) along with the following orthonormality relation ⟨ g,a|g',a⟩=δ_νν', ∑_g=0^∞|g,a⟩⟨ g,a|=, we can obtain ⟨ g,a|ξ,a⟩=z/√(nΓ(g+2a))⟨ g-1,a|ξ,a⟩. After the recurrence procedure, this equation above transforms into the following ⟨ g,a|ξ,a⟩=ξ^gΓ(2a)/g!Γ(g+2a)⟨ 0,a|ξ,a⟩. When we normalize the states |ξ,a⟩ to unity, we have |ξ,a⟩=√(|ξ|^2a-1/I_2a-1(2|ξ|))∑_g=0^∞ξ^g/√(g!Γ(g+2a))|g,a⟩, where I_m is the modified Bessel function of order m, defined as I_m(x)=∑_m=0^∞1/m!Γ(m+g+1)(x/2)^2m+g. § QUANTUM FISHER INFORMATION IN MACH-ZEHNDER INTERFEROMETER We are considering the standard Mach-Zehnder (MZ) interferometric setup illustrated in Figure (<ref>). In this setup, the two beam splitters, BS1 and BS2, have transmission (reflection) coefficients α (β) and α' (β'), respectively. Throughout this work, the input state is assumed to be pure, with no losses. In general, the precision of phase estimation in quantum interferometry is bounded by the QFI, which depends on the way the interferometer phase delay is modeled: (a) two independent phase shifts, θ_1 (θ_2) in the upper (lower) arm; (b) single phase shift in the lower arm; (c) two phase shifts distributed symmetrically, ±θ/2. We first consider the most general scenario, in which the upper and lower arms of an interferometer contain a phase shift, denoted by θ_1 and θ_2, respectively. According to the literature <cit.>, a two-parameter estimation technique can be used to avoid the problem of counting supplementary resources, such as an external phase reference, that are not available. In case where no external phase reference is available, we are only interested in the phase shift difference θ_dif=θ_1-θ_2. Therefore, it is more convenient to express the QFIM in the basis θ_s/d=θ_1±θ_2. To estimate the values of θ_s and θ_dif, we utilize the QFIM, represented by a 2×2 matrix <cit.>: ℋ=([ ℋ_dd ℋ_sd; ℋ_ds ℋ_ss ]), with ℋ_ij=4ℜ_e{⟨∂_iψ|∂_jψ⟩-⟨∂_iψ|ψ⟩⟨ψ|∂_jψ⟩}, where Re is the real part, and the subscripts i and j correspond to θ_s and θ_dif, respectively. We consider the wavevector |ψ⟩, which is expressed as |ψ⟩=exp{-iĝ_2-ĝ_3/2θ_dif}exp{-iĝ_2+ĝ_3/2θ_s}|ψ'⟩. Here, ĝ_l=b̂^†_lb̂_l denotes the number operator corresponding to port l. By applying the field operator transformations above to the input state |ψ⟩, we can obtain the state |ψ'⟩ as b̂_2=αb̂_0+βb̂_1,b̂_3=βb̂_0+αb̂_1. We have the equation |α|^2+|β|^2=1, and αβ^∗=-α^∗β. This relationship implies that α^∗β=± iαβ. Without loss of generality, throughout this work we will adopt the convention α^∗β=i|αβ|. The QCRB is a lower bound on the variance of any unbiased estimator of a parameter and provides the minimum limit for estimating the parameter. In the case of multiparameter estimation, the QCRB is given as Cov(θ̂)≥1/Nℋ^-1, with N repeated experiments, ℋ is the QFIM given in Eq.(<ref>) and Cov(θ̂) stands for the covariance matrix of the estimator, including both θ_dif and θ_s, whose elements are Cov(θ̂)_ij=E(θ̂_iθ̂_j)-E(θ̂_i)E(θ̂_j) with E being a mathematical expectation. Here, we set N = 1, and when examining phase difference sensitivity, we have (Δθ_dif)^2≥ (ℋ^-1)_dd. 
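Numerically, the bound just stated is easy to evaluate once the QFIM elements are known: invert the 2×2 matrix and take its first diagonal entry, which (as shown next) reduces to a simple closed form. The sketch below uses arbitrary illustrative numbers, not values taken from the paper.

```python
import numpy as np

def qcrb_phase_difference(h_dd, h_ss, h_sd):
    """Two-parameter QCRB on the phase difference, Δθ ≥ sqrt((H^{-1})_dd)."""
    qfim = np.array([[h_dd, h_sd],
                     [h_sd, h_ss]])              # basis (θ_dif, θ_s)
    return np.sqrt(np.linalg.inv(qfim)[0, 0])

# Illustrative numbers only; equals 1/sqrt(H_dd - H_sd**2/H_ss), the closed form derived next.
print(qcrb_phase_difference(2.0, 5.0, 0.4))
print(1 / np.sqrt(2.0 - 0.4 ** 2 / 5.0))
```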
The expression for the first diagonal element of the inverse matrix ℋ^-1 is as follows ℋ^(a)=1/(ℋ^-1)_dd=ℋ_dd-(F_sd)^2/ℋ_ss, thus, inequality (<ref>) can be saturated, which implies the two-parameter QCRB is reduced to a simpler form of Δθ_QCRB^(a)= 1/√(ℋ^(a)). According to the definition of ℋ_ij in equation (<ref>), the elements of the QFIM given in equation (<ref>), namely ℋ_ss, ℋ_dd, and ℋ_sd, can be determined as follows ℋ_ss=Δ^2ĝ_0+Δ^2ĝ_1, ℋ_dd= (2|α|^2-1)^2(Δ^2ĝ_0+Δ^2ĝ_1)+8|αβ|^2( ⟨ĝ_0⟩⟨ĝ_1⟩-|⟨b̂_0⟩|^2|⟨b̂_1⟩|^2-ℜ_e{⟨(b̂_0^†)^2⟩⟨b̂_1^2⟩-⟨b̂_0^†⟩^2⟨b̂_1⟩^2}) +4|αβ|^2(⟨ĝ_0⟩+⟨ĝ_1⟩)-8|αβ|(2|α|^2-1)(ℑ_m{(⟨b̂_0^†ĝ_0⟩-⟨b̂_0^†⟩⟨ĝ_0⟩)⟨b̂_1⟩+⟨b̂_0⟩(⟨b̂_1^†ĝ_1⟩-⟨ĝ_1⟩⟨b̂_1^†⟩)}), ℋ_sd= ℋ_ds=(2|α|^2-1)(Δ^2ĝ_0-Δ^2ĝ_1)+4|αβ|ℑ_m{⟨b̂_0⟩⟨b̂^†_1⟩-(⟨ĝ_0b̂_0⟩-⟨ĝ_0⟩⟨b̂_0⟩)⟨b̂^†_1⟩+⟨b̂_0⟩(⟨b̂_1^†ĝ_1⟩-⟨b̂^†_1⟩⟨ĝ_1⟩)}, where Δ^2ĝ represents the variance of the number operator ĝ, defined as Δ^2ĝ=⟨ĝ^2⟩-⟨ĝ⟩^2. In the case of a phase shift in one arm, assuming it occurs in output 3 of BS1, and using the notations from Fig.(<ref>), we have the state transformed as |ψ⟩=e^-iθĝ_3|ψ'⟩. Using definition (<ref>), the single-parameter QFI, denoted as ℋ^(b), is given by ℋ^(b)= 4Δ^2ĝ_3, and the QCRB, which provides the optimal phase estimation, is given by Δθ_QCRB^(b)= 1/√(4Δ^2ĝ_3). By applying the field operator transformations (<ref>), we obtain the following expression ℋ^(b)= 4|β|^4Δ^2ĝ_0+4|α|^4Δ^2ĝ_1 +4|αβ|^4(⟨ĝ_0⟩+⟨ĝ_1⟩+2(⟨ĝ_0⟩⟨ĝ_1⟩-|⟨b̂_0⟩|^2|⟨b̂_1⟩|^2)) -8|αβ|^2ℜ_e{⟨b̂^2_0⟩⟨(b̂^†_1)^2⟩-⟨b̂_0⟩⟨b̂^†_1⟩}-8|αβ|ℑ_m{⟨b̂_0⟩⟨b̂^†_1⟩} -16|αβ||β|^2ℑ_m{(⟨ĝ_0b̂_0⟩-⟨ĝ_0⟩⟨b̂_0⟩)⟨b̂^†_1⟩} -16|αβ||α|^2ℑ_m{⟨b̂_0⟩(⟨b̂^†_0ĝ_1⟩-⟨ĝ_1⟩⟨b̂^†_1⟩)}. Comparing equation (<ref>) with the elements of the QFIM derived from the first estimation scenario, we can see that the single-parameter QFI, ℋ^(b), can be expressed in terms of the coefficients of the QFIM as follows ℋ^(b)=ℋ_dd+ℋ_ss-2ℋ_sd. If ℋ_sd = ℋ_ss, then the above equation and the two-parameter QFI (<ref>) are equal, ℋ^(a)=ℋ^(b). In this case, we can prove that ℋ^(b)≥ℋ^(a). In the last case, which is essentially a single-parameter estimation problem similar to the second scenario, it is modeled by the unitary operation as U(θ) = e^iθ/2(ĝ_2-ĝ_3). Consequently, the QFI is given by ℋ^(c)= Δ^2ĝ_2+Δ^2ĝ_3. Similarly in equation (<ref>), we obtain ℋ^(c)=(|α|^4+|β|^4)(Δ^2ĝ_0+Δ^2ĝ_1) +2|αβ|^2(⟨ĝ_0⟩+⟨ĝ_1⟩+2(⟨ĝ_0⟩⟨ĝ_1⟩-|⟨b̂_0⟩|^2|⟨b̂_1⟩|^2)) -2|αβ|^2(⟨b̂^2_0⟩⟨(b̂^†_1)^2⟩+⟨(b̂^†_0)^2⟩⟨b̂^2_1⟩-⟨b̂_0⟩^2⟨b̂^†_1⟩^2-⟨b̂^†_0⟩^2⟨b̂_1⟩^2) +2α^∗β(2|α|^2-1)(⟨b̂^†_0ĝ_0⟩-⟨b̂^†_0⟩⟨ĝ_0⟩)⟨b̂_1⟩ -2α^∗β(2|α|^2-1)(⟨ĝ_0b̂_0⟩-⟨ĝ_0⟩⟨b̂_0⟩)⟨b̂^†_1⟩ +2α^∗β(2|α|^2-1)⟨b̂_0⟩(⟨b̂^†_1ĝ_1⟩-⟨b̂^†_1⟩⟨ĝ_1⟩) -2α^∗β(2|α|^2-1)⟨b̂^†_0⟩(⟨ĝ_1b̂_1⟩-⟨ĝ_1⟩⟨b̂_1⟩), and this implies that the corresponding QCRB becomes Δθ_QCRB^(c)= 1/√(ℋ^(c)). § PHASE ESTIMATION WITH A VACUUM STATE IN THE FIRST INPUT AND SU(1,1) COHERENT STATES IN THE SECOND INPUT In this section, our focus lies on input states characterized by entangled SU(1,1) coherent states combined with a vacuum state. We have identified two principal types of SU(1,1) CSs: The first type, termed as the Perelomov coherent states (PCSs; see Eq.(<ref>)), and the second type, the Barut-Girardello coherent states (BGCSs; see Eq.(<ref>)). The input state is represented as follows |ψ_in⟩=|ξ_i,a⟩_1⊗|0⟩_0, where the subscript i=P or B corresponds to entangled PCS (Eq.<ref>) and entangled BGCS (Eq.<ref>), respectively. We also denote the QFI as ℋ_i. 
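Before evaluating the QFIs for these inputs, the small numerical check below builds the Fock-basis amplitudes of the two SU(1,1) coherent states defined in Section (<ref>) in a truncated basis and verifies their normalization; the truncation size and helper names are our own choices.

```python
import numpy as np
from scipy.special import gammaln, iv

def pcs_amplitudes(xi, a, nmax=200):
    """Fock-basis amplitudes of the Perelomov state |xi, a> (requires |xi| < 1)."""
    g = np.arange(nmax)
    log_c = 0.5 * (gammaln(g + 2 * a) - gammaln(g + 1) - gammaln(2 * a))
    return (1 - abs(xi) ** 2) ** a * np.exp(log_c) * xi ** g

def bgcs_amplitudes(xi, a, nmax=200):
    """Fock-basis amplitudes of the Barut-Girardello state |xi, a>."""
    g = np.arange(nmax)
    log_c = -0.5 * (gammaln(g + 1) + gammaln(g + 2 * a))
    norm = np.sqrt(abs(xi) ** (2 * a - 1) / iv(2 * a - 1, 2 * abs(xi)))
    return norm * np.exp(log_c) * xi ** g

for amps in (pcs_amplitudes(0.5, 1.0), bgcs_amplitudes(1.5, 1.0)):
    print(np.sum(np.abs(amps) ** 2))   # both sums should be ~1 (normalization check)
```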
Using the results of the three QFIs reported in the previous section, i.e., ℋ^(a), ℋ^(b), and ℋ^(c), given by equations (<ref>), (<ref>, and (<ref>), it is easy to verify that the QFIs in our input states are given by ℋ^(a)_i= 4|αβ|^2⟨ĝ_1⟩_i, ℋ^(b)_i= 4|α|^4Δ^2ĝ_1+4|αβ|^2⟨ĝ_1⟩_i, ℋ^(c)_i= (|α|^4+|β|^4)Δ^2ĝ_1+2|αβ|^2⟨ĝ_1⟩_i. Using the equations derived above, we can calculate the QFI for each type of SU(1,1) coherent state. We denote the QFI for the first type PCSs as ℋ_P and the QFI for the second type BGCSs as ℋ_B. For PCSs, the analytical expressions for the QFIs simplify to ℋ^(a)_P= 4a|αβ|^2(cosh(v)-1), ℋ^(b)_P= 4a|α|^2(1/2|α|^2sinh^2(v)+|β|^2(cosh(v)-1)), ℋ^(c)_P= a/2(|α|^4+|β|^4)sinh^2(v)+2a|αβ|^2(cosh(v)-1). Therefore, the corresponding QCRBs are Δθ^(a)_QCRB,P= 1/2|αβ|√(a(cosh(v)-1)), Δθ^(b)_QCRB,P= 1/2|α|√(a/2|α|^2sinh^2(v)+a|β|^2(cosh(v)-1)), Δθ^(c)_QCRB,P= 1/√(a/2(|α|^4+|β|^4)sinh^2(v)+2a|αβ|^2(cosh(v)-1)), In the case of BGCSs, the QFIs take the following analytical form ℋ^(a)_B= 4|ξ||αβ|^2I_2a(2|ξ|)/I_2a-1(2|ξ|), ℋ^(b)_B= 4|ξ|/I_2a-1^2(2|ξ|)[|α|^4X+|αβ|^2I_2a-1(2|ξ|)I_2a(2|ξ|)], ℋ^(c)_B= |ξ|/I_2a-1^2(2|ξ|)[(|α|^4+|β|^4)X+2|αβ|^2I_2a-1(2|ξ|)I_2a(2|ξ|) ], where X=I_2a-1(2|ξ|)[|ξ|I_2a+1(2|ξ|)+I_2a(2|ξ|)]-|ξ|I_2a^2(2|ξ|), and Δθ^(a)_QCRB,B= 1/2|αβ|√(I_2a-1(2|ξ|)/|ξ|I_2a(2|ξ|)), Δθ^(b)_QCRB,B= I_2a-1(2|ξ|)/2|α|√(|ξ|(|α|^2X+|β|^2I_2a-1(2|ξ|)I_2a(2|ξ|))), Δθ^(c)_QCRB,B= I_2a-1(2|ξ|)/√(|ξ|[(|α|^4+|β|^4)X+2|αβ|^2I_2a-1(2|ξ|)I_2a(2|ξ|) ]). In the balanced scenario (i.e., |α|=|β|=1/√(2)), the QFIs associated with PCSs takes the form ℋ^(a)_P= a(cosh(v)-1), ℋ^(b)_P= a(1/2sinh^2(v)+cosh(v)-1), ℋ^(c)_P= 1/4asinh^2(v)+1/2(cosh(v)-1). The corresponding QCRBs are given by Δθ^(a)_QCRB,P= 1/√(a(cosh(v)-1)), Δθ^(b)_QCRB,P= 1/√(a(1/2sinh^2(v)+cosh(v)-1)), and Δθ^(c)_QCRB,P=1/√(a/4sinh^2(v)+a/2(cosh(v)-1)). Similarly for the BGCSs, we can write the analytical expressions as ℋ^(a)_B= |ξ|I_2a(2|ξ|)/I_2a-1(2|ξ|), ℋ^(b)_B= |ξ|/I_2a-1^2(2|ξ|)[X+I_2a-1(2|ξ|)I_2a(2|ξ|)], ℋ^(c)_B= |ξ|/2I_2a-1^2(2|ξ|)[X+I_2a-1(2|ξ|)I_2a(2|ξ|) ]. and the corresponding QCRBs take the form Δθ^(a)_QCRB,B= √(I_2a-1(2|ξ|)/|ξ|I_2a(2|ξ|)), Δθ^(b)_QCRB,B= I_2a-1(2|ξ|)/√(|ξ|[X+I_2a-1(2|ξ|)I_2a(2|ξ|)]), and Δθ^(c)_QCRB,B=√(2)I_2a-1(2|ξ|)/√(|ξ|[X+I_2a-1(2|ξ|)I_2a(2|ξ|) ]). In Figure (<ref>), we illustrate the dynamic behavior of the three QFIs metrics considered, ℋ^(a), ℋ^(b), and ℋ^(c), with respect to the transmission coefficient |α|^2 of the first beam splitter for both the Perelomov coherent states (PCS) and the Barut-Girardello coherent states (BGCS) with parameters a=1 and v=1. In Fig.<ref>(a), focusing on the scenario where the single input state is PCS, the plots show that the QFI ℋ^(c)_P remains nearly constant for different values of the transmission coefficient α, while the QFI ℋ^(b)_P exhibits a linear variation, increasing steadily from 0 for |α|=0 to 2asinh^2(v) for |α|=1. Moreover, the two-parameter QFI, represented by ℋ^(a)_P, reaches its maximum value in the balanced case, i.e., when |α|=|β| = 1/√(2). In the extreme case where |α| equals 0 or 1, the QFI reaches its minimum value, ℋ^(a)_P=0. Moving to Fig.<ref>(b), the QFI is plotted for the single input state BGCS. Here, ℋ^(c) remains constant, while ℋ^(b)_P shows a linear progression, consistently ranging from its minimum for |α|=0 to its maximum for |α|=1, which corresponds to 4|ξ|/I_2a-1^2(2|ξ|). Furthermore, for the two-parameter QFI, ℋ^(a)P reaches its maximum value when |α|=|β| = 1/√(2). The QFI vanishes in the extreme cases where |α| equals 0 or 1. 
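The balanced-case expressions above can be evaluated directly; the following sketch computes the three QFIs and the corresponding QCRBs for both input states, with a=1, v=1 for the Perelomov state and a purely illustrative |ξ| for the Barut-Girardello state (the parameterization used for the figures is not restated here).

```python
import numpy as np
from scipy.special import iv

def qfi_pcs_balanced(a, v):
    """Balanced-MZI QFIs for the Perelomov input state (closed forms above)."""
    h_a = a * (np.cosh(v) - 1)
    h_b = a * (0.5 * np.sinh(v) ** 2 + np.cosh(v) - 1)
    h_c = 0.25 * a * np.sinh(v) ** 2 + 0.5 * a * (np.cosh(v) - 1)
    return h_a, h_b, h_c

def qfi_bgcs_balanced(a, xi):
    """Balanced-MZI QFIs for the Barut-Girardello input state (closed forms above)."""
    i_m1, i_0, i_p1 = iv(2 * a - 1, 2 * xi), iv(2 * a, 2 * xi), iv(2 * a + 1, 2 * xi)
    x = i_m1 * (xi * i_p1 + i_0) - xi * i_0 ** 2
    h_a = xi * i_0 / i_m1
    h_b = xi / i_m1 ** 2 * (x + i_m1 * i_0)
    h_c = 0.5 * xi / i_m1 ** 2 * (x + i_m1 * i_0)
    return h_a, h_b, h_c

for h in qfi_pcs_balanced(a=1.0, v=1.0) + qfi_bgcs_balanced(a=1.0, xi=0.5):
    print(1 / np.sqrt(h))      # corresponding QCRBs, Δθ = 1/sqrt(H)
```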
More importantly, comparing Fig.<ref>(a) with Fig.<ref>(b), we observe that the values of the phase estimate for PCSs are larger than those for BGCSs in all the scenarios considered. § PHASE SENSITIVITY IN A MACH-ZEHNDER INTERFEROMETER We will now proceed to close the Mach-Zehnder interferometer (MZI) using BS2, which is characterized by its transmission (reflection) coefficient α' (β'). We will then examine the performance of three realistic detection schemes: namely, the difference intensity, the single-mode intensity, and the balanced homodyne detection. To delve deeper into the analysis, we explore the quantum parameter estimation problem. In this context, we consider an experimentally accessible Hermitian operator, denoted by Ŝ , that depends on a parameter θ. In our specific scenario, θ corresponds to the phase shift in a MZI and may or may not be observable. Its average is given by ⟨Ŝ(θ)⟩=⟨ψ|Ŝ(θ)|ψ⟩, where |ψ⟩ represents the wave function of the system. When a small variation δθ is applied to the parameter θ, it induces a change described as ⟨Ŝ(θ+δθ)⟩≈⟨Ŝ(θ)⟩+∂⟨Ŝ(θ)⟩/∂θδθ. The experimental detectability of the difference between ⟨Ŝ(θ+δθ)⟩ and ⟨Ŝ(θ)⟩ depends on satisfying the condition ⟨Ŝ(θ+δθ)⟩ - ⟨Ŝ(θ)⟩≥ΔŜ(θ), where ΔŜ is the standard deviation of Ŝ and is defined as the square root of the variance Δ^2Ŝ, which is expressed as ΔŜ=√(⟨Ŝ^2⟩ - ⟨Ŝ⟩^2). If the inequality (<ref>) is saturated by the value of δθ, then this variation δθ is called the sensitivity, denoted by Δθ <cit.> Δθ=ΔŜ/|∂/∂θ⟨Ŝ⟩|. In the following, θ represents the total phase shift inside the interferometer, which is divided into two parts: the first part, denoted as θ_i, represents the quantity we want to measure, and the second part is θ_exp, which is experimentally controllable. We express this relationship as θ=θ_i+θ_exp. In interferometry, the condition |θ_i|≪|θ| is crucial because it indicates that the unknown phase shift θ_i has a limited effect on the total phase shift θ. Therefore, the experimenter must adjust θ_exp to approach the optimal phase shift, denoted as θ_opt, to achieve the best performance. In the above description, we will focus on the phase sensitivity for each of the considered detection schemes, i.e., difference intensity detection, single-mode intensity detection, and balanced homodyne detection. §.§ Difference intensity detection scheme In the difference intensity detection scheme, which is only sensitive to the difference between the phase shifts θ_1 and θ_2, We compute the disparity in the output photocurrents, labeled as Ĝ_dif, specifically those detected at D_4 and D_5, as illustrated in Fig.(<ref>). Thus, the output operator Ĝ_dif is defined as Ĝ_dif=b̂^†_4b̂_4-b̂^†_5b̂_5. To express the operator Ĝ_dif in terms of the input field operators, we need the field operator transformations, b̂_4=α'b̂_2+β'b̂_3, b̂_5=β'b̂_2+α'b̂_3, where α' (β') represents the transmission (reflection) coefficients of the second beam splitter (BS2). Using the field operator transformations (<ref>), we obtain b̂_4=[(αα'e^-iθ_2+ββ'e^-iθ_1)b̂_0+(αβ'e^-iθ_1+βα'e^-iθ_2)b̂_1], b̂_5=[(αβ'e^-iθ_2+βα'e^-iθ_1)b̂_0+(αα'e^-iθ_1+ββ'e^-iθ_2)b̂_1], where θ=θ_1-θ_2. Substituting the field operator transformations into the definition of Ĝ_dif yields Ĝ_dif =[(2|α|^2-1)(2|α'|^2-1)-4|αα'ββ'|cosθ](ĝ_0-ĝ_1) +4ℜ_e{((|β|^2e^-iθ-|α|^2e^iθ)α'^∗β'+α^∗β(1-2|α'|^2))ĝ_0ĝ_1^†}. 
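The error-propagation formula Δθ = ΔŜ/|∂⟨Ŝ⟩/∂θ| introduced at the start of this section can also be evaluated numerically for any detection observable, as the sketch below illustrates using a central finite difference for the slope and a deliberately simple toy signal model that is not taken from the paper; the specific observables are treated analytically in what follows.

```python
import numpy as np

def phase_sensitivity(mean_s, var_s, theta, dtheta=1e-4):
    """Error-propagation estimate Δθ(θ) = ΔS / |d<S>/dθ| via central differences.

    mean_s, var_s : callables returning <S>(θ) and Var S(θ) for the chosen observable.
    """
    slope = (mean_s(theta + dtheta) - mean_s(theta - dtheta)) / (2 * dtheta)
    return np.sqrt(var_s(theta)) / abs(slope)

# Toy check with a classical-like signal <S> = N cosθ, Var S = N/2 (illustrative only)
N = 100
dphi = phase_sensitivity(lambda t: N * np.cos(t), lambda t: N / 2, theta=np.pi / 2)
print(dphi)   # ~ 1/sqrt(2N), a shot-noise-like 1/sqrt(N) scaling for this toy model
```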
To quantify the phase sensitivity here, we can define it as Δθ_dif=ΔĜ_dif/|∂/∂θ⟨Ĝ_dif⟩|, where the derivative of Ĝ_dif with respect to θ is given by ∂/∂θ⟨Ĝ_dif⟩ =4|αα'ββ'|sinθ(⟨ĝ_0⟩-⟨ĝ_1⟩) +4|α'β'|ℜ_e{(|β|^2e^-iθ+|α|^2e^iθ)⟨b̂_0⟩⟨ĝ^†_1⟩}. Thus, the variance of the operator Ĝ_dif can be derived as the following form Δ^2Ĝ_dif= δ_A^2(Δ^2ĝ_0+Δ^2ĝ_1)+|δ_B|^2(⟨ĝ_0⟩+⟨ĝ_1⟩) +2|δ_B|^2(⟨ĝ_0⟩⟨ĝ_1⟩-|⟨b̂_0⟩|^2||⟨b̂_1⟩|^2) +2ℜ_e{δ_B^2(⟨b̂_0^2⟩⟨(b̂_1^†)^2⟩)-⟨b̂_0⟩^2⟨b̂_1^†⟩^2} +4δ_Aℜ_e{δ_B((⟨ĝ_0b̂_0⟩-⟨ĝ_0⟩⟨b̂_0⟩)⟨b̂_1^†⟩.. ..-⟨b̂_0⟩(⟨b̂_1^†ĝ_1⟩-⟨ĝ_1⟩⟨b̂_1^†⟩))}, where δ_A= 1-2(|α||β'|+|β||α'|)^2+4|αβ||α'β'|(1-cosθ), δ_B= 2i(|αβ|(1-2|α'|^2)+(1-2|α|^2)|α'β'|cosθ) +2|α'β'|sinθ, satisfy the following condition δ_A^2+|δ_B|^2=1. In this detection scheme, the phase sensitivity remains the same for both scenarios (b) and (c). For simplicity, we will use the notation Δθ_dif to collectively represent the phase sensitivity in all detection schemes. §.§ Single-mode intensity detection scheme In the single-mode intensity detection scheme, we focus on a single photocurrent at output port 4 (see Fig.(<ref>)), represented by its associated operator ĝ_4=b̂_4^†b̂_4. The phase sensitivity in this scenario is defined as Δθ_sing=Δĝ_4/|∂/∂θ⟨ĝ_4⟩|. From equation (<ref>), we can determine the average number of photons with respect to the input field operator as ⟨ĝ_4⟩ =(|αα'|^2+|ββ'|^2-2|αα'ββ'|cosθ)⟨ĝ_0⟩ +(|αβ'|^2+|α'β|^2+2|αα'ββ'|cosθ)⟨ĝ_1⟩ +2ℜ_e{(α^∗β(2|α'|^2-1)+α'^∗β'e^-iθ(|α|^2-|β|^2e^2iθ))⟨b̂_0^†⟩⟨b̂_1⟩}. Using the above equation, we immediately get ∂⟨ĝ_4⟩/∂θ= 2|αα'ββ'|sinθ(⟨ĝ_0⟩-⟨ĝ_1⟩) +2||α'^∗β'|ℜ_e{(|α|^2e^-iθ+|β|^2e^iθ)⟨b̂_0^†⟩⟨b̂_1⟩}. To find Δ^2ĝ_4, we first calculate the square of the operator ĝ_4. Then we get the final expression for Δ^2ĝ_4 as Δ^2ĝ_4 =|δ_3|^2(⟨ĝ_0⟩+⟨ĝ_1⟩+2⟨ĝ_0⟩⟨ĝ_1⟩-2|⟨b̂_0⟩|^2|⟨b̂_1⟩|^2) +2δ_0ℜ_e{δ_3(⟨ĝ_0b̂_0^†⟩+⟨b̂_0^†ĝ_0⟩-2⟨ĝ_0⟩⟨b̂_0^†⟩)⟨b̂_1⟩} +2δ_1ℜ_e{δ_3⟨b̂_0^†⟩(⟨ĝ_1b̂_1⟩+⟨b̂_1ĝ_1⟩-2⟨ĝ_1⟩⟨b̂_1⟩)} +2|δ_3^2|ℜ_e{⟨b̂_0^2⟩⟨(b̂_1^†)^2⟩-⟨b̂_0⟩^2⟨b̂_1^†⟩^2}+δ_0^2Δ^2ĝ_0 +δ_1^2Δ^2ĝ_1, where δ_0= |αα'|^2+|ββ'|^2-2|αα'ββ'|cosθ, δ_1= |αβ'|^2+|α'β|^2+2|αα'ββ'|cosθ, δ_3= α^∗β(2|α'|^2-1)+α'^∗β'(|α|^2e^-iθ-|β|^2e^iθ). §.§ Balanced homodyne detection scheme We now turn to the balanced homodyne detection scheme at output port 4 (see Fig.(<ref>)). The operator of interest for modeling this detection scheme is given by X̂_θ_L=ℜ_e{e^-iθ_Lb̂_4}, where θ_L is the phase of the local coherent source |γ⟩, where |γ⟩ = |γ|e^iθ_L and γ is a complex number. In this detection scheme, we define the phase sensitivity as follows Δθ_hom=√(Δ^2X̂_θ_L)/|∂⟨X̂_θ_L⟩/∂θ|. Using the field operator transformations (<ref>), we arrive at the final expression for ⟨X̂_θ_L⟩ as ⟨X̂_θ_L⟩= ℜ_e{e^-iθ_L((αα'e^-iθ_2+ββ'e^-iθ_1)⟨b̂_0⟩.. .. +(αβ'e^-iθ_1+βα'e^-iθ_2)⟨b̂_1⟩)}, and the variance of the above operator is given by Δ^2X̂_θ_L= 1/4+2ℜ_e{x^2Δ^2b̂_0+y^2Δ^2b̂_1} +2|x|^2(⟨ĝ_0⟩-|⟨b̂_0⟩|^2)+2|y|^2(⟨ĝ_1⟩-|⟨b̂_1⟩|^2), where the coefficients A and B are given by x= 1/2e^-i(θ_L+θ_2)(αα'+ββ'e^-iθ), y= 1/2e^-i(θ_L+θ_2)(αβ'e^-iθ+t'r). For scenario (b) in Fig.(<ref>), where θ_1=θ and θ_2=0, the absolute value of the derivative of ⟨X̂_θ_L⟩ with respect to θ is given by |∂⟨X̂_θ_L⟩/∂θ|=|ℜ_e{e^-i(θ_L+θ)(β⟨b̂_0⟩+α⟨b̂_1⟩)}||β'| and for scenario (c), where θ_1=-θ_2=θ/2, we obtain |∂⟨X̂_θ_L⟩/∂θ|= 1/2|ℜ_e{ie^-iθ_L((αα'e^iθ/2-ββ'e^-iθ/2)⟨b̂_0⟩.. .. 
+(βα'e^iθ/2-αβ'e^-iθ/2))⟨b̂_1⟩}| § PHASE SENSITIVITY WITH SU(1,1) COHERENT STATES IN THE INPUT In this section, we compare the phase sensitivities achievable by the three considered detection schemes, as presented in Section IV, for input SU(1,1) coherent states with the QCRBs implied by the various QFIs discussed in Section III. Using the results of the phase sensitivities reported in the previous section, i.e., Δθ_dif, Δθ_sing, and Δθ_hom, and considering the input state (<ref>), it is easy to verify that the phase sensitivities in our input states are as follows:For a difference-intensity detection scheme, we get the final analytical expression of the phase sensitivity for the two types of SU(1,1) CSs, as Δθ_dif^i=Δ_iĜ_dif/4|αα'ββ'||sinθ⟨ĝ_1⟩_i|, where the subscript i=P or B corresponds to entangled PCS (Eq.<ref>) and entangled BGCS (Eq.<ref>), respectively. Then, from the above expression of the phase sensitivity, we calculate its final analytical expression for the two types of SU(1,1) CSs, as Δθ_dif^P=√(1/2δ_A^2sinh^2 v+|δ_B|^2(cosh v-1))/4√(a)|αα'ββ'||(cosh v-1)sinθ|, Δθ_dif^B= √(δ_A^2|ξ|[ I_2a-1I_2a+1-I_2a^2]+I_2a-1I_2a)/4|αα'ββ'||sinθ|√(|ξ|)I_2a, where δ_A and δ_B were defined in equation (<ref>). To simplify notation, we define I_2a≡ I_2a(2|ξ|). Interestingly, for both scenarios (b) and (c), this detection scheme yields the same phase sensitivity result.For a single-mode intensity detection scheme, we obtain the phase sensitivity in all considered scenarios as Δθ_sing^i=Δ_iĝ_4/2|αα'ββ'||sinθ⟨ĝ_1⟩_i|, and the analytical expression of the phase sensitivity for our input SU(1,1) coherent states takes the form Δθ_sing^P=√(1/2δ_1^2sinh^2v+|δ_3|^2(cos v-1))/2√(a)|αα'ββ'||sinθ (cosh v-1)|, Δθ_sing^B=√(δ_1^2|ξ|[ I_2a-1I_2a+1-I_2a^2]+(δ_1^2+|δ_3|^2)^2I_2a-1I_2a)/2√(|ξ|)|αα'ββ'||sinθ|I_2a, where δ_1 and δ_3 were defined in equation (<ref>).Finally, for a balanced homodyne detection scheme, we have Δ^2_iX̂_θ_L=1/4+2ℜ_e{y^2Δ^2_ib̂_1}+2|y|^2(⟨ĝ_1⟩_i-|⟨b̂_1⟩_i|^2), In the case of scenario (b), where θ_1=θ and θ_2=0, and assuming θ_L=φ, the variance of the operator X̂_θ_L is given by Δ_i^2X̂_θ_L= 1/4-1/2(|αβ'|^2cos2θ+|α'β|^2+2|αα'ββ'|cosθ)μ_i +1/2(|αβ'|^2+|α'β|^2+2|αα'ββ'|cosθ)( g̅_i-|ν_i|^2), where μ_P=cosh^-4a(v/2)/tanh^2(v/2)[∑_g=0^∞√(Γ(g+2a)Γ(g+2a-2))/Γ(2a)(g-2)!tanh^2g(v/2)-1/cosh^4a(v/2)(∑_g=0^∞√(Γ(g+2a)Γ(g+2a-1))/Γ(2a)(g-1)!tanh^2g(v/2))^2], μ_B=tanh^2a-3(v/2)/I_2a-1[∑_g=0^∞tanh^2g(v/2)/(g-2)!√(Γ(g+2a)Γ(g+2a-2))-tanh^2a-1(v/2)/I_2a-1(∑_g=0^∞tanh^2g(v/2)/(g-1)!√(Γ(g+2a)Γ(g+2a-1)))^2], ν_P=(1-tanh^2(v/2))^2a∑_g=0^∞√(Γ(g+2a)Γ(g+2a-1))/Γ(2a)(g-1)!tanh^(2g-1)(v/2) ν_B=tanh^2(a-2)(v/2)/I_2a-1∑_g=0^∞tanh^2g(v/2)/(g-1)!√(Γ(g+2a)Γ(g+2a-1)), g̅_P=a(cosh v-1), g̅_B=I_2a/I_2a-1tanh(v/2). From equation (<ref>), we get |∂⟨X̂_θ_L⟩/∂θ|=1/|tan(v/2)||αβ'||cos(θ) ν_i|. Based on these results, the phase sensitivity is given by Δθ_hom^(b)= |tan(v/2)|√(Δ_i^2X̂_θ_L)/|αβ'||cos(θ) ν_P|. In the case of scenario (c), where θ_1=-θ_2=θ/2, the above variance is given by Δ_i^2X̂_θ_L =1/4-1/2((|αβ'|^2+|α'β|^2)cosθ+2|αα'ββ'|)μ_i +1/2(|αβ'|^2+|α'β|^2+2|αα'ββ'|cosθ)( g̅_i-|ν_i|^2). Based on the same reasoning, from equation (<ref>) we obtain |∂⟨X̂_θ_L⟩/∂θ|=1/2|tan(v/2)|||α'β|-|αβ'|||cos(θ/2) ν_i|, From this, we can calculate the phase sensitivity in this last scenario as Δθ_hom^(c)= |2tan(v/2)|√(Δ_i^2X̂_θ_L)/||α'β|-|αβ'|||cos(θ/2) ν_i|. 
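The closed-form sensitivities Δθ_dif^P and Δθ_dif^B can be evaluated directly. A short Python sketch is given below for the balanced beam-splitter case α=α'=1/√2, β=β'=i/√2, for which δ_A=-cosθ and |δ_B|=|sinθ| (so that δ_A^2+|δ_B|^2=1 is respected) and 4|αα'ββ'|=1; the values a=1 and v=1 follow the figure discussion, while the Barut-Girardello amplitude ξ is kept as an independent input since its relation to v is fixed elsewhere in the text.

import numpy as np
from scipy.special import iv          # modified Bessel function of the first kind

def delta_A(theta):                   # balanced BSs: alpha = alpha' = 1/sqrt(2), beta = beta' = i/sqrt(2)
    return -np.cos(theta)

def delta_B_sq(theta):
    return np.sin(theta)**2           # |delta_B|^2, with delta_A^2 + |delta_B|^2 = 1

def dtheta_dif_P(theta, a=1.0, v=1.0):
    num = np.sqrt(0.5*delta_A(theta)**2*np.sinh(v)**2 + delta_B_sq(theta)*(np.cosh(v) - 1.0))
    return num/(np.sqrt(a)*np.abs((np.cosh(v) - 1.0)*np.sin(theta)))

def dtheta_dif_B(theta, a=1.0, xi=1.0):
    I = lambda order: iv(order, 2.0*xi)              # I_{2a} is shorthand for I_{2a}(2|xi|)
    num = np.sqrt(delta_A(theta)**2*xi*(I(2*a - 1)*I(2*a + 1) - I(2*a)**2) + I(2*a - 1)*I(2*a))
    return num/(np.abs(np.sin(theta))*np.sqrt(xi)*I(2*a))

thetas = np.linspace(0.05, np.pi - 0.05, 200)
print(thetas[np.argmin(dtheta_dif_P(thetas))], dtheta_dif_P(thetas).min())
print(thetas[np.argmin(dtheta_dif_B(thetas))], dtheta_dif_B(thetas).min())

Sweeping θ in this way locates the optimal working point of each input state numerically and complements the plots discussed below.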
Fig.(<ref>) presents the variation of the four phase sensitivities as a function of the MZI's phase shift for both the Perelomov and Barut-Girardello coherent input states with parameters a=1 and v=1. The figure shows these phase sensitivities for the three detection schemes discussed previously, alongside the corresponding three QCRBs. We plot the difference-intensity detection and single-mode detection schemes for the case of a balanced beam splitter, i.e., α=1/√(2) and β=i/√(2), considering its optimal setup. For the balanced homodyne detection scheme, we consider transmission coefficients α=1 for the first beam splitter and α'=0 for the second beam splitter, implying its optimal performance. The gray curve and the green curve in Fig.<ref>(a) and Fig.<ref>(b) correspond to the difference intensity detection scheme and the single-mode detection scheme, respectively. As can be seen in this figure, both curves have an optimum that achieves the QCRB, which is implied by the two-parameter QFI. The blue and orange lines are for two balanced homodyne detection schemes. As observed in this figure, these phase sensitivities reach the QCRBs. For the chosen values, the phase sensitivity Δθ_hom^(b) shows significantly better performance than the phase sensitivity Δθ_dif. This suggests an advantage to using an external phase reference for both the Perelomov coherent input state and the Barut-Girardello coherent input state. The input state, including the local oscillator, can be expressed as |ψ⟩=|ψ_in⟩⊗|γ⟩=|ξ_i,a⟩⊗||γ|e^iθ_L⟩. Our interferometer consists of two arms: one involving input port 1, passing through the first beam splitter with total transmission, phase shift θ_1, the second beam splitter with total reflection, and reaching the beam splitter (BSL). The other arm represents the local oscillator fed into the balanced beam splitter of the homodyne setup. To compare the performance of optimal phase estimation in different detection schemes for the two types of SU(1,1) CSs, we use a technique where we introduce the ratio between the phase sensitivities of these two states in different detection schemes as R=Δθ_i/Δθ_j, with i representing the Perelomov coherent input state and j representing the Barut-Girardello coherent input state. As a result, when the ratio R<1, the error limit of the phase sensitivity for the Perelomov coherent input state in different detection schemes is smaller and offers an advantage over that of the Barut-Girardello coherent input state. As shown in Fig.(<ref>), we observe that the values of the phase sensitivities for the two states satisfy the inequality Δθ_P<Δθ_B, indicating that the performance of the phase sensitivities for the Perelomov coherent input state is better and would provide a more precise result than that of the Barut-Girardello coherent input state. § CONCLUSION Optimizing the sensitivity of a MZI requires careful consideration of both the input state and the detection scheme. QFI serves as a valuable tool to identify the optimal operating points that achieve the highest possible sensitivity. This paper presents theoretical calculations of QCRBs for both two-parameter and single-parameter estimation in quantum interferometry, considering two input scenarios. We explore the performance of Perelomov and Barut-Girardello coherent input states within the SU(1,1) Lie algebra. We investigate their phase sensitivity across various detection schemes, including difference-intensity, single-mode, and balanced homodyne detection. 
Furthermore, we analyze the QCRB associated with the QFI obtained for these states in all the aforementioned scenarios. Our results demonstrate that the phase sensitivities for the Perelomov coherent input state in different detection schemes are better and would provide a more precise result than that of the Barut-Girardello coherent input state. The use of balanced homodyne detection techniques has been studied. The availability of an external phase reference can significantly enhance the performance of these input states, particularly in the unphysical limits where the transmission coefficients of the beam splitters approach |α|→ 1 and |α'|→ 0. This suggests that an external phase reference can improve the performance of these input states in quantum interferometry, potentially leading to improved measurement precision. Data Availability: No data associated in the manuscript. Disclosures: The authors declare no conflicts of interest 88 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty Peters2001 A. Peters, K. Y. Chung, and S. Chu, High-precision gravity measurements using atom interferometry, Metrologia, 38 (2001) 25. Fixler2007 J. B. Fixler, G. T. Foster, J. M. McGuirk, and M. A. Kasevich, Atom Interferometer Measurement of the Newtonian Constant of Gravity, Science, 315 (2007) 74–77. Abbott2016 B. P. Abbott, R. Abbott, T. Abbott, et al, Observation of gravitational waves from a binary black hole merger, Phys. Rev. Lett. 116 (2017) 061102. Abbott2017 B. P. Abbott, R. Abbott, T. Abbott, et al, Observation of ravitational Waves from a Binary Neutron Star Inspiral, Phys. Rev. Lett. 119 (2017) 161101. Ou2020 Z. Y. Ou and X. Li, Quantum SU (1, 1) interferometers: Basic principles and applications, APL Photonics, 5 (2020) 8. Gerry2001 C. C. Gerry and R. A. Campos, Generation of maximally entangled photonic states with a quantum-optical Fredkin gate, Phys. Rev. A, 64 (2001) 063814. Pezze2008 L. Pezzé and A. Smerzi, Mach-Zehnder interferometry at the Heisenberg limit with coherent and squeezed-vacuum light, Phys. Rev. Lett. 100 (2008) 073601. Berry2009 D. W. Berry, B. L. Higgins, S. D. Bartlett, et al., How to perform the most accurate possible phase measurements, Phys. Rev. A, 80 (2009) 052114. Liu2010 Y. C. Liu, G. R. Jin and L. You, Quantum-limited metrology in the presence of collisional dephasing, Phys. Rev. A, 82 (2010) 045601. Giovannetti2011 V. Giovannetti, S. Lloyd and L. Maccone, Advances in quantum metrology, Nature photonics, 5 (2011) 222–229. Birrittella2012 R. Birrittella, J. Mimih and C. C. Gerry, Multiphoton quantum interference at a beam splitter and the approach to Heisenberg-limited interferometry, Phys. Rev. A, 86 (2012) 063828. Lu2012 X. M. Lu, S. Luo and C. H. Oh, Hierarchy of measurement-induced Fisher information for composite states, Phys. Rev. A, 86 (2012) 022342. Gerry2012 A. W. Chin, S. F. Huelga and M. B. Plenio, Quantum metrology in non-Markovian environments, Phys. Rev. Lett. 109 (2012) 233601. Gagatsos2013 C. N. Gagatsos, O. Oreshkov and N. J. Cerf, Majorization relations and entanglement generation in a beam splitter, Phys. Rev. A, 87 (2013) 042307. Slaoui2023 A. Slaoui, B. Amghar and R. A. 
Laamara, Interferometric phase estimation and quantum resource dynamics in Bell coherent-state superpositions generated via a unitary beam splitter, JOSA B, 40 (2023) 2013–2027. Nagata2007 T. Nagata, R. Okamoto, J. L. O’Brien, et al., Beating the standard quantum limit with four-entangled photons, Science, 316 (2007) 726. Estève2008 J. Estève, C. Gross, A. Weller, et al., Squeezing and entanglement in a Bose–Einstein condensate, Nature, 455 (2008) 1216. Appel2009 J. Appel, P. J. Windpassinger, D. Oblak, et al., Mesoscopic atomic entanglement for precision measurements beyond the standard quantum limit, PNAS, 106 (2009) 10960. Riedel2010 M. F. Riedel, P. Böhi, Y. Li, et al., Atom-chip-based generation of entanglement for quantum metrology, Nature, 464 (2010) 1170. Abbott2009 B. P Abbott, R. Abbott, R. Adhikariet et al., LIGO: the laser interferometer gravitational-wave observatory, Rep. Prog. Phys. 72 (2009) 076901. Demkowicz2013 R. Demkowicz-Dobrzański, K. Banaszek, and R. Schnabel, Fundamental quantum interferometry bound for the squeezed-light-enhanced gravitational wave detector GEO 600, Phys. Rev. A, 88 (2013) 041802. Grote2013 H. Grote, K. Danzmann, K. L. Dooley, R. Schnabel, J. Slutsky, and H. Vahlbruch, First long-term application of squeezed states of light in a gravitational-wave observatory, Phys. Rev. Lett. 110 (2013) 181101. Aasi2013 J. Aasi, J. Abadie, B. P. Abbott, et al., Enhanced sensitivity of the LIGO gravitational wave detector by using squeezed states of light, Nature Photonics, 8 (2013) 613-619. Oelker2014 E. Oelker, L. Barsotti, S. Dwyer, D. Sigg, and N. Mavalvala, Squeezed light for advanced gravitational wave detectors and beyond, Optics express, 32 (2014) 21106–21121. Acernese2014 F. Acernese, M. Agathos, K. Agatsuma, et al., Advanced Virgo: a second-generation interferometric gravitational wave detector, Class. Quantum Grav. 32 (2014) 024001. Scientific2017 L. I. G. O. Scientific, B. P. Abbott, R. Abbott, et al., GW170104: observation of a 50-solar-mass binary black hole coalescence at redshift 0.2, Phys. Rev. Lett. 118 (2017) 221101. Vahlbruch2018 H. Vahlbruch, D. Wilken, M. Mehmet, and B. Willke, Laser power stabilization beyond the shot noise limit using squeezed light, Phys. Rev. Lett. 121 (2018) 173601. Mehmet2018 M. Mehmet and H. Vahlbruch, High-efficiency squeezed light generation for gravitational wave detectors, Class. Quantum Grav. 36 (2018) 015014. Tse2019 M. E. Tse, H. Yu, N. Kijbunchoo, al., Quantum-enhanced advanced LIGO detectors in the era of gravitational-wave astronomy, Phys. Rev. Lett. 123 (2019) 231107. Bergh1981 R. A. Bergh, H. C. Lefevre and H. J. Shaw, All-single-mode fiber-optic gyroscope with long-term stability, Optics Letters, 6 (1981) 502–504. Li2017 J. Li, M.-G. Suh, and K. Vahala, Microresonator brillouin gyroscope, Optica, 4 (2017) 346–-348. Liang2017 W. Liang, V. S. Ilchenko, A. A. Dale, et al., Resonant microphotonic gyroscope, Optica, 4 (2017) 114-–117. Khial2018 P. P. Khial, A. D. White, and A. Hajimiri, Nanophotonic optical gyroscope with reciprocal sensitivity enhancement, Nature Photonics, 12 (2018) 671–-675. Caves1981 C. M. Caves, Quantum-mechanical noise in an interferometer, Physical Review D, 23 (1981) 1693–1708. Xiao1987 M. Xiao, L.-A. Wu, and H. J. Kimble, Precision measurement beyond the shot-noise limit, Phys. Rev. Lett. 59 (1987) 278. Boto2000 A. N. Boto, P. Kok, D. S. Abrams, S. L. Braunstein, C. P. Williams, and J. P. 
Dowling, Quantum Interferometric Optical Lithography: Exploiting Entanglement to Beat the Diffraction Limit, Phys. Rev. Lett. 85 (2000) 2733. Steinlechner2013 S. Steinlechner, J. Bauchrowitz, M. Meinders, H. M¨ullerEbhardt, K. Danzmann, and R. Schnabel, Quantumdense metrology, Nature Photonics, 7 (2013) 626. Bollinger1996 J. J. Bollinger, W. M. Itano, D. J. Wineland, and D. J. Heinzen, Optimal frequency measurements with maximally correlated states, Phys. Rev. A, 54 (1996) R4649. Dowling2008 J. P. Dowling, Quantum optical metrology–the lowdown on high-N00N states, Contemporary physics, 49 (2008) 125. Pezze2013 L. Pezzé and A. Smerzi, Ultrasensitive two-mode interferometry with single-mode number squeezing, Phys. Rev. Lett. 110 (2013) 163604. Anisimov2010 P. M. Anisimov, G. M. Raterman, A. Chiruvelli, et al., Quantum metrology with two-mode squeezed vacuum: parity detection beats the Heisenberg limit, Phys. Rev. Lett. 96 (104) 103602. Ou1997 Z. Y. Ou, Fundamental quantum limit in precision phase measurement, Phys. Rev. A, 55 (1997) 2598. Giovannetti2006 V. Giovannetti, S. Lloyd, and L. Maccone, Quantum metrology, Phys. Rev. Lett. 96 (2006) 010401. Ataman2020 S. Ataman, Single-versus two-parameter Fisher information in quantum interferometry, Phys. Rev. A, 102 (2020) 1. Braunstein1994 S. L. Braunstein and C. M. Caves, Statistical distance and the geometry of quantum states, Phys. Rev. Lett. 72 (1994) 3439. Demkowicz2012 R. Demkowicz-Dobrzański, J. Kołodyński, and M. Guţă, The elusive Heisenberg limit in quantum-enhanced metrology, Nature communications, 3 (2012) 1063. Pezze2015 L. Pezzé, P. Hyllus, and A. Smerzi, Phase-sensitivity bounds for two-mode interferometers, Phys. Rev. A, 91 (2015) 032103. Yurke1986 B. Yurke, S. L. McCall, and J. R. Klauder, SU(2) and SU(1, 1) interferometers, Phys. Rev. A, 33 (2015) 4033. Helstrom1973 C. Helstrom, Minimum mean-squared error of estimates in quantum statistics, Phys. Lett. A, 25 (1967) 101. Helstrom1968 C. Helstrom, The minimum variance of estimates in quantum signal detection, IEEE Transactions on information theory, 14 (1968) 234. Ataman2019 S. Ataman, Optimal Mach-Zehnder phase sensitivity with Gaussian states, Phys. Rev. A, 100 (2019) 6. Holevo1973 A. S. Holevo, Statistical decision theory for quantum systems, Journal of multivariate analysis, 3 (1973) 337–394. Wu2019 J.-Y. Wu, N. Toda, and H. F. Hofmann, Quantum enhancement of sensitivity achieved by photon-number-resolving detection in the dark port of a two-path interferometer operating at high intensities, Phys. Rev. A, 100 (2019) 013814. Abouelkhir2023 N. Abouelkhir, H. E. Hadfi, A. Slaoui and R. A. Laamara, A simple analytical expression of quantum Fisher and Skew information and their dynamics under decoherence channels, Physica A: Statistical Mechanics and its Applications, 612 (2023) 128479. Ikken2023N. Ikken, A. Slaoui, R. Ahl Laamara, and L. B. Drissi, Bidirectional quantum teleportation of even and odd coherent states through the multipartite Glauber coherent state: Theory and implementation, Quantum Inf Process, 22 (2023) 391. Perelomov1977 A. M. Perelomov, Generalized coherent states and some of their applications, Sov. Phys. Usp. 20 (1977) 703. Barut1971 A. O. Barut and L. Girardello, New “coherent” states associated with non-compact groups, Commun. Math. Phys. 21 (1971) 41–55. Jarzyna2012 M. Jarzyna and R. Demkowicz-Dobrzański, Quantum interferometry with and without an external phase reference, Phys. Rev. A, 85 (2012) 011801. Lang2013 M. D. Lang and C. M. 
Caves, Optimal quantum-enhanced interferometry using a laser power source, Phys. Rev. Lett. 111 (2013) 173601. Ataman2022 S. Ataman, Quantum Fisher information maximization in an unbalanced interferometer, Phys. Rev. A, 105 (2022) 1. Takeoka2017 M. Takeoka, K. P. Seshadreesan, C. You, S. Izumi, and J. P, Fundamental precision limit of a Mach-Zehnder interferometric sensor when one of the inputs is the vacuum, Phys. Rev. A, 96 (2017) 052118. Liu2020 J. Liu, H. Yuan, X. M. Lu and X. Wang, Quantum Fisher information matrix and multiparameter estimation, Journal of Physics A: Mathematical and Theoretical, 53 (2020) 023001. Abouelkhir2023(2) N. E. Abouelkhir, A. Slaoui, H. El Hadfi and R. A. Laamara, Estimating phase parameters of a three-level system interacting with two classical monochromatic fields in simultaneous and individual metrological strategies, JOSA B, 40 (2023) 1599–1610. dAriano1994 G. M. d’Ariano and M. G. A. Paris, Lower bounds on phase sensitivity in ideal and feasible measurements, Phys. Rev. A, 49 (1994) 3022.
http://arxiv.org/abs/2406.08960v1
20240613094931
AirPlanes: Accurate Plane Estimation via 3D-Consistent Embeddings
[ "Jamie Watson", "Filippo Aleotti", "Mohamed Sayed", "Zawar Qureshi", "Oisin Mac Aodha", "Gabriel Brostow", "Michael Firman", "Sara Vicente" ]
cs.CV
[ "cs.CV" ]
Gatemonium: A Voltage-Tunable Fluxonium Javad Shabani June 17, 2024 ======================================= § ABSTRACT Extracting planes from a 3D scene is useful for downstream tasks in robotics and augmented reality. In this paper we tackle the problem of estimating the planar surfaces in a scene from posed images. Our first finding is that a surprisingly competitive baseline results from combining popular clustering algorithms with recent improvements in 3D geometry estimation. However, such purely geometric methods are understandably oblivious to plane semantics, which are crucial to discerning distinct planes. To overcome this limitation, we propose a method that predicts multi-view consistent plane embeddings that complement geometry when clustering points into planes. We show through extensive evaluation on the ScanNetV2 dataset that our new method outperforms existing approaches and our strong geometric baseline for the task of plane estimation. § INTRODUCTION While only parts of the real world are perfectly planar, a 3D reconstruction made out of planes is a useful parameterization for many downstream tasks. A planar scene reconstruction is a common representation for applications in robotics <cit.>, path planning <cit.>, and augmented reality (AR) <cit.>. For example, both ARKit <cit.> and ARCore <cit.>, two of the most used AR platforms, provide 3D plane estimation from scenes as part of their frameworks. Broadly, there are two families of approaches for 3D plane extraction from images: geometric versus learning-based methods. Geometry-based pipelines assume access to a point cloud or mesh of the scene, as estimated from multi-view stereo or LIDAR. This geometry is then partitioned into planes using geometric cues, using RANSAC <cit.>. The disadvantages of these approaches are that they can be sensitive to noisy data and they do not typically encode learned priors to facilitate robust plane estimation. In contrast, learning-based methods make use of supervised data to develop models that can predict plane parameters from raw images. Many prior works have focused on the task of extracting planes from single input images <cit.>. In practice though, it is more common to have a sequence of input images of the scene of interest, in AR applications where the user is interacting with new parts of the scene in real-time. There is however, only limited work that extends these learning-based single image methods to the multi-image setting <cit.>. Inspired by recent work in interactive labeling <cit.>, we propose an alternative approach to discovering planes in 3D. We train a small MLP network for each scene, which maps any 3D location in that scene to an embedding vector. Using various 2D and 3D cues, we train the MLP to produce embeddings which are 3D consistent and can be easily clustered to uncover distinct and accurate planar regions. By exploiting learned cues when decomposing a scene into planes, our method can adapt to different definitions of what constitutes a plane. This is important because the concept of what counts as a plane is application dependent. For example, a painting on the wall can be considered either a distinct plane or part of the wall plane, depending on the application. Unlike purely geometric definitions, our method learns what is considered a plane based on what is “encoded” in the training data. Our core contribution is a new method that estimates 3D-consistent plane embeddings from a sequence of posed RGB images, and then groups them into planar instances. 
We demonstrate via extensive evaluation that our method is more accurate than recent end-to-end learning-based approaches, and can run at interactive speeds. We also make a surprising observation by proposing an additional strong `geometry plus RANSAC' baseline. It can achieve impressive accuracy, outperforming existing baselines, ranking second place behind our proposed method. § RELATED WORK Planes from single images. Although estimating 3D planes from single images is an ill-posed problem, multiple deep learning solutions have been proposed. Top-down approaches <cit.> directly predict a mask and the parameters of each plane. In contrast, bottom-up approaches <cit.> first map pixels into embeddings, which can subsequently be clustered into planes (via clustering methods <cit.>). More recent works <cit.> leverage the query learning mechanism of Vision Transformers <cit.> to achieve single-image results. These methods process frames independently, so are unable to produce temporally and 3D-consistent planes. As a result, they would require non-trivial plane tracking mechanisms to match the same plane across different frames over time. In contrast, we leverage multi-view image sequences, which enables planes to be estimated in 3D rather than just from single images. We note that some works use planarity assumptions to regularize depth maps <cit.> or to improve 3D scenes <cit.> and poses <cit.>. In contrast, our aim is to find a high quality planar decomposition of the scene, rather than to use planarity for regularization in downstream tasks. Planes from 3D and multi-view images. The extraction of geometric primitives, such as planes, from 3D point clouds is an established problem <cit.>. RANSAC <cit.> and the Hough transform <cit.> are popular strategies to help fit planes, and other 3D shapes <cit.>, to 3D data. While a small number of works start from multi-view stereo estimated point clouds <cit.>, the vast majority of plane extraction methods assume access to higher-quality 3D LIDAR scans <cit.>. These methods can be slow, not suitable for real-time AR applications, and they cannot easily cope with non-trivial amounts of noise in the input point clouds. To address noise, existing methods have attempted to enforce simple to define priors during reconstruction such as a Manhattan-world assumption <cit.>, object/scene symmetry <cit.>, or via user interaction <cit.>. Methods that only use geometry are fundamentally limited by the quality of the 3D information provided to them. In contrast, learning-based methods can learn to compensate for such issues and can also generate planar decompositions that better align with the semantic content of the scene. Learning-based methods have been proposed for estimating planes from a limited number of input images <cit.>. However, extending these methods to entire videos is not trivial. Most related to us, PlanarRecon <cit.> is one of the first learning-based methods to predict a planar representation of entire 3D scenes. They incrementally detect and reconstruct 3D planes from posed RGB sequences, where 3D planes are detected in video fragments before the fragments are fused into a consistent planar reconstruction. The pipeline is somewhat complex and contains expensive operations such as 3D convolutions, recurrent units, and differentiable matching. 
In contrast, we trade such complexity for an efficient non-plane-based 3D scene reconstruction method <cit.>, which provides reliable scene geometry estimates suitable for input to our plane estimation method. Finally, room layout estimation can also leverage multiple images <cit.>. However, their extreme scene simplification is only suitable for a limited number of applications. 2D and 3D segmentation. Our task of dividing a 3D scene into planes has some similarities with 3D semantic <cit.> or panoptic instance <cit.> segmentation. These methods are less applicable to our problem because they aim to segment objects or semantic regions without special regard for geometric properties. Recent works have leveraged NeRFs to obtain a consistent semantic <cit.> or panoptic <cit.> scene representation. Our method follows this direction by also using test time optimization. However, unlike <cit.> we use an online rather than offline reconstruction method, and do not need to perform linear assignment for every frame. Scene-level embeddings. Our key innovation is to use per-scene 3D embeddings to represent planes. We are inspired by previous works, iMap <cit.> and iLabel <cit.>, who showed how emergent embeddings can be used for interactive reconstruction and labeling. We are inspired by these works, but instead of encoding scene geometry or semantic labels, we encode plane embeddings trained to be multi-view consistent. Related, there are works that optimize 3D embeddings from 2D supervision, to ground 2D vision-language features <cit.> in 3D <cit.>. However, unlike our reconstruction focus, their aim is to ground open-vocabulary semantic queries in 3D. Representations for 3D reconstruction. The focus of our work is planar scene representations, but there are many alternatives to planes. For example, TSDFs encode shape volumetrically. They can be generated by estimating depth, from multi-view stereo <cit.>, or directly via more expensive 3D convolutions <cit.>. Subsequent methods <cit.> have extended neural TSDF estimation to the online setting. Implicit functions are an alternative representation which have been used to map from 3D space to occupancy <cit.>. In the context of SLAM, implicit neural strategies have been developed <cit.> that are able to encode scene geometry using a multi-layer perceptron (MLP). Finally, further from our task, the recent success of NeRFs <cit.> for realistic novel view synthesis has paved the way for methods that apply volume rendering to represent a scene using a neural network <cit.>. § METHOD We take as input a sequence of color images , each associated with a known camera pose. We aim to predict a representation of the imaged 3D scene, where surfaces are segmented into constituent planes. We follow the definition of planes from previous work <cit.> where there can be semantic separation between parts, nearby table-tops should each have a different plane and a closed door should have a different plane to the wall enclosing it. Our approach estimates planes by first reconstructing the 3D geometry of the scene using a mesh representation. We then train a network that maps each point on the mesh to a 3D-consistent embedding space, such that points on the same plane map to nearby places in the space. These embeddings implicitly encode semantic instance information and geometric cues. Semantics complement the 3D geometry information provided by the mesh computed via a lightweight multi-view stereo system. 
We then use a clustering algorithm on the geometry and embeddings to compute accurate plane assignments. All the steps in our method support online inference. An overview of our approach is shown in Fig. <ref>. §.§ Learning 3D planar embeddings Our key innovation is to learn a mapping from each 3D point 𝐩 in a reconstructed scene to an embedding _𝐩, such that points on the same plane map to nearly the same place in embedding space, while points on different planes map to different places. We denote these as `3D embeddings', where 3D refers to the fact that the embeddings encode per-scene, and not per-image, planar information. We first review how existing single image embedding networks are trained, before describing how we distill these pixel-wise embeddings into a 3D-consistent embedding. Single image embeddings. In the case of plane estimation from monocular images, <cit.> train a feedforward network to map a single color image to per-pixel embeddings. Pixels i and j in the same image are mapped to embeddings _i and _j respectively, where _i is similar to _j, if and only if i and j are in the same plane. This is achieved by training a network which takes as input a single image and outputs a per-pixel embedding, using two losses: a pull loss penalizing pixel embeddings _i that are different from the mean embedding of their corresponding plane; and a push loss encouraging mean embeddings for each plane to be different from each other. One option to obtain 3D embeddings could be to find all pixels that correspond to the reprojection of a 3D point across multiple views and average their per-pixel embeddings. The issue with this approach can be seen in Fig. <ref>. Here, the per-pixel embeddings are not consistent across views, despite encoding valuable planar instance information for each individual view. This is ablated in Sec. <ref> as `embeddings w/o test-time optimization'. Consistent 3D embeddings. Our goal is to learn embeddings that preserve the properties of the per-pixel embeddings, while being consistent across views. We achieve this goal by learning a per-scene mapping function , which is parameterized as an MLP and is optimized at test time, following recent work <cit.>. Our network takes as input a 3D point 𝐩 and predicts its `3D' embedding _𝐩 = (𝐩). Single image embeddings distillation loss. Our network is trained to distill information contained in the per-pixel embeddings . For a pair of pixels i and j in a single image, we take their embeddings _i and _j. We also know their corresponding 3D positions _i and _j and their image-space normals _i and _j. We can then train the network such that (_i) is similar to (_j), if and only if their corresponding embeddings in image space _i and _j are similar and their normals (_i and _j) are also similar. Inspired by the push-pull loss used for the single image embeddings <cit.>, we use the following loss to encourage this: L_ = ‖(_i) - (_j) ‖, if ‖_i - _j ‖ < t_e and _i ·_j > t_n max(0, t_p - ‖(_i) - (_j) ‖), otherwise, where t_e is a pull threshold on embeddings, t_n is a threshold on normals, and t_p is a push threshold. This loss is applied to sampled pairs of points on the same image. §.§ 3D geometry estimation To estimate planes, we use our 3D embeddings alongside an initial estimate of scene geometry. To estimate an accurate 3D mesh we use SimpleRecon <cit.>, a state-of-the-art 3D reconstruction system that requires posed images as input. 
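Before detailing the geometry pipeline further, a minimal PyTorch sketch of the pairwise distillation loss above is given below; the batch layout and variable names are assumptions, and the threshold defaults follow the values reported in the implementation details.

import torch

def distillation_loss(f_i, f_j, e_i, e_j, n_i, n_j, t_e=0.9, t_n=0.8, t_p=1.0):
    # f_i, f_j: (B, D)  3D embeddings from the per-scene MLP at points p_i, p_j
    # e_i, e_j: (B, D') per-pixel 2D embeddings from the single-image network
    # n_i, n_j: (B, 3)  image-space normals
    same_plane = ((e_i - e_j).norm(dim=-1) < t_e) & ((n_i * n_j).sum(dim=-1) > t_n)
    d = (f_i - f_j).norm(dim=-1)
    pull = d                                   # pull together when the 2D cues agree
    push = torch.clamp(t_p - d, min=0.0)       # hinge pushing apart otherwise
    return torch.where(same_plane, pull, push).mean()

Pairs are sampled within a keyframe (and its most recent predecessors), as described in the implementation details, and the averaged loss is minimised to optimise the per-scene MLP.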
In it, depth maps are estimated using a multi-view stereo net, then fused into a 3D mesh via a truncated signed distance function (TSDF) <cit.>. We adapt their network to additionally predict a planar/non-planar probability, assigning a per-pixel value indicating if that pixel belongs to a planar or non-planar region, trained equivalently to the single-image plane estimator of <cit.>. Our novelty is to then combine these per-pixel predictions into 3D as an additional channel in the TSDF. When extracting the mesh, we exclude voxels that have an aggregated non-planar value of less than p = 0.25, so that non-planar regions are not part of the final mesh. This extracted mesh is one of the inputs to the next steps. §.§ Plane grouping Given an embedding for each vertex in our 3D mesh, our next step is to cluster vertices into plane instances based on those embeddings and on geometry information defined by the mesh. For this clustering step we rely on sequential RANSAC <cit.>. RANSAC works by randomly sampling plane instance proposals, checking the inlier count for each proposal, and selecting the plane instance with the most inliers. This process is done sequentially, where at each iteration the points associated with the last predicted plane are removed from the pool. Each plane instance proposal is created by sampling a single mesh vertex, which together with its associated normal, defines a plane. A different mesh vertex is considered an inlier to this plane proposal if: (i) the distance to the plane is smaller than a threshold r_d and (ii) the euclidean difference between embeddings is smaller than a threshold r_e. After convergence, we merge planes with highly similar embeddings and normals, where the distance between average embeddings is < 0.2 and the dot product between average normals is > 0.6. Next, we run a connected components algorithm on the mesh representation of each discovered plane in turn, to separate out non-contiguous planes. Since the non-planar vertices have already been removed as explained in Sec. <ref>, we expect all remaining vertices to be assigned a plane instance label. RANSAC, however does not guarantee this. For this reason, we run a post-processing step that iteratively propagates labels to connected unlabeled points from the RANSAC step. Finally, we remove planes with fewer than 100 vertices. §.§ Online inference All components of our method are designed so that they can run online with little adaptation. The 3D geometry estimation steps, depth estimation, fusion into TSDF, and mesh extraction, are commonly used in online systems <cit.>. Our per-scene embedding network is always updated in an online fashion, similar to <cit.>. Given the current 3D mesh and the current embedding network, embeddings can be predicted for all mesh vertices. We then perform clustering to extract plane instances. To achieve interactive speeds for our online method, we replace our RANSAC clustering method, which takes 131ms on average per scene, with the mean-shift algorithm <cit.> using the efficient implementation from <cit.>, which takes 25ms. We evaluate this alternative clustering algorithm in the experimental section. Finally, each time we recompute planes, we use Hungarian matching <cit.> between the previous and current plane assignments to encourage consistency of planes across time (visible in the figure as stability of colors over time, while new planes are computed). Fig. <ref> shows an online reconstruction obtained with our method for a ScanNetV2 scene. 
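A compact sketch of the grouping step described above is given below. It follows the sequential RANSAC procedure with the point-to-plane and embedding-distance thresholds r_d and r_e; the plane merging, connected-components, and label-propagation post-processing are omitted for brevity, and the proposal count is an assumption.

import numpy as np

def sequential_ransac(verts, normals, emb, r_d=0.1, r_e=0.5,
                      n_proposals=500, min_vertices=100, seed=0):
    # verts, normals: (V, 3); emb: (V, E) per-vertex 3D plane embeddings
    rng = np.random.default_rng(seed)
    labels = -np.ones(len(verts), dtype=int)
    remaining = np.arange(len(verts))
    plane_id = 0
    while len(remaining) > min_vertices:
        best = None
        for _ in range(n_proposals):
            s = rng.choice(remaining)                      # a vertex and its normal define a plane proposal
            dist = np.abs((verts[remaining] - verts[s]) @ normals[s])
            edist = np.linalg.norm(emb[remaining] - emb[s], axis=1)
            inliers = remaining[(dist < r_d) & (edist < r_e)]
            if best is None or len(inliers) > len(best):
                best = inliers
        if best is None or len(best) < min_vertices:
            break
        labels[best] = plane_id
        remaining = np.setdiff1d(remaining, best)
        plane_id += 1
    return labels                                          # -1 marks vertices left unassigned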
§.§ Sequential RANSAC: A strong baseline Given recent advances in 3D scene reconstruction from image inputs,  <cit.>, the question arises: How good a planar decomposition can we achieve if we run RANSAC on the mesh only, without the contribution of our 3D embeddings? Surprisingly, later results show this simple baseline performs very well. However, while this naive approach takes geometry into account, it does not leverage semantic or appearance-based cues, leading to plane over- and under-segmentation issues (see Fig. <ref>). Our method, using 3D plane embeddings, addresses these problems. § IMPLEMENTATION DETAILS Depth, plane probabilities, and per-pixel embedding network architecture. We use the SimpleRecon <cit.> architecture for depth estimation. Encoder features are shared between the depth estimation, plane probabilities, and per-pixel embedding tasks, though they have separate decoders. Full architecture details are in the supplementary material. Embedding MLP network. We use a three-layer MLP with 128 dimensions for each hidden layer. Following <cit.>, we lift the input to the MLP to 48 periodic activation functions before it is input to the first linear layer. Our final embedding has three dimensions. We use t_e = 0.9, t_n = 0.8 and t_p = 1.0, tuned on the validation set. Similarly to <cit.>, the MLP is always trained in an online fashion. For each new keyframe we sample 400 pixels from it and apply Eqn. (<ref>) to each pair of points, together with the pairs from the 10 most recent keyframes. We then run backpropagation ten times to optimize the MLP. Grouping thresholds. For RANSAC we set r_e = 0.5 and r_d = 0.1. We set the mean-shift bandwidth to 0.25. Mesh planarization. Given our final assignment of points to planes, we perform mesh planarization to convert our 3D mesh into a planarized mesh. First, we estimate the plane equation for each plane. Next, each point is moved along the normal of its assigned plane such that it lies on the plane it is assigned to. This is the mesh which is geometrically evaluated against the ground truth planarized mesh. § EXPERIMENTS We train and evaluate on ScanNetV2 <cit.>, because <cit.> provided ground truth plane annotations for most of it. Plane annotations are unavailable for the ScanNetV2 test set. We therefore split the official ScanNetV2 validation set into new plane evaluation validation and test splits, dubbed and , with 80 and 100 scenes respectively. For a fair comparison with prior work, we re-evaluate baselines on our new test split. The new splits and our evaluation code are available at https://nianticlabs.github.io/airplanes/https://nianticlabs.github.io/airplanes/. §.§ Evaluation metrics Geometric evaluation. Here, we evaluate how well the predicted planar mesh approximates the geometry of the ground truth planar mesh. Following <cit.> we adopt conventional 3D metrics <cit.>. To compare a predicted mesh with the ground truth mesh, we first sample N=200,000 points from each mesh. We then compare the two sampled point clouds to each other using chamfer distance and f1 score. See <cit.> for details. Fully volumetric methods such as <cit.> predict geometry for the whole scene, including unobserved regions. To prevent such methods from being penalized unfairly, we enforce a visibility mask to handle unseen points differently when computing metrics, following <cit.>. This visibility mask is applied to all methods for fair comparison. 
We also mask out 3D points sampled on faces that connect two or more planes, as these points have ambiguous labeling. For full transparency, we report numbers in the supplementary material using the evaluation method from <cit.> without our additions. Plane segmentation evaluation. Following previous work on plane estimation <cit.>, we also report the following clustering metrics: Variation of Information (VOI), Rand Index (RI), and Segmentation Covering (SC). Given a predicted mesh, we use the protocol proposed in <cit.> to map the plane ID of each vertex to the closest vertex in the ground truth mesh. See <cit.> for full details. Planar metrics. To better evaluate how well the main, large planes in the ground truth scene, are reconstructed we additionally propose the following protocol. We select the k=20 largest planes from each ground truth mesh. For each such plane _j, we find the predicted plane _i that most closely matches according to the completion metric. We report the fidelity between _j and _i as completion(_j, _i), where completion is the completion metric from <cit.>. The average of this score over all k ground truth planes over all scenes is our planar fidelity score. We also report the geometric accuracy between _j and _i as planar accuracy, and the average of the two as planar chamfer. §.§ Comparisons with baselines We evaluate our 3D plane estimation method against various baselines (Table <ref>). PlanarRecon <cit.> is the existing state-of-the-art method for 3D plane estimation from posed RGB images. We outperform their approach in geometry, segmentation, and planar metrics. We also compare with the leading baseline for 3D plane estimation from a single image, PlaneRecTR <cit.>. For each scene, we run this single image predictor for selected keyframes. Planes from each incoming image are matched to the closest world planes by comparing planar normals, offsets, and plane positions. We compare with our own implementation of sequential RANSAC, applied to meshes from SimpleRecon <cit.>. SimpleRecon (SR in tables) is the same method we use for geometry estimation, as detailed in Sec. <ref>, making this the closest baseline to our method, but without using the benefits of our 3D consistent embeddings. In addition, we also apply the sequential RANSAC method to geometry from <cit.>. See supplementary material for implementation details of the baselines. Our method outperforms all other methods on the segmentation metrics. While the results for the geometric metrics are comparable with the SR <cit.> + RANSAC baseline, we significantly outperform this baseline on the segmentation and planar metrics, clearly demonstrating the benefit of using our 3D consistent embeddings. Surprisingly, PlanarRecon <cit.> is outperformed by several of our sequential RANSAC baselines. This is in contrast with the results presented in <cit.>, and we discuss this difference in more detail in the supplementary material. Our embeddings benefit other geometry methods. To validate the usefulness of our 3D embeddings, we use them in combination with different geometry estimation methods <cit.>. We compare using only 3D geometry versus using 3D geometry plus the embeddings derived from our test-time optimized MLPs without retraining. We show the results for this experiment in Table <ref>. For all methods, we observe that the additional information encoded in the embeddings improves over the baseline of using geometry + RANSAC only. 
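A sketch of the planar fidelity/accuracy protocol described earlier in this section is given below; it assumes points have already been sampled per plane from each mesh, approximates plane size by the number of sampled points, and uses the common convention that completion measures the ground-truth-to-prediction distance and accuracy the reverse.

import numpy as np
from scipy.spatial import cKDTree

def one_way_chamfer(src, dst):
    # mean distance from each point in src to its nearest neighbour in dst
    return cKDTree(dst).query(src)[0].mean()

def planar_metrics(gt_planes, pred_planes, k=20):
    # gt_planes / pred_planes: lists of (M, 3) point clouds, one per plane
    gt_planes = sorted(gt_planes, key=len, reverse=True)[:k]     # k largest ground-truth planes
    fidelity, accuracy = [], []
    for g in gt_planes:
        completions = [one_way_chamfer(g, p) for p in pred_planes]
        j = int(np.argmin(completions))                          # predicted plane matched by best completion
        fidelity.append(completions[j])                          # planar fidelity
        accuracy.append(one_way_chamfer(pred_planes[j], g))      # planar accuracy
    planar_chamfer = 0.5*(np.mean(fidelity) + np.mean(accuracy))
    return np.mean(fidelity), np.mean(accuracy), planar_chamfer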
§.§ Ablations We ablate our method to validate that our contributions lead to higher scores. These results are in Table <ref>. Fused per-pixel embeddings w/o test time optimization is our method but using the embeddings directly from <cit.>, without our 3D distillation. These embeddings are fused as additional channels into the TSDF. Fused per-pixel embeddings with training time multi-view consistency is a variant of our method, where we attempt to train a single feed-forward embedding network which predicts multi-view consistent 3D embeddings directly, without performing test-time optimization. Ours without planar probability is our method but all points are assigned a planar probability of 1, meaning non-planar points are still part of the mesh. For this reason, we do not run the post-processing step that assigns unlabeled points after RANSAC. The first two ablations show that it is not trivial to predict 3D consistent embeddings using a feed-forward network applied to each frame independently. This motivates our use of a per-scene MLP optimized at test time to achieve consistent embeddings. The last of these ablations shows that our fusion of planar probabilities into the TSDF improves geometry metrics. We note that some computational savings could be made, at the price of ∼1% drop in geometry scores, without this. We also compare the two different plane grouping algorithms. Using the Mean-shift variant of our method leads to only a small degradation of results versus RANSAC, while achieving interactive speeds (see Sec. <ref> for timings). RANSAC oracle methods are variants of RANSAC which have access to ground truth semantic and instance information. SR + RANSAC + ground truth semantic labels uses ground truth semantic labels (transferred to the closest vertex in the predicted mesh) in the sequential RANSAC loop to separate planes. Specifically, points can be associated with a plane candidate only if they are geometrically consistent and have the same label. We additionally compare with a RANSAC variant with predicted semantic labels, where we predict N=20 semantic classes and we fuse their probabilities in the TSDF. It is worth noting that explicitly predicting semantics is beneficial and leads to better planar scores compared to its geometry-only counterpart in Table <ref>. However, our method provides better results across all metrics, and requires fusion of only planar probabilities instead of N semantic classes, which might be challenging as N increases. Finally, we also show an oracle with ground truth instance labels, which presents an upper bound for plane estimation. §.§ Qualitative results Fig. <ref> shows results of our method compared to the closest published competitor PlanarRecon <cit.> and our SR <cit.> + RANSAC baseline. We can see that our method has closer fidelity to the ground truth versus <cit.>, and avoids oversimplification of geometry. By more closely adhering to the geometry of the real scene, our planes can appear to have `jagged' edges when compared to the more simplified outputs from <cit.>. Our planar meshes have gaps where planes intersect because we remove triangles that connect vertices from different planes. If needed for a specific application, our outputs could be further post-processed, using <cit.>. A qualitative comparison of our method with the SR <cit.> + RANSAC baseline shows that we are able to recover separate semantic planes that have a common planar geometry. Finally, Fig. 
<ref> shows more results of our method, with images and camera poses from an iPhone running ARKit <cit.>. §.§ Planes at interactive speeds The online variant of our method, which uses mean-shift clustering, takes a total of 152ms per keyframe on average, on an RTX A6000 GPU. This comprises 65ms to obtain the per-pixel depth, planar probability, and 2D planar embedding and separately 1ms for TSDF fusion, 61ms to update the MLP, and 25ms to run the clustering. As the average interval between keyframes in ScanNetV2 is 272ms, our method runs at interactive speeds. Alternatively, for the RANSAC variant, the clustering step takes 131ms for an entire scene. §.§ Limitations Our method shows notable improvements compared to other 3D plane estimation methods, but limitations remain. Errors in the geometry from our MVS system might have severe consequences when extracting 3D planes. We also fit planes in a greedy manner. Instead, global optimization  <cit.> may further improve results. Unlike <cit.>, we only estimate planes for visible geometry. Completing unobserved regions, like <cit.>, could be a useful extension for some applications. § CONCLUSION We propose a new approach which takes a sequence of posed color images as input, and outputs a planar representation of the 3D scene. Surprisingly, we demonstrate that a strong baseline for this task is to simply run sequential RANSAC on a lightweight 3D reconstruction. However, this baseline is likely too limited for AR and robotics use-cases. Our approach addresses this, by training a 3D embedding network to map 3D points to 3D-consistent and meaningful plane embeddings, which can then be clustered into 3D planes. Our approach gives state-of-the-art plane estimation performance on the ScanNetV2 dataset. Acknowledgements. We are extremely grateful to Saki Shinoda, Jakub Powierza, and Stanimir Vichev for their invaluable infrastructure support. ieeenat_fullname
http://arxiv.org/abs/2406.09215v1
20240613151611
On Softmax Direct Preference Optimization for Recommendation
[ "Yuxin Chen", "Junfei Tan", "An Zhang", "Zhengyi Yang", "Leheng Sheng", "Enzhi Zhang", "Xiang Wang", "Tat-Seng Chua" ]
cs.IR
[ "cs.IR", "cs.AI" ]
Cascaded injection locking of optomechanical crystal oscillators Daniel Navarro-Urrios June 17, 2024 ================================================================ § ABSTRACT Recommender systems aim to predict personalized rankings based on user preference data. With the rise of Language Models (LMs), LM-based recommenders have been widely explored due to their extensive world knowledge and powerful reasoning abilities. Most of the LM-based recommenders convert historical interactions into language prompts, pairing with a positive item as the target response and fine-tuning LM with a language modeling loss. However, the current objective fails to fully leverage preference data and is not optimized for personalized ranking tasks, which hinders the performance of LM-based recommenders. Inspired by the current advancement of Direct Preference Optimization (DPO) in human preference alignment and the success of softmax loss in recommendations, we propose Softmax-DPO (S-DPO) to instill ranking information into the LM to help LM-based recommenders distinguish preferred items from negatives, rather than solely focusing on positives. Specifically, we incorporate multiple negatives in user preference data and devise an alternative version of DPO loss tailored for LM-based recommenders, connected to softmax sampling strategies. Theoretically, we bridge S-DPO with the softmax loss over negative sampling and find that it has a side effect of mining hard negatives, which assures its exceptional capabilities in recommendation tasks. Empirically, extensive experiments conducted on three real-world datasets demonstrate the superiority of S-DPO to effectively model user preference and further boost recommendation performance while mitigating the data likelihood decline issue of DPO. Our codes are available at <https://github.com/chenyuxin1999/S-DPO>. § INTRODUCTION Recommender systems aim to predict personalized rankings based on user preference data, historical interactions such as purchases, clicks, and ratings <cit.>. Recently, leveraging the extensive world knowledge and powerful reasoning abilities of language models (LMs) <cit.>, LM-based recommenders have been broadly explored <cit.>. These recommenders convert historical interaction data into language prompts and either perform in-context learning or fine-tune LMs, demonstrating notable advantages, including zero-shot and few-shot reasoning <cit.>, enhanced generalization abilities <cit.>, and rich semantic understanding <cit.>. However, current LM-based recommenders typically utilize language modeling loss for personalized ranking objectives—predicting the next token—which significantly differs from the objective of modeling user preferences in recommendation tasks <cit.>. We argue that the current objective of LM-based recommenders does not fully utilize preference data and is not optimized for personalized ranking tasks, thereby hindering recommendation performance. Most LM-based recommenders address recommendation tasks by leveraging specialized language prompts <cit.>, incorporating collaborative signals as a new modality <cit.>, or extending the vocabulary of LMs with item tokens <cit.>. Typically, these recommenders pair each language prompt, including the user's historical interaction item lists, with a single positive item and then update LM parameters using language modeling loss <cit.>. 
Despite being designed for recommendation tasks, these LM-based recommenders do not consider negative items and are not directly optimized for personalized rankings. Such a training paradigm fails to fully leverage user preference data and overlooks the role of negative items in recommendations, thereby impeding the alignment of LMs with user preferences. Inspired by the success of using human-labeled data to align LMs with human preferences <cit.> and advancements in direct preference optimization (DPO) <cit.>, we make progress on aligning LMs with recommendations by fine-tuning them to predict the next item in accordance with the user's preference. This preference alignment stage aims to instill ranking information into the LMs and help recommenders distinguish preferred items from negatives, rather than solely focusing on positives. Towards this end, we incorporate multiple negatives in user preference data and devise an alternative version of DPO loss tailored for recommendation, connected to softmax sampling strategies <cit.>, which we call S-DPO. Specifically, we first devise supervised fine-tuning to inject domain knowledge and improve LM's ability to follow the instructions before preference alignment phase, following <cit.>. In the preference alignment stage, instead of constructing solely positive pairs, we initially pair each language prompt with both positive and randomly sampled multiple negatives to build text-based preference data. Building upon these preference data, we extend conventional DPO with the Bradley-Terry preference model <cit.> on pairwise data to the Plackett-Luce preference model <cit.>, which handles relative rankings in recommendation tasks. Benefiting from the use of multiple negatives in preference data, our S-DPO offers three appealing properties. On the one hand, S-DPO serves as the first specialized personalized ranking loss for LM-based recommenders, effectively utilizing multiple negatives and acknowledging the importance of preference data. Empirically, we demonstrate that it provides more effective ranking gradients and mitigates the instability associated with DPO training (Section <ref>). On the other hand, we theoretically bridge the DPO loss with the traditional BPR loss <cit.> over pairwise data and connect S-DPO with the softmax loss over negative sampling (also known as contrastive loss in self-supervised recommendations, which achieves state-of-the-art performance <cit.>). This connection naturally underscores the ranking performance of S-DPO and highlights the critical role of multiple negatives. Furthermore, we find that S-DPO has a side effect of mining hard negative examples similar to contrastive learning paradigm <cit.>, which not only boosts the performance but also accelerates the training process (Section <ref>), assuring its exceptional capabilities in recommendation tasks. Overall, our contributions can be concluded as follows: * We are among the first to point out that the widely used language modeling loss in LM-based recommendation is not designed for ranking tasks and fails to fully utilize user preference data, thereby hindering recommendation performance. * We propose S-DPO, an alternative version of DPO loss tailored for LM-based recommenders, incorporating multiple negatives to instill ranking information into LM. * We theoretically bridge S-DPO with the softmax loss over negative sampling to highlight the critical role of multiple negatives and find its side effect of mining hard negatives, assuring its capabilities. 
§ PRELIMINARY In this section, we first formalize sequential recommendation as the task of aligning language models (LMs) with user preferences. Then, we discuss the general framework of current LM-based recommenders that utilizes language modeling loss to fine-tune LMs. Finally, we outline the training process widely used to align LMs with human preferences, including reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO). Task Formulation. Given the historical interactions ℋ_u of one user u in chronological order, the goal of LM-based sequential recommender ℳ_θ, where θ represents trainable parameters, is to select the item i_p preferred by user u from candidate set C={i_j}_j=1^N, where N is the number of candidates. This task requires that item i_p be preferred over the other candidate items, denoted by ℐ_d=C\{i_p}. This requirement explicitly defines a multi-negative preference understanding for LM-based recommenders, which can be formulated as follows: ∀ i_d ∈ℐ_d, i_p >_u i_d, wherein >_u stands for the preference of user u. Fine-tuning LM-based recommenders. Current LM-based recommenders widely adopt supervised fine-tuning (SFT) <cit.> on recommendation-specific data to enhance their performance <cit.>. Generally, this involves two steps: structuring recommendation data as text-based pairs and then fine-tuning LMs based on these pairs. In the first step, for user u, a recommendation task prompt x_u encompasses the user's historical interactions ℋ_u, the candidate item set C, and a description of the sequential recommendation task. This prompt x_u is paired with the title of the preferred item i_p in the candidate set C, denoted as e_p, to form the pair data (x_u, e_p). In the second step, the (x_u, e_p) pairs are utilized to fine-tune the LM-based recommender ℳ_θ through language modeling loss. This loss, commonly used in SFT in language modeling tasks, implicitly treats the recommendation task as predicting the next token based on preceding tokens. Formally, the objective of optimizing the LM-based recommender ℳ_θ with pair data (x_u, e_p) can be formulated as: θ max∑_(x_u,e_p)∑_t=1^|e_p| log(P_θ((e_p)_t|x_u,(e_p)_<t), where |e_p| is the number of tokens in e_p, (e_p)_t is the t-th token of e_p and (e_p)_<t is the tokens preceding (e_p)_t. However, recommendation tasks are essentially user preference alignment tasks, as formalized in the above task formulation, and differ from user modeling tasks that consider only positive responses. Such a gap necessitates further exploration into aligning LM-based recommenders with user preference, an area that has been underexplored. RLHF pipeline and DPO. Recent studies in natural language processing (NLP) explore the use of human-labeled pairwise data as a reward signal to align LMs with human preferences, such as RLHF <cit.> and DPO <cit.>. Specifically, the RLHF <cit.> pipeline adds two additional phases after the SFT phase: reward model training and reinforcement learning (RL) optimization. After obtaining the SFT model π^ SFT, RLHF further optimizes it with pairwise preference data. Inspired by the success of RLHF in NLP, we leverage RLHF to inject recommendation-specific user pairwise preference into LM-based recommenders. Let ℰ={e_j}_j=1^N denote the title set of candidate items, where e_j denotes the title of item i_j. 
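Before turning to RLHF and DPO, a minimal sketch of the language-modeling objective above is given; it assumes a Hugging Face-style causal language model whose forward pass returns logits, and minimising this cross-entropy is equivalent to maximising the stated log-likelihood over the response tokens e_p only.

import torch
import torch.nn.functional as F

def sft_loss(model, prompt_ids, response_ids):
    # prompt_ids: token ids of x_u; response_ids: token ids of the preferred title e_p
    input_ids = torch.cat([prompt_ids, response_ids]).unsqueeze(0)
    labels = input_ids.clone()
    labels[:, :prompt_ids.numel()] = -100                  # prompt tokens are ignored in the loss
    logits = model(input_ids).logits                       # (1, T, vocab), causal LM assumed
    shift_logits = logits[:, :-1, :]                       # token t is predicted from tokens < t
    shift_labels = labels[:, 1:]
    return F.cross_entropy(shift_logits.reshape(-1, shift_logits.size(-1)),
                           shift_labels.reshape(-1), ignore_index=-100)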
Given two items i_j,i_k∈𝒞, the user preference i_j >_u i_k can be seamlessly transformed into a response preference, stipulating that e_j is preferred over e_k when presented with prompt x_u, denoted as e_j ≻ e_k |x_u. By sampling one dispreferred item i_d from the dispreferred candidate set ℐ_d, we can curate a preference dataset {(e_p,e_d,x_u)}. After that, RLHF utilizes a preference model for preference distribution modeling, such as the Bradley-Terry (BT) model <cit.>. This preference model assumes there is a latent function r(x_u,e_j) representing the reward of the prompt-response pair (x_u,e_j). A larger reward r(x_u,e_j) means that user u prefers item i_j more. From this perspective, the reward function r(x_u,e_j) serves as a scoring function that quantifies the preference of user u for item i_j. In addition, the preference model defines a mapping from the reward function r(x_u,e_j) to a preference distribution p_r(e_j ≻ e_k | x_u). Based on this preference distribution, an optimal reward function is trained by maximizing the likelihood of the preference data. The training objective of this phase is as follows: ℒ_ RM=-𝔼_(x_u, e_p,e_d)[ log p_r(e_p≻ e_d|x_u)]. Let π_θ(e|x_u) be the probability that the LM-based recommender ℳ_θ outputs title e given prompt x_u. The final reinforcement learning phase aims to maximize the expected reward of the policy without deviating too far from the reference model, formulating the following objective for the optimal policy: π_θ max 𝔼_x_u∼𝒟,e∼π_θ(e|x_u)[r(x_u,e)] - β𝔻_ KL[π_θ(e|x_u)||π_ ref(e|x_u)], where 𝒟 denotes the distribution of x_u and π_ ref=π^ SFT. A recent study, DPO <cit.>, theoretically proves that the optimal policy of Eq.(<ref>) has the closed form π^*(e|x_u)=1/Z(x_u)π_ ref(e|x_u) exp(1/βr(x_u,e)), which is equivalent to r(x_u,e)=β log π(e|x_u)/π_ ref(e|x_u)+β log Z(x_u), where Z(x_u)=∑_eπ_ ref(e|x_u) exp(1/βr(x_u,e)) is the partition function. By defining p_r(e_p≻ e_d|x_u) as σ(r(x_u,e_p)-r(x_u,e_d)) in Eq.(<ref>) according to the BT model used in RLHF and substituting the term r(x_u,e) in Eq.(<ref>) with Eq.(<ref>), the last two phases of the RLHF pipeline can be equivalently transformed into optimizing the DPO loss below: ℒ_ DPO=-𝔼_(x_u, e_p,e_d)[ log σ(β log π_θ(e_p|x_u)/π_ ref(e_p|x_u)-β log π_θ(e_d|x_u)/π_ ref(e_d|x_u))], wherein σ(x) is the sigmoid function. DPO is able to directly extract the optimal policy from pairwise preference data, making it more practical for preference alignment than RLHF. Nevertheless, DPO and RLHF are usually designed for pairwise preferences. The oversight of other negative items impedes the performance of LM-based recommenders. To bridge the gap, we extend DPO to S-DPO for recommendation tasks, in consideration of multiple negative items. § METHODOLOGY §.§ Derivation of S-DPO loss To align the LM-based recommender ℳ_θ with multi-negative preferences, we first derive the preference distribution and then propose a new loss function called S-DPO (depicted in Figure <ref>). Multi-negative Preference Distribution. As mentioned in Section <ref>, for user u, there is a partial ranking stipulating i_p >_u i_d,∀ i_d∈ℐ_d in sequential recommendation tasks. Let ℰ_d be the titles of the dispreferred items ℐ_d. The aforementioned partial ranking is equivalent to e_p ≻ e_d | x_u, ∀ e_d ∈ℰ_d, from which a multi-negative preference dataset {x_u, e_p, ℰ_d} can be curated in a way analogous to RLHF. For the dataset pairing one preferred item with multiple dispreferred items, we leverage the Plackett-Luce (PL) model <cit.> to build the preference distribution. 
Given prompt x_u, K titles e_1, e_2, ⋯, e_K and a permutation τ:[K]→[K] reflecting user preference, with τ(j) denoting the j-th element of permutation τ, the PL model estimates the probability that the ranking e_τ(1),e_τ(2),⋯,e_τ(K) holds, as: p(τ|e_1,e_2,⋯,e_K, x_u)=∏_j=1^K exp(r(x_u,e_τ(j)))/Σ _l=j^K exp(r(x_u,e_τ(l))). By enumerating all the permutations starting with p and summing their probabilities given by the PL model, the final multi-negative preference distribution p^* can be derived as: p^*(e_p≻ e_d, ∀ e_d ∈ℰ_d | x_u) = exp(r(x_u,e_p))/∑_j=1^K exp(r(x_u,e_j)). For brevity, the complete derivation is delegated to Appendix <ref>. Deriving S-DPO. By substituting the reward function r(x_u,e) in Eq.(<ref>) with Eq.(<ref>), the multi-negative preference distribution can be rewritten as: p^*(e_p≻ e_d, ∀ e_d ∈ℰ_d | x_u) = 1/1+∑_e_d∈ℰ_d exp(β log π(e_d|x_u)/π_ ref(e_d|x_u)-β log π(e_p|x_u)/π_ ref(e_p|x_u)). By plugging the distribution given by Eq.(<ref>) into the reward learning objective in Eq.(<ref>), our S-DPO loss can be formulated for policy π_θ as: ℒ_ S-DPO(π_θ;π_ ref) =-𝔼_(x_u,e_p,ℰ_d)∼𝒟[ log σ(- log ∑_e_d∈ℰ_d exp(β log π_θ(e_d|x_u)/π_ ref(e_d|x_u)-β log π_θ(e_p|x_u)/π_ ref(e_p|x_u)))]. Notably, when the number of candidates N is 2, which means there is only one dispreferred item, S-DPO reduces to DPO. The proof is provided in Appendix <ref>. Gradient Analysis. We conduct a gradient analysis on S-DPO. The gradient of ℒ_ S-DPO with respect to the parameters θ takes the following form: ∇_θℒ_ S-DPO(π_θ;π_ ref) = -β𝔼_(x_u,e_p,ℰ_d)[ σ( log∑_e_d∈ℰ_d exp(g(e_d, e_p, x_u)))_higher weight when reward deviates from preference·[ ∇_θ log π_θ(e_p|x_u)-∑_e_d∈ℰ_d∇_θ log π_θ(e_d|x_u)/∑_e'_d ∈ℰ_d exp(g(e'_d,e_d,x_u))_higher weight when reward is larger] ], wherein g(e_j,e_k,x_u) = r_θ(x_u, e_j)-r_θ(x_u, e_k) and, similar to DPO, r_θ(x_u,e)=β log π_θ(e|x_u)/π_ ref(e|x_u) is the implicit reward function defined by π_θ. See Appendix <ref> for a complete derivation. Recall the DPO gradient below: ∇_θℒ_ DPO(π_θ;π_ ref) = -β𝔼_(x_u, e_p, e_d)[ σ(g(e_d, e_p, x_u))_higher weight when reward is wrong·[ ∇_θ logπ_θ(e_p|x_u)-∇_θ logπ_θ(e_d|x_u) ] ]. Similar to DPO, the gradient of the S-DPO loss increases the likelihood of the preferred item and decreases the likelihood of all the dispreferred items. Each example is also weighted by how much the implicit reward r(x_u,e) deviates from the preference data. However, compared with DPO, S-DPO harnesses information from multiple dispreferred items in this weight. Moreover, S-DPO contains an extra weight term assigned only to the gradients of the dispreferred items. The term 1/∑_e'_d ∈ℰ_d exp(g(e'_d,e_d,x_u))= exp(r_θ(x_u,e_d))/∑_e'_d ∈ℰ_d exp(r_θ(x_u, e'_d)) mirrors the relative reward of item i_d among the dispreferred items. The larger the reward of item i_d is compared with the other dispreferred items, the higher this weight will be. Therefore, for dispreferred items with larger rewards, which can be considered hard negative items, the likelihood will decline more in the next update. This mechanism endows S-DPO with more effectiveness and stability than DPO. The gradient of S-DPO also tends to be larger, enabling S-DPO to converge faster. §.§ Properties of S-DPO In this section, we discuss the structural correlation between DPO and BPR <cit.>, together with S-DPO and the softmax loss <cit.>, which demonstrates the advantage of S-DPO over DPO and the language modeling objective. 
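Before turning to these properties, the following minimal sketch shows how the S-DPO objective in Eq.(<ref>) can be evaluated from sequence-level log-probabilities under the policy and reference models. It is an illustrative NumPy sketch with hypothetical variable names, not the training implementation used in our experiments; in practice the log-probabilities would come from the LM and the loss would be backpropagated.

```python
import numpy as np

def sdpo_loss(logp_pos, ref_logp_pos, logp_negs, ref_logp_negs, beta=1.0):
    """S-DPO loss for a single prompt, following the equation above."""
    # Implicit rewards r_theta(x_u, e) = beta * (log pi_theta - log pi_ref).
    r_pos = beta * (logp_pos - ref_logp_pos)
    r_negs = beta * (np.asarray(logp_negs) - np.asarray(ref_logp_negs))
    # log sum_{e_d} exp(r_d - r_pos), with the usual max-shift for numerical stability.
    z = r_negs - r_pos
    lse = np.max(z) + np.log(np.sum(np.exp(z - np.max(z))))
    # -log sigmoid(-lse) is exactly softplus(lse).
    return np.logaddexp(0.0, lse)

# Toy usage with made-up log-probabilities of one preferred and three dispreferred titles.
print(sdpo_loss(-2.0, -2.5, [-3.0, -4.0, -3.5], [-3.1, -3.9, -3.4]))
```

Writing the loss as a softplus of a log-sum-exp term keeps the computation numerically stable and makes the connection to the softmax loss discussed next explicit.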
For user u, preferred item i_p and one dispreferred item i_d∈ℐ_d, the BPR loss takes the form: ℒ_ BPR=-𝔼_(u,i_p,i_d)[ log σ(f(u,i_p)-f(u,i_d))], wherein f(u,i) represents the preference score of user u for item i. Similarly, given the dispreferred item set ℐ_d, the softmax loss takes the form: ℒ_ softmax=-𝔼_(u,i_p,ℐ_d)[ log σ(- log∑_i_d∈ℐ_d exp(f(u,i_d)-f(u,i_p)))]. Recall the DPO loss in Eq.(<ref>) and the S-DPO loss in Eq.(<ref>). Notably, the term β log π_θ(e|x_u)/π_ ref(e|x_u) is the implicit reward function, denoted by r_θ(x_u,e) in Section <ref>. According to Section <ref>, r_θ(x_u,e) reflects the preference of user u for the item i corresponding to title e. When the reference model has no knowledge about recommendation, π_ ref(e|x_u) is approximately a uniform distribution, and the term r_θ(x_u,e)=β log π_θ(e|x_u)/π_ ref(e|x_u) directly reveals the absolute preference. Hence, r_θ(x_u,e) plays a role similar to f(u,i). From this perspective, DPO and S-DPO can be seen as special cases of the BPR and softmax losses, respectively. This structural correlation indicates that DPO and S-DPO are more suitable for recommendation than the language-modeling loss. Moreover, as the softmax loss works better than the BPR loss in multi-negative scenarios <cit.>, it can be inferred that S-DPO is better tailored for multi-negative user preference alignment than DPO. § EXPERIMENTS In this section, we aim to answer the following research questions: * RQ1: How does S-DPO compare with traditional and LM-based sequential recommendation models in terms of performance? * RQ2: How does the LM-based recommender benefit from the multiple negatives? Baselines. We thoroughly compare S-DPO with three categories of recommenders in sequential recommendation: traditional recommenders (GRU4Rec <cit.>, Caser <cit.>, SASRec <cit.>), LM-enhanced recommenders (MoRec <cit.>) and LM-based recommenders (LLaMA2 <cit.>, Chat-REC <cit.>, TALLRec <cit.>, LLaRA <cit.>). See the detailed introduction and comparison of baselines in Appendix <ref>. Datasets. We conduct extensive experiments on three real-world benchmark datasets which differ in size and domain (MovieLens <cit.>, Goodreads[https://www.goodreads.com], and LastFM <cit.>). Following the standard settings of <cit.>, we employ the commonly used metric Hit Ratio@1 (HR@1) for performance evaluation and an additional metric, Valid Ratio, to evaluate the LM-based methods' ability to generate appropriate responses. See detailed introductions of the datasets and evaluation metrics in Appendix <ref>. Implementation. We implement all LM-based recommenders on 4 NVIDIA A100 GPUs. For all LM-based recommenders, we conduct a supervised fine-tuning stage for a maximum of 5 epochs. For S-DPO and its variants, we conduct a preference alignment stage for a further 3 epochs. Different from existing methods, we only optimize the loss on item titles and find this effective in recommendation tasks. Refer to Appendix <ref> for more implementation details. §.§ Overall Performance Comparison (RQ1) Table <ref> presents a comparative analysis of the performance of our proposed S-DPO and the baselines. "Rel.Ipv" denotes the relative improvement of S-DPO compared with the baselines. Bold and underlined indicate the best and the second-best performance, respectively. We observe that: * LM-based recommenders have driven impressive performance breakthroughs compared with traditional recommenders. 
Our results reveal that traditional recommenders outperform untuned LM-based recommenders (LLaMA, ChatRec) but fall short compared to LM-based recommenders fine-tuned on historical interactions (TALLRec and LLaRA). It is noted that untuned LM-based recommenders are limited by inadequate instruction-following capabilities or a lack of domain-specific knowledge indicated by the low valid ratio and suboptimal performance, which highlights the necessity of the supervised fine-tuning stage to further ground the inherent ability of language models down to sequential recommendation tasks. Moreover, MoRec also exhibits suboptimal performance compared to its traditional variant because it leaves the reasoning ability of LM untouched. The superior performance of recent LM-based recommenders indicates the significant roles of knowledge and reasoning ability in language models for recommendation tasks, which highlights the potential of LM-based recommenders. * Tailoring language model for recommendation task further boosts the performance of LM-based recommenders. For LM-based recommenders, the substantial performance gap between fine-tuned and untuned approaches emphasizes the importance of tailoring models for recommendations. TALLRec adapts LM for recommendation by supervised fine-tuning LM on historical interactions, surpassing traditional recommenders. Additionally, LLaRA consistently outperformed TALLRec across all datasets, suggesting that introducing collaborative signals through appropriate item representations is a viable direction for further adapting LM. However, existing LM-based methods adapt LM from either item representation methods or corpus construction, leaving the adaptation of optimization objectives unexplored. Instead, S-DPO aligns the language model with multi-negative user preference data by extending DPO to include a softmax ranking loss, making it a more appropriate loss function for recommendation tasks. * S-DPO consistently outperforms all traditional recommenders and the state-of-the-art LM-based recommenders on all datasets. S-DPO shows an improvement ranging from 11.10% to 47.03% on Hit Ratio@1 compared to the second-best baseline. Building on a supervised fine-tuning stage, we attribute this further improvement to the preference alignment stage, which explicitly instills ranking information into LM and utilizes preference data with multiple negative samples. Such superior performance suggests that explicitly tailoring LM for recommendation using user preference data at the training objective level is more effective than other LM-based recommenders. By leveraging the inherent abilities of the LM and incorporating ranking information from user preference data, S-DPO effectively differentiates between preferred and less preferred items. Notably, the preference alignment stage hardly harms the inherent ability of LM, illustrated by a high valid ratio. §.§ Study on S-DPO (RQ2) Ablation Study. To investigate the effect of explicit ranking optimization and multiple negative samples of S-DPO, we compare it with Supervised Fine-Tuning(SFT), and a variant of S-DPO with only a single negative sample, downgrading to pairwise DPO loss. The experimental results are reported in Figure <ref>. We can observe that DPO can achieve an overall better performance compared to SFT, which underscores the effectiveness of instilling ranking relationships into existing LM-based recommenders. 
With a more effective ranking gradient provided by multiple negative samples, S-DPO can further boost performance and achieve the best among all baseline methods and variants. Study on number of negative samples. Benefiting from the utilization of multiple negative pairs in preference data, our S-DPO offers two empirically appealing properties compared to DPO: 1) S-DPO has more effective gradients facilitating the optimization; 2) S-DPO mitigates the data likelihood decline issue of DPO. Figure <ref> provides the comparison of validation loss between S-DPO and DPO, illustrating that the loss of S-DPO decreases faster and more significantly. This observation demonstrates that multiple negative pairs provide larger and more meaningful gradients for model optimization, which is attributed to the inherent ability of S-DPO to mine negative samples <cit.> (Section <ref>). More results of loss analysis can be found in Appendix <ref>. On the other hand, it is widely accepted that the optimization of DPO suffers from data likelihood decline issues <cit.>, which implies that the log-likelihood of the preferred completions is reduced below the original log-likelihood from the reference model, hindering the performance of DPO. We, therefore, study the behavior of S-DPO and surprisingly find that it has the potential property of mitigating data likelihood decline issues. As illustrated in Figure <ref>, S-DPO exhibits continually increasing rewards of preferred items, while DPO struggles to increase the reward of preferred items. To further verify the superiority of the multiple negative samples of S-DPO compared with DPO, we conduct experiments to explore the potential of the number of negative samples, with the results depicted in Figure <ref>. It can be observed that utilizing multiple negative samples allows the model to achieve better performance than with a single one. Furthermore, as the number of negative samples increases, the model's performance exhibits continual improvements. We attribute this success of S-DPO to more effective ranking gradients provided by multiple negatives which can be connected to the superior performance of contrastive loss in self-supervised recommendations <cit.>. More results of preferred item rewards can be found in Appendix <ref>. Study on values of β. In S-DPO, β is a hyperparameter controlling the deviation of LM from the base reference policy <cit.>. Typically, a smaller value of β implies that the language model is more heavily influenced by the preference signals and vice versa. In this section, we explore the effect of β on S-DPO. As indicated in Figure <ref> through <ref>, a higher β can achieve overall better performance in our task, while a lower β may overwhelm the model's learned knowledge from the supervised fine-tuning stage, as evidenced by both low valid ratio and hit ratio. On the other hand, an excessively large β prevents the model from effectively learning ranking relationships, leading to suboptimal performance. In all our main experiments and studies, we set β as 1 to achieve a balance between ranking signals and inherent knowledge of language models. § RELATED WORK §.§ LM for Recommendation Recent advancements in recommendation systems have increasingly incorporated Language Models (LMs) due to their extensive knowledge and robust reasoning abilities. This integration occurs primarily in two forms: LM-enhanced recommenders and LM-based recommenders. 
LM-enhanced recommenders utilize LM embeddings as semantic representations to provide contrastive signals <cit.> or utilize LMs as advanced feature extractors to improve the representation of user and item features <cit.>. However, these systems still rely on traditional recommenders for the final recommendation task, which leaves the reasoning ability of LMs largely untouched. On the other hand, LM-based recommenders directly employ LMs for making recommendations. Early works leverage LMs' in-context learning capabilities for zero-shot or few-shot recommendations, demonstrating significant potential <cit.>. However, untuned LM-based recommenders are limited by inadequate instruction-following capabilities and a lack of domain-specific knowledge. To bridge this gap, recent efforts in this category include supervised fine-tuning of LMs on historical interactions to enhance their performance in recommendation tasks <cit.>. More recently, researchers have discovered that exploring item representation methods in the fine-tuning phase may further boost the LM's ability for recommendation <cit.>. This branch of work includes integrating collaborative signals <cit.>, adjusting numeric representations <cit.>, or introducing additional item tokens <cit.>. However, existing fine-tuned methods follow the training objective of language generation without any specific adjustments for personalized ranking. Different from them, S-DPO proposes to explicitly optimize item ranking information on preference data. §.§ Preference Alignment of Language Models Reinforcement Learning from Human Feedback (RLHF) <cit.> is a prevalent method for LMs to learn from human preferences. The RLHF pipeline comprises reward model learning and reinforcement learning (RL) optimization, the latter of which suffers from instability and inefficiency. Direct Preference Optimization (DPO) <cit.> bypasses the brittle RL phase via a particular reward model parameterization and is thus simpler to implement while still keeping the performance of RLHF. DPO proves to be effective in many areas, such as NLP <cit.> and multimodal LMs <cit.>. Besides, several variants have been proposed for further improvement of DPO. ΨPO <cit.> is a generalization of the DPO loss, and its representative IPO can better overcome the problem of overfitting. ODPO <cit.> treats preference pairs differently by stipulating that the likelihood gap of two responses should be greater than a corresponding offset value. Other variants like KTO <cit.>, f-DPO <cit.>, RSO <cit.>, etc. also enhance DPO in various aspects. Despite these contributions, the possibilities for leveraging and further adapting DPO for recommendation are still largely unexplored. Moreover, there are few studies that discuss extending DPO to handle multi-negative scenarios. § LIMITATION Despite its effectiveness, there are several limitations not addressed in this paper. On the one hand, the number of negative samples is capped at 15 in our experiments. The potential of multiple negative samples has not been fully explored due to limited time and computational resources. On the other hand, increasing the number of negative examples inevitably results in higher training costs, a phenomenon that becomes more pronounced as the number of negative examples grows in the context of language models. Finally, despite the empirical success and theoretical connection to the softmax loss in recommendation tasks, a deeper understanding of the softmax ranking loss for LMs remains to be developed. 
§ CONCLUSION In this work, we devised a principled Softmax-DPO (S-DPO) loss specially tailored for LM-based recommenders, utilizing multiple negatives in preference data to explicitly instill ranking information into the LM. Empirically, S-DPO surpasses all baseline models including traditional and LM-based methods on three datasets in sequential recommendation tasks while successfully mitigating the data likelihood decline issue of DPO. Grounded by theoretical proof, we bridge S-DPO with the softmax loss in self-supervised recommendations, underscoring the ranking performance of S-DPO and highlighting the critical roles of multiple negatives. Also, we theoretically find that S-DPO has an inherent ability to mine hard negatives, which provide larger and more effective gradients for model optimization, assuring its exceptional capabilities in recommendation tasks. We believe that our S-DPO, as a generalization of DPO, provides valuable insights for future LM-based recommenders and has the potential to benefit research fields other than recommender systems[The extension to broader impact will be detailed in Appendix <ref>]. § MATHEMATICAL DERIVATIONS §.§ Deriving Preference Distribution The PL model takes the form: p^*(τ|e_1,e_2,⋯,e_K, x_u)=∏_j=1^K exp(r(x_u,e_τ(j)))/Σ _l=j^K exp(r(x_u,e_τ(l))). The ranking in the multi-negative preference data is e_p ≻ e_d | x_u, ∀ e_d ∈ℰ_d. Our new preference distribution that estimates the probability of this ranking can be derived as: p^*(e_p≻ e_d, ∀ e_d ∈ℰ_d | x_u, e_p, ℰ_d) = ∑_τ∈{τ'| τ'(1)=p} p^*(τ|x_u,e_p,ℰ_d) = ∑_τ∈{τ'| τ'(1)=p}∏_j=1^K exp(r(x_u,e_τ(j)))/∑ _l=j^K exp(r(x_u,e_τ(l))) = exp(r(x_u,e_p))/∑_j=1^K exp(r(x_u,e_j))×∑_τ'∈ Per(ind(ℰ_d))∏_j=1^K-1 exp(r(x_u,e_τ'(j)))/∑_l=j^K-1 exp(r(x_u,e_τ'(l))) = exp(r(x_u,e_p))/∑_j=1^K exp(r(x_u,e_j))×∑_τ'∈ Per(ind(ℰ_d))p^*(τ'|x_u,ℰ_d) = exp(r(x_u,e_p))/∑_j=1^K exp(r(x_u,e_j)), wherein ind(ℰ_d) denotes the indices of titles in ℰ_d and Per(ind(ℰ_d)) denotes the set of permutations of the index set ind(ℰ_d). The third equality holds because a permutation of {1,2,⋯,K} starting with p can be divided into the prefix p and a subsequent permutation of the remaining indices, which is exactly Per(ind(ℰ_d)). §.§ Connection Between DPO and S-DPO When N=2, the following equations hold: ℒ_ S-DPO(π_θ;π_ ref) Eq.(<ref>) = -𝔼_(x_u,e_p,ℰ_d)[ log σ(- log ∑_e_d∈ℰ_d exp(β log π_θ(e_d|x_u)/π_ ref(e_d|x_u)-β log π_θ(e_p|x_u)/π_ ref(e_p|x_u)))] = -𝔼_(x_u,e_p,e_d)[ log σ(- log exp(β log π_θ(e_d|x_u)/π_ ref(e_d|x_u)-β log π_θ(e_p|x_u)/π_ ref(e_p|x_u)))] N=2 = -𝔼_(x_u,e_p,e_d)[ log σ(β log π_θ(e_p|x_u)/π_ ref(e_p|x_u)-β log π_θ(e_d|x_u)/π_ ref(e_d|x_u))] = ℒ_ DPO(π_θ;π_ ref). Therefore, DPO is a special case of S-DPO. 
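This reduction can also be checked numerically. The snippet below is a small self-contained sketch (illustrative NumPy, with randomly drawn implicit rewards) confirming that, with a single dispreferred title, the S-DPO loss coincides with the DPO loss:

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)

rng = np.random.default_rng(0)
# Implicit rewards r_theta(x_u, e) = beta * log(pi_theta / pi_ref), drawn at random here.
r_pos, r_neg = rng.normal(size=2)

dpo = softplus(-(r_pos - r_neg))                # -log sigmoid(r_pos - r_neg)
sdpo = softplus(np.log(np.exp(r_neg - r_pos)))  # -log sigmoid(-log sum exp(.)), one negative
print(np.allclose(dpo, sdpo))                   # True: the two losses agree when N = 2
```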
§.§ Deriving the Gradient of S-DPO Loss Let V(θ;e_d)=g(e_d,e_p,x_u)=β log π_θ(e_d|x_u)/π_ ref(e_d|x_u)-β log π_θ(e_p|x_u)/π_ ref(e_p|x_u) and the S-DPO loss takes the following form: ℒ_ S-DPO(π_θ;π_ ref) =-𝔼_(x_u,e_p,ℰ_d)[ log σ(- log ∑_e_d∈ℰ_d exp(V(θ;e_d)))] The gradient of V(θ;e_d) can be formulated as: ∇_θ V(θ;e_d)=β(∇_θ log π_θ(e_d|x_u)-∇_θ log π_θ(e_p|x_u)) Using properties of sigmoid function that σ'(x)=σ(x)(1-σ(x))=σ(x)σ(-x) and thus (( log σ(x))'=1/σ(x)×σ(x)σ(-x)=σ(-x), we have: ∇_θ ℒ_ S-DPO(π_θ;π_ ref) =-𝔼_(x_u,e_p,ℰ_d)[∇_θ log σ(- log ∑_e_d∈ℰ_d exp(V(θ;e_d)))] = 𝔼_(x_u,e_p,ℰ_d)[ σ( log ∑_e_d∈ℰ_d exp(V(θ;e_d)))·∇_θ log ∑_e_d∈ℰ_d exp(V(θ;e_d))] ( logσ(x))'=σ(-x) = 𝔼_(x_u,e_p,ℰ_d)[ σ( log ∑_e_d∈ℰ_d exp(V(θ;e_d))) ·∑_e_d∈ℰ_d exp(V(θ;e_d))·∇_θ V(θ;e_d)/∑_e'_d∈ℰ_d exp(V(θ;e'_d))] =-β𝔼_(x_u,e_p,ℰ_d)[ σ( log ∑_e_d∈ℰ_d exp(V(θ;e_d))) ·∑_e_d∈ℰ_d∇_θ log π_θ(e_p|x_u)-∇_θ log π_θ(e_d|x_u)/∑_e'_d∈ℰ_d exp(V(θ;e'_d)-V(θ;e_d))] Eq. (<ref>) =-β𝔼_(x_u,e_p,ℰ_d)[ σ( log ∑_e_d∈ℰ_d exp(g(e_d,e_p,x_u))) ·∑_e_d∈ℰ_d∇_θ log π_θ(e_p|x_u)-∇_θ log π_θ(e_d|x_u)/∑_e'_d∈ℰ_d exp(g(e'_d,e_d,x_u))] Definition of V(θ;e_d) =-β𝔼_(x_u,e_p,ℰ_d)[ σ( log ∑_e_d∈ℰ_d exp(g(e_d,e_p,x_u))) ·[ ∇_θ log π_θ(e_p|x_u)- ∑_e_d∈ℰ_d∇_θ log π_θ(e_d|x_u)/∑_e'_d∈ℰ_d exp(g(e'_d,e_d,x_u))] ] The last equation is because: ∑_e_d∈ℰ_d1/∑_e'_d∈ℰ_d exp(g(e'_d,e_d,x_u))=∑_e_d∈ℰ_d exp(V(θ;e_d))/∑_e'_d∈ℰ_d exp(V(θ;e'_d))=1 § EXPERIMENTAL SETTINGS §.§ Baselines We compare the performance of S-DPO, against both traditional and LM-based baselines to showcase the effectiveness of our method. Specifically, for traditional methods, we have: * GRU4Rec <cit.> utilizes the GRU (Gated Recurrent Unit) architecture to model sequences, enabling effective prediction in recommendation tasks. * Caser <cit.> employs both horizontal and vertical convolutional operations to enhance the capture of high-order interactions within item sequences, improving recommendation accuracy. * SASRec <cit.> incorporates a multi-head self-attention mechanism in its self-attentive sequential recommendation model, facilitating the modeling of intricate sequential data patterns. For LM-enhanced method, we have: * MoRec <cit.> advances traditional recommendation systems by incorporating the modality features of items instead of the id feature. we employ BERT for the text encoder and SASRec for the recommendation architecture. For LM-based methods, we have: * LLaMA2 <cit.> utilized vanilla LLaMA2-7B to directly generate recommendation results through direct prompting. * Chat-REC <cit.> is implemented based on the framework discussed in <cit.>, we retain user interaction sequences consisting of item titles as use profiles for a fair comparison. We use GPT4 <cit.> as its primary large language model. * TallRec <cit.> first propose to transform interaction sequences into textual prompts and then fine-tunes large language models using domain-specific corpus. * LLaRA <cit.> combines collaborative signals from traditional recommendation systems into the fine-tuning of large language models for improved recommendation performance. §.§ Datasets To evaluate the effectiveness of S-DPO, we conduct experiments on three widely used real-world datasets: Movielens <cit.>, Goodreads[https://www.goodreads.comhttps://www.goodreads.com], and LastFM <cit.>. The statistics of datasets are illustrated in Table <ref>. The MovieLens dataset is widely used for movie recommendation tasks and includes user ratings and movie titles, we select the MovieLens100K dataset in our experiment. 
Similarly, Goodreads is sourced from a social book cataloging website, where users can explore, rate, and review a variety of books. The LastFM dataset comprises users' listening histories and artist names from the Last.fm online music platform. Following <cit.>, we retain item titles as the textual descriptions for each dataset. For Goodreads, we remove users and books with fewer than 20 interactions, consistent with the processing of MovieLens. For all datasets, we organize sequences chronologically before dividing the data into training, validation, and testing sets in an 8:1:1 ratio to prevent any potential information leakage. §.§ Implementation Details We implement all approaches with Python 3.9.7, PyTorch 2.2.2, and transformers 4.39.0 on 4 NVIDIA A100s. We select Llama2-7B <cit.> as the LM backbone for S-DPO. Following <cit.>, we randomly select prompts from several prompt formats during training and evaluation to ensure flexibility and generality. For the optimization of all the traditional methods, the Adam optimizer is employed with a learning rate of 0.001 and a batch size of 256. All models undergo L2 regularization, with coefficients experimentally determined from [1e-3, 1e-4, 1e-5, 1e-6, 1e-7]. In all experiments involving large language models, we train each method for a maximum of 5 epochs using a batch size of 128 and select the checkpoint with the lowest loss on the validation set as the final checkpoint. A warm-up strategy is applied to the learning rate, starting at 5% of its maximum value and gradually adjusting it through a cosine scheduler throughout the training process. For S-DPO and all of its ablation studies, we further conduct preference training for 3 additional epochs with a batch size of 128 and a learning rate of 1e-5. Setting the value of β as 1, we search the number of negative samples in [3,5] for the main results. The effects of both factors are further explored in <ref>. §.§ Evaluation Metrics Given that LMs primarily produce textual responses rather than comprehensive item rankings, we utilize a re-ranking metric in line with previous research <cit.> to assess recommendation performance. For each sequence, a candidate set is constructed by randomly selecting 20 non-interacted items and always includes the correct item. We assess all models based on their ability to pinpoint the correct item within this candidate set, employing the Hit Ratio@1 (HR@1) metric for performance evaluation. Following <cit.>, we also introduce an additional metric called the Valid Ratio to evaluate the LM-based methods' adherence to instructions and their ability to generate appropriate responses. Due to the difficulty LMs face in producing ranked results for candidate items, position-aware metrics like NDCG are deemed unsuitable for this evaluation. § STUDY OF THE NUMBER OF NEGATIVE SAMPLES Figures <ref> through <ref> provide the comparison of validation loss between S-DPO and DPO on MovieLens and Goodreads, illustrating that the loss of S-DPO decreases faster and more significantly. This observation is consistent with Figure <ref>. On the other hand, Figures <ref> through <ref> show that S-DPO exhibits continually increasing rewards for preferred items, while DPO struggles to increase the reward of preferred items on both MovieLens and Goodreads, which is aligned with Figure <ref>. § BROADER IMPACT We leave further exploration of the softmax ranking loss in LMs, including more negative samples and validation in various settings, as future work. 
We believe that S-DPO, a generalization of DPO loss has the potential to benefit other research areas other than recommender systems. This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
http://arxiv.org/abs/2406.09209v1
20240613151412
Acceleration of the Universe without the Hubble tension with Kaniadakis holographic dark energy using the Hubble horizon as the IR cut-off
[ "Wei Fang", "Guo Chen", "Chao-Jun Feng", "Wei Du", "Chenggang Shu" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-th" ]
wfang@shnu.edu.cn Department of Physics, Shanghai Normal University, 100 Guilin Rd, Shanghai 200234, P.R.China The Shanghai Key Lab for Astrophysics, 100 Guilin Rd, Shanghai 200234, P.R.China Co-first author. Department of Physics, Shanghai Normal University, 100 Guilin Rd, Shanghai 200234, P.R.China Corresponding author fengcj@shnu.edu.cn Department of Physics, Shanghai Normal University, 100 Guilin Rd, Shanghai 200234, P.R.China wfang@shnu.edu.cn The Shanghai Key Lab for Astrophysics, 100 Guilin Rd, Shanghai 200234, P.R.China The Shanghai Key Lab for Astrophysics, 100 Guilin Rd, Shanghai 200234, P.R.China § ABSTRACT We introduce a holographic dark energy model that incorporates the first-order approximate Kaniadakis entropy, utilizing the Hubble horizon, 1/H, as the infrared cutoff. We investigate the cosmological evolution within this framework. The model introduces an extra parameter relative to the ΛCDM model. It posits a Universe that is initially dominated by dark matter, which then evolves to a phase where dark energy becomes the predominant component, with this transition occurring at a redshift of approximately z ∼ 0.419. The energy density of dark energy is ultimately expected to become constant, thereby circumventing the potential issue of a "big rip." Employing the most recent Type Ia supernova and Hubble parameter data, we constrain the model's parameters and find a Hubble constant of H_0=72.8 km/s/Mpc, thereby resolving the Hubble tension issue. The estimated age of the Universe, based on the best-fit parameter values, is 14.2 Gyr. Furthermore, we predict the number of strong gravitational lenses and conduct statefinder and Om diagnostic analyses to validate and characterize the model. Acceleration of the Universe without the Hubble tension with Kaniadakis holographic dark energy using the Hubble horizon as the IR cut-off Chenggang Shu Received Month dd, yyyy; accepted Month dd, yyyy ============================================================================================================================================ § INTRODUCTION Observations indicate that the Universe is undergoing accelerated expansion, and many dark energy models have been proposed to explain this phenomenon. The simplest dark energy model is certainly the cosmological constant, known as the ΛCDM model. In this model, the energy density of dark energy is a constant, or equivalently, its equation of state parameter is w = -1. However, this model suffers from a fine-tuning problem, and recent observations suggest that the equation of state parameter of dark energy is not a constant. Therefore, at present, the ΛCDM model may not be sufficient to describe the current cosmological accelerated expansion phenomenon, or at least, it is not that simple. Among the myriad of dark energy models, the holographic dark energy model has emerged as a focal point of interest and academic exploration <cit.>. This model is underpinned by the holographic principle, which asserts that the entropy of the Universe should not surpass that of a black hole of equivalent size. As a result, the energy density, which is proportional to the inverse square of an infrared cutoff ∼ 1/L^2, is constrained. This principle has recently been explored in the context of thermodynamics <cit.>. Furthermore, the Rényi holographic dark energy model, which examines the interaction between dark energy and dark matter in fractal cosmology, is discussed in Ref.<cit.>. In Ref. 
<cit.>, the authors also incorporated viscous fluid, significantly mitigating the issue of the Universe's age. In Ref. <cit.>, the authors reconstructed the f(R) theory model from the holographic dark energy, with recent advancements on the holographic dark energy model available in <cit.> and information on dark energy and modified gravity in Ref.<cit.>. Kaniadakis introduced a single-parameter generalization of the Boltzmann-Gibbs entropy, known as the Kaniadakis entropy, which is defined as: S_k=-k_B∑_in_iln_kn_i , or S_k=-k_B∑_i^WP_i^1+K-P_i^1-K/2K , where P_i represents the probability of a specific microstate of the system, and W denotes the total number of possible configurations. When applied within the context of black hole physics, the entropy takes the form S_k=1/Ksinh (KS_BH) , which, in the approximation where K≪ 1, can be simplified to S_K=S_BH+K^2/6S_BH^3+ 𝒪(K^4) . Drawing from the holographic principle, we can derive the so-called Kaniadakis holographic dark energy (KHDE), whose energy density is expressed as ρ_de = 3(α L^-2 +β̃ L^2) , where L serves as an infrared cutoff. In literature, the future event horizon has frequently been selected as the cut-off boundary, as seen in <cit.>. Specifically, for the KHDE model, L = R_h = a∫_t^∞ ds/a(s) is utilized, a method also referenced in <cit.> within the Brans-Dicke framework. Furthermore, the dynamic characteristics of Kaniadakis holographic dark energy were scrutinized in <cit.>. In another study <cit.>, the authors delved into the applicability of Kaniadakis statistics as a leading paradigm for characterizing intricate systems within the realm of relativity. The justification behind opting for the future event horizon is rooted in the fact that the Hubble horizon, in the initial holographic dark energy model, is inadequate in propelling the Universe's accelerated expansion. Nevertheless, the incorporation of an additional term in Equ. (<ref>) modifies this scenario. In this paper, we utilize the Hubble horizon as the infrared cutoff for the Kaniadakis holographic dark energy, which is derived from the first-order approximation of the dark energy density derived from Kaniadakis entropy. We examine the evolutionary trajectory of the Universe and find that our model successfully accounts for the observed accelerated expansion. After performing the parameter fitting procedure, we also discover that the current Hubble parameter H_0=72.8 km/s/Mpc, hence there is no Hubble tension in this model. For recent works on the Hubble tension, please refer to <cit.>, <cit.>, and <cit.>. The transition from matter dominance to dark energy dominance approximately occurs at a redshift of z ≈ 0.419. The estimated age of the Universe, based on the best-fit parameter values, is 14.2 Gyr. Furthermore, we predict the number of strong gravitational lenses and conduct statefinder and Om diagnostic analyses to validate and characterize the model. In the future, the energy density of dark energy will asymptotically approach a constant, behaving much like the cosmological constant, and hence, there will be no issue of a cosmic "big rip." In the following section, we will delve into the cosmological evolution of the holographic dark energy model and subsequently constrain the model's parameters using empirical data. Moreover, we have derived exact analytical solutions to the Friedmann equations, which we have employed to estimate the age of the Universe. 
Subsequent to this, we predict the number of strong gravitational lenses and examine the statefinder and Om diagnostic parameters. In the final section, we present our conclusions of the findings. § KANIADAKIS HOLOGRAPHIC DARK ENERGY WITH THE HUBBLE HORIZON AS THE IR CUT-OFF In the following, we will consider a flat, homogeneous, and isotropic Universe described by the Friedmann-Robertson-Walker metric ds^2 = -dt^2 + a^2(t)(dr^2+r^2dθ^2 + r^2sin^2θ dϕ ^2) . After taking the Hubble horizon as the IR cut-off, L = 1/H, the energy density of the Kaniadakis holographic dark energy is ρ_de= 3α H^2 +3β̃ H^-2 , where α is a dimensionless constant, β̃ has the dimension of mass^4 and the factor of 3 introduced before the two constants is for convenience. Then the Friedmann equation becomes 3H^2 = ρ_m + ρ_r + 3α H^2 +3β̃ H^-2 , where ρ_m = ρ_m0a^-3 and ρ_r =ρ_r0a^-4 are the energy densities of matter and radiation, respectively, and H is the Hubble parameter. Here and henceforth, we adopt the unit 8π G = 1. In the later stages of the Universe's evolution, radiation can be neglected because it decays much more rapidly than other components. Therefore, in the following, we only consider two components: dark matter and dark energy until we perform the observational constraints. Slightly rearranging the Friedmann equation, we can obtain 3(1-α)H^4 -ρ_m H^2 -3β̃ = 0 . When α = 1 and β < 0, the above equation indicates that the Hubble parameter increases as the Universe expands: H^2= (-3β̃ /ρ_m0 )a^3. This is inconsistent with observations. So we only consider the solution with α≠ 1: H^2 = ρ_m±√(ρ_m^2 +36β̃ (1-α))/6(1-α) . Observations show that the Universe transitions from being matter-dominated to dark energy-dominated. Therefore, the solution above (<ref>) should take a positive sign H^2 = ρ_m0a^-3 + √(ρ_m0^2a^-6 +36β̃ (1-α))/6(1-α) . Thus, to ensure that H^2 > 0, α must be less than 1. From Equ. (<ref>) , it can be seen that in the early Universe when a ≪ 1, H^2 = 1/3ρ_m0a^-3/(1-α). As a increases, meaning the matter component continuously decays, the Universe will tend towards a dark energy-dominated state, where the energy density of this dark energy becomes a constant, i.e. H^2 →√(β̃/(1-α)) , a→∞ . Substituting Equ. (<ref>) into Equ.(<ref>), we obtain ρ_de = α/2(1-α)( √(ρ_m^2 +36β̃ (1-α))+ρ_m ) + 1/ 2 (√(ρ_m^2 +36β̃ (1-α)) - ρ_m ) = 1/2(1-α)√(ρ_m^2 +36β̃ (1-α)) + 2α - 1/2(1-α)ρ_m . From the continuity equation for dark energy, we also get the pressure of the dark energy as follows p_de = -ρ̇_de/3H - ρ_de = 1/2(1-α)[ ρ_m^2 /√(ρ_m^2 +36β̃ (1-α)) -√(ρ_m^2 +36β̃ (1-α))] , where we have used ρ̇_m = -3Hρ_m. Then, in the era when dark matter dominated the energy density of the dark energy evolved asymptotically to ρ_de→αρ_m /(1-α) and it exhibited zero pressure. The equation of state can be obtained w_de = p_de/ρ_de. Using Equ. (<ref>) and taking its derivative, the deceleration parameter of the Universe can be given by the following equation: q = -1- Ḣ/H^2 =-1 + 3/2ρ_m [ ρ_m^2 +36β̃ (1-α) ]^-1/2 . It is clear that as ρ_m decays to zero, the deceleration parameter approaches -1. § OBSERVATIONAL CONSTRAINTS Define a dimensionless Hubble parameter, E^2 = H^2/H_0^2 = Ω_mr(z)+ √(Ω_mr^2(z) +4β(1-α))/2(1-α) , where β = β̃/H_0^4 is a dimensionless constant and Ω_mr (z)= Ω_m0(1+z)^3 + Ω_r0(1+z)^4 , with z the redshift. And we have also defined dimensionless parameters Ω_m0 = ρ_m0/3H_0^2 , Ω_r0 = ρ_r0/3H_0^2 , for matter and radiation respectively. From Equ. 
(<ref>), we obtain 1= Ω_m0+Ω_r0 + α + β , which means there are only three free parameters in this model: Ω_m0, Ω_r0 and α. In the following, we will constrain these parameters using observational data sets based on the Markov Chain Monte Carlo method. We employ the Pantheon compilation of Type Ia supernova (SNIa) data <cit.> and Hubble parameter (H(z)) data points <cit.> to constrain the model parameters. The SNIa dataset encompasses 279 samples from the Sloan Digital Sky Survey (SDSS) and the Supernova Legacy Survey (SNLS), spanning a redshift range of 0.03 < z < 0.68. Additionally, it includes 1048 samples with redshifts ranging from 0.01 < z < 2.3, incorporating the Hubble Space Telescope (HST) samples and various low-z samples. The H(z) data consists of 26 data points derived from Baryon Acoustic Oscillations and 31 data points obtained through the differential age method. For the sake of comparison, we have also conducted an optimization process for the wCDM model, which features a constant equation of state parameter w and has the same number of parameters as the KHDE model. The fitting results for the ΛCDM model are also included in Table <ref>. For a comprehensive account of the fitting methodology, please refer to Refs.<cit.>. Figure <ref> illustrates the constraints on the parameters of the KHDE model at the 2σ confidence level. The constraints on the parameter β can be obtained from Equ.(<ref>) and the fitting results of Ω_m0, Ω_r0 and α as follows: β = 0.686^+0.026_-0.027 . The fitting results show that the parameter α in the KHDE model is non-zero at least within the 2-σ confidence level, while the parameter β is significantly non-zero. Additionally, we find that the fitted value of H_0 = 72.8 km/s/Mpc in this model, which is relatively large, indicating the absence of the Hubble tension in this model. In contrast, the H_0 values in the wCDM and ΛCDM models are smaller, exhibiting varying degrees of the Hubble tension. As can be seen from Figure <ref>, the evolution behavior of the Hubble parameter in the three models is somewhat different, especially at recent and future times. The Hubble parameter in the KHDE model is slightly smaller than that in the ΛCDM model in the early Universe, but at recent times (z=0), it is larger than in the other two models. In the future, it will tend towards a constant, which is also the case in the ΛCDM model. However, for the wCDM model, due to its w < -1, the Hubble parameter will continue to increase. From Figure <ref>, the error-weighted variances χ^2_H for the KHDE, wCDM, and ΛCDM models are 81.65, 234.75, and 160.10, respectively. It should be noted that the results in Figure <ref> slightly differ from those in Ref.<cit.> because Figure <ref> shows the outcome of joint optimization with all data, whereas Figure 1 in Ref.<cit.> displays results using only the Hubble data, without presenting the joint optimization results. In the KHDE model, the equation of state parameter of dark energy evolves from w = 0 in the early Universe to w < -1, and then further to w = -1; its current value is w=-1.109. In Figure <ref>, we compare the evolution under different parameter values. From Equ. (<ref>), we can also obtain the redshift z_EQ at which the energy density of dark matter equals that of dark energy: z_EQ =[ β/2(1-2α) Ω_m0^2]^1/6-1 ≈ 0.419 , with the best-fit values incorporated in the final step of the calculation. § SOLUTION OF THE FRIEDMANN EQUATION AND THE AGE OF THE UNIVERSE From the Friedmann equation Equ. 
(<ref>), we obtain a^1/2[ 1 + √(1 +4β (1-α)Ω_m0^-2a^6)]^-1/2 da = √(Ω_m0/2(1-α))H_0dt . Define x= 2√(β (1-α))Ω_m0^-1a^3 , then we get x^-1/2[ 1 + √(1 +x^2)]^-1/2 dx= 3 (β/1-α)^1/4H_0dt . After integrating on both sides of the above equation, the age of the Universe at time a is given by t(a) = 2/3H_0(β/1-α)^-1/4( -y + tan ^-1y + tanh ^-1y ) , where y = √(1-√(x^2+1)+x/1+√(x^2+1)-x) . Therefore, it yields an estimated age of the Universe of t_0 = 14.2 Gyr based on the best-fit parameters, which satisfies the constraint imposed by the stellar age bound: t_0 > 11 ∼ 12 Gyr. In other words, the age of the Universe should be greater than that of any other celestial bodies within it. § PREDICTION ON THE NUMBER OF STRONG GRAVITATIONAL LENSES Galaxy-galaxy strong gravitational lensing is a powerful tool for constraining astrophysical and cosmological parameters due to its sensitivity to the properties of galaxies as well as the geometry of the Universe. A different dark energy model may predict different expansion history of the Universe and affect the spacial distribution of galaxies along the line-of-sight, thus changing the lensing probability. The on-going Euclid Wide Survey is expected to be able to identify 𝒪(10^5) galaxy-scale lenses, which may help us distinguish between different dark energy models based on the lensing statistics. As a test, we use the analytical model for galaxy-galaxy strong gravitational lensing statistics proposed by <cit.> to calculate the number density of detectable lenses in KHDE, wCDM and ΛCDM models with the best-fit parameter values from Section III. In the calculations, we adopt the velocity dispersion function by <cit.> and the UV luminosity function by <cit.>. The minimum signal-to-noise ratio and magnification of bright image are set to be 20 and 3, respectively. We show in Figure <ref> the number density of identifiable galaxy-scale lenses as a function of magnitude cut for the three dark energy models investigated in this paper. As expected, the number density of detectable lenses increases with the magnitude cut of the survey. More importantly, the differing rising tendencies illustrate the strong dependence of lensing occurrence rate on the parameters of dark energy models. It demonstrates that lens counts, with different magnitude cut in Euclid survey, can provide powerful constraints on cosmological parameters and serve as a complementary tool to distinguish the KHDE model from others. § STATEFINDER DIAGNOSTIC The statefinder diagnostic is utilized through the parameters {q, r, s}, which are deduced from the higher-order derivatives of the scale factor in the context of various dark energy models <cit.>. These parameters are defined as follows: q≡-ä/aH^2, r≡⃛a/aH^3, s≡r-1/3(q-1/2) . For the KHDE model, these parameters are given by q = -1- Ḣ/H^2 =-1 + 3/2Ω_m0 a^-3[ Ω_m0^2a^-6 +4β (1-α) ]^-1/2 . r = 1+9/2Ω_m0^2 a^-6[ Ω_m0^2a^-6 +4β (1-α) ]^-1{ 1-Ω_m0 a^-3[ Ω_m0^2a^-6 +4β (1-α) ]^-1/2} , and s = -Ω_m0^2 a^-6[ Ω_m0^2a^-6 +4β (1-α) ]^-1 . Additionally, the Om diagnostic is another useful tool, defined as: Om(a) ≡E^2(a)-1/a^-3-1 . For the wCDM model, the expression simplifies to: Om(a) = Ω_m0a^-3+(1-Ω_m0)a^-3(1+w)/a^-3-1 = Ω_m0 + (1-Ω_m0)a^-3(1+w)-1/a^-3-1 . In the specific case of the ΛCDM model, with w = -1, Om(a) reduces to a constant, Ω_m0. For the KHDE model, the Om diagnostic takes the form: Om(a) = Ω_m0a^-3-2(1-α)+ √(Ω_m0^2a^-6 +4β(1-α))/2(1-α)(a^-3-1) . 
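The diagnostics above are straightforward to evaluate numerically. The sketch below computes E^2(z), the deceleration parameter q(z) (radiation neglected, as in the statefinder expressions), and the Om(a) diagnostic for the KHDE model; the parameter values are illustrative placeholders rather than the best-fit values of Table <ref>, with β fixed by the flatness constraint of Equ.(<ref>).

```python
import numpy as np

# Illustrative parameter values only (the actual best-fit values are given in Table <ref>);
# beta follows from the flatness constraint 1 = Omega_m0 + Omega_r0 + alpha + beta.
Om0, Or0, alpha = 0.25, 1e-4, 0.064
beta = 1.0 - Om0 - Or0 - alpha

def E2(z):
    """Dimensionless Hubble rate squared E^2(z) of the KHDE model."""
    omr = Om0 * (1 + z)**3 + Or0 * (1 + z)**4
    return (omr + np.sqrt(omr**2 + 4 * beta * (1 - alpha))) / (2 * (1 - alpha))

def q(z):
    """Deceleration parameter (radiation neglected, as in the statefinder expressions)."""
    rho = Om0 * (1 + z)**3
    return -1 + 1.5 * rho / np.sqrt(rho**2 + 4 * beta * (1 - alpha))

def Om_diag(a):
    """Om(a) diagnostic of the KHDE model (undefined at a = 1)."""
    return (Om0 * a**-3 - 2 * (1 - alpha)
            + np.sqrt(Om0**2 * a**-6 + 4 * beta * (1 - alpha))) / (2 * (1 - alpha) * (a**-3 - 1))

z = np.linspace(0.0, 3.0, 7)
print(np.round(E2(z), 3), np.round(q(z), 3))
print(np.round(Om_diag(1.0 / (1.0 + z[1:])), 3))   # a = 1 excluded
```

Substituting the best-fit parameters into the same functions reproduces the behaviour discussed below.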
The evolutions of these diagnostics are illustrated in Figure <ref>, from which it is apparent that there are substantial theoretical differences in the values of these parameters between the KHDE model and the ΛCDM model. However, ultimately, the KHDE model will behave in a manner similar to the cosmological constant model. Moreover, from Equ. (<ref>), it can be inferred that the transition from decelerated to accelerated expansion occurred approximately at a redshift of z ≈ 0.843. § CONCLUSIONS In this paper, we introduce a novel dark energy model known as the KHDE model, an acronym for Kaniadakis Holographic Dark Energy. This model draws inspiration from the holographic principle and employs the Kaniadakis entropy as a bound to prevent the Universe from collapsing into a black hole. Departing from convention, we utilize the Hubble horizon, 1/H, as the infrared cutoff, which successfully accounts for the observed acceleration of the Universe at the current epoch. The KHDE model incorporates just one additional parameter compared to the ΛCDM model. Our analysis, informed by observational constraints, reveals a Hubble constant of H_0 = 72.8 km/s/Mpc, thereby addressing the Hubble tension problem. The transition from dark matter dominance to dark energy dominance occurs at a redshift of around z ∼ 0.419. Based on the optimal parameter values, we estimate the age of the Universe to be 14.2 billion years. Additionally, we predict the number of strong gravitational lenses and perform statefinder and Om diagnostic analyses to further validate and delineate the model's characteristics. The KHDE model possesses several distinctive features: 1) it does not suffer from the Hubble tension problem; 2) it has only one more parameter than ΛCDM model; 3) the equation of state parameter w crosses -1; and 4) the Universe ultimately evolves into a cosmological constant model, thus avoiding the 'big rip' problem. In fact, although our starting point is the approximate Kaniadakis entropy and the holographic principle, we can still ask whether there exists a fundamental theory of gravity in which the Friedmann equations given in Equ. (<ref>) are derived. We leave this question for subsequent research. § ACKNOWLEDGMENTS CJF acknowledges the support from NSFC grants No.11105091. WD acknowledges the support from NSFC grants No. 11890691. unsrt
http://arxiv.org/abs/2406.08345v1
20240612155348
Sequential MAP Parametric OFDM Channel Estimation for Joint Sensing and Communication
[ "Enrique T. R. Pinto", "Markku Juntti" ]
eess.SP
[ "eess.SP" ]
Sequential MAP Parametric OFDM Channel Estimation for Joint Sensing and Communication Enrique T. R. Pinto and Markku Juntti Centre for Wireless Communications (CWC), University of Oulu, Finland {enrique.pinto, markku.juntti}@oulu.fi Received ... ======================================================================================================================================================= § ABSTRACT Uplink sensing is still a relatively unexplored scenario in integrated sensing and communication which can be used to improve positioning and sensing estimates. We introduce a pilot-based maximum likelihood and maximum a posteriori parametric channel estimation procedure using an orthogonal frequency division multiplexing (OFDM) waveform in uplink sensing. The algorithm is capable of estimating the multipath components of the channel, such as the angles of arrival and departure, the path coefficients, and the delay and Doppler terms. As an advantage, when compared to other existing methods, the proposed procedure presents expressions for exact alternating coordinate updates, which can be further improved to achieve a competitive multipath channel estimation tool. channel estimation, OFDM, uplink, sensing § INTRODUCTION Radio-based sensing is being intensively studied for the purposes of joint sensing and communication (JSC). Exploiting the existing cellular infrastructure to perform sensing of passive devices, localization of active users, and mapping of the environment is not only economically attractive but also technically useful. Sensing, positioning, and environment data can not only be used to enhance mobile communications by improving power allocation, beamforming, and user scheduling, but can also serve other systems such as autonomous vehicles and urban infrastructure by providing information for accident prevention, traffic flow optimization, etc. As wireless communications standards progressively incorporate higher frequency ranges into their spectrum, such as FR2 in the 5G standard and also the very likely inclusion of subTHz bands in B5G and 6G, high mobility scenarios provide shorter and shorter channel coherence times. In these cases, channel state information (CSI) acquisition becomes a non-trivial problem, as channel estimates quickly become outdated due to Doppler shifts; thus, estimating only the channel matrix ceases to be an effective option. Extracting geometrical propagation information and using it as a deterministic (or hybrid) channel model can be a useful method <cit.>, especially because it paves the way for channel prediction and environment sensing/mapping. If the propagation parameters of each multipath are well estimated, the line between sensing/mapping and channel estimation becomes blurred; these values allow us to approximately reconstruct the channel with a deterministic model instead of consigning propagation phenomena to stochastic terms. Furthermore, they provide essential information for JSC, which can be used to detect passive sensing targets, map the environment, and enhance the position estimates of users. In this paper, we propose a sequential MAP parametric channel estimation method for extracting the parameters of each multipath component of the channel in the context of bistatic uplink sensing. The most popular solution in non-real-time channel modelling applications is the SAGE algorithm <cit.>. 
While generally successful, its alternating coordinate descent often rely on line-search procedures. This can limit its applicability in real-time scenarios. Other existing algorithms use the CANDECOMP/PARAFAC-decomposition (CP-decomposition) for a similar channel estimation procedure <cit.>. However, they do not immediately exploit the structure of the channel tensor. Furthermore, the CP-decomposition is computationally expensive and outputs the best fitting rank K decomposition of the input tensor, requiring further processing for extracting channel parameters. In contrast, our proposed algorithm immediately outputs the channel parameters and exploits the channel model structure when computing their estimates, while also providing expressions for exact coordinate updates. This makes way for future work on improved channel estimation techniques that can further optimize the speed and accuracy of the channel parameter estimation process. The rest of the paper is structured as follows. In Section <ref>, we introduce the model considered in this paper. Then, in Section <ref>, we present the chosen estimation approach. In Section <ref>, we introduce the necessary background for the optimization algorithm that is proposed in Section <ref>. Finally, we analyse some numerical results in Section <ref> and make our concluding remarks in Section <ref>. § SYSTEM MODEL Consider the following OFDM uplink received signal model <cit.> 𝐲_n,t = ∑^L_ℓ=1 b_ℓ e^-j2π n (τ_ℓ+τ_o)f_c e^j2π t (f_D,ℓ + f_o) T_s ·𝐚(ϕ_ℓ) 𝐚^T(θ_ℓ) 𝐱_n,t + 𝐰_n,t, where n and t denote the OFDM subcarrier and symbol index, respectively; 𝐲_n,t is the signal received by the BS at the nth subcarrier and tth symbol; L is the number of multipath components; b_ℓ is the ℓth path gain; τ_ℓ is the propagation delay of the ℓth multipath; τ_o is the clock timing offset between the UE and the BS; f_c is the subcarrier spacing B/N_c, where B is the bandwidth; f_D,ℓ is the Doppler frequency of the ℓth multipath; f_o is the CFO of between UE and the BS; T_s is the OFDM symbol length; 𝐚(ϕ/θ) is the ULA response vector with N_r/N_t antennas and angle of arrival/departure ϕ/θ, given by 𝐚(ϕ/θ) = [ 1 e^-jπsin(ϕ/θ) ⋯ e^-jπ(N_R/T -1) sin(ϕ/θ) ]^T, where “ϕ/θ" here denotes “either ϕ or θ"; 𝐱_n,t is the transmitted pilot at the nth subcarrier and tth symbol; and finally 𝐰_n is AWGN at the nth subcarrier and tth symbol with covariance N_0 𝐈_N_r. Because the signal is transmitted by the UE, this scenario is called uplink sensing. Other variations of the uplink sensing also exist, those are based on setting up UE, synchronized and with shared oscillator signals, deployed specifically for sensing. The model in (<ref>) is general nonetheless, the dedicated UE scenario is readily obtained by setting the offsets to zero. In a communications context, we are usually exclusively interested in the composited values of the channel matrices 𝐇_n,t = ∑^L_ℓ=1 b_ℓ e^j ω_1,ℓ n e^j ω_2,ℓ t𝐚(ϕ_ℓ) 𝐚^T(θ_ℓ), where ω_1,ℓ =-2π(τ_ℓ+τ_o)f_c and ω_2,ℓ = 2π(f_D,ℓ + f_o) T_s. However, in radio-based sensing and localization we are interested in estimating the sensing parameters (b, τ, f_D, ϕ, θ). Furthermore, the timing and frequency offset parameters ξ_o = (τ_o, f_o) are important to be estimated, because they lead to ranging and speed estimation ambiguity. It may be assumed that the offsets are the same for all the antennas, because the signal from the LO is shared within the radio chains of an UE. 
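To make the data model concrete, the following sketch generates synthetic received pilots y_{n,t} according to the signal model above for a small uniform-linear-array setup. All dimensions, pilot symbols, and channel parameters are illustrative assumptions (the offsets τ_o and f_o are set to zero), not the configuration used in the numerical results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions and channel parameters (not the values used in the paper).
Nc, Ns, Nr, Nt, L = 16, 8, 8, 4, 3            # subcarriers, symbols, rx/tx antennas, paths
fc, Ts, N0 = 120e3, 8.9e-6, 1e-2              # subcarrier spacing, symbol length, noise power
b   = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)   # path coefficients
tau = rng.uniform(0, 1e-6, L)                 # delays (offsets assumed zero here)
fD  = rng.uniform(-500, 500, L)               # Doppler frequencies
phi = rng.uniform(-np.pi / 2, np.pi / 2, L)   # angles of arrival
the = rng.uniform(-np.pi / 2, np.pi / 2, L)   # angles of departure

def ula(angle, n_ant):
    """ULA response vector a(angle)."""
    return np.exp(-1j * np.pi * np.arange(n_ant) * np.sin(angle))

# QPSK pilots x_{n,t} of shape (Nc, Ns, Nt)
x = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=(Nc, Ns, Nt)) + 1j * np.pi / 4)

n_idx, t_idx = np.arange(Nc)[:, None], np.arange(Ns)[None, :]
y = np.zeros((Nc, Ns, Nr), dtype=complex)
for l in range(L):
    phase = np.exp(-1j * 2 * np.pi * n_idx * tau[l] * fc) * np.exp(1j * 2 * np.pi * t_idx * fD[l] * Ts)
    s = x @ ula(the[l], Nt)                   # a(theta)^T x_{n,t}, shape (Nc, Ns)
    y += b[l] * phase[..., None] * s[..., None] * ula(phi[l], Nr)[None, None, :]
y += np.sqrt(N0 / 2) * (rng.normal(size=y.shape) + 1j * rng.normal(size=y.shape))
```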
In this work, we do not tackle the estimation of the offsets; instead, we focus exclusively on estimating ξ_ℓ = (b_ℓ, ω_1,ℓ, ω_2,ℓ, ϕ_ℓ, θ_ℓ) ∀ℓ. The offsets remain a nuisance, and additional estimation methods would be required to identify them if the times-of-flight or Doppler frequencies are of interest. § MAXIMUM A POSTERIORI ESTIMATION Define 𝐲=vect(y_n,t,u), where vect(·) denotes the tensor vectorization operation, and denote by ξ the vector of sensing parameters ξ_ℓ for all detected paths; then the posterior of ξ given the data 𝐲 is p(ξ|𝐲) = p(𝐲|ξ) p(ξ)/p(𝐲) = ∏_n,t,u p(y_n,t,u|ξ) p(ξ)/∫_Ξ∏_n,t,u p(y_n,t,u|ξ')p(ξ') dξ', where Ξ denotes the parameter space and p(ξ) denotes the prior for ξ. Throughout the remainder of this paper, summations and products over n/t/u/v go from 0 to N_c/s/r/t-1, unless otherwise indicated. The MAP estimate is then given by ξ̂ = _ξ p(ξ|𝐲) = _ξ∏_n,t,u p(y_n,t,u|ξ) p(ξ), since the denominator of (<ref>) is a constant. The conditional PDF of the data is complex normal y_n,t,u|ξ∼𝒞𝒩(μ_n,t,u(ξ) ,N_0 ), where the mean is given by μ_n,t,u(ξ) = ∑^L_ℓ=1 b_ℓ e^j ω_1,ℓ n e^j ω_2,ℓ t e^-jπ u sin(ϕ_ℓ)𝐚^T(θ_ℓ)𝐱_n,t. The priors are assumed to be independent. Given the likelihood and prior, the log-posterior is log p(ξ|𝐲) = -1/N_0∑_n,t,u| y_n,t,u - μ_n,t,u(ξ) |^2 + ∑^L_ℓ=1log (p(b_ℓ)p(ω_1,ℓ)p(ω_2,ℓ)p(ϕ_ℓ)p(θ_ℓ)) + …, where we have omitted the constant terms. § OPTIMIZATION PRELIMINARIES We write the MAP estimation as a constrained minimization problem min_ξ [ 1/N_0∑_n,t,u| y_n,t,u - μ_n,t,u(ξ) |^2 - log p(ξ) ] s.t. ∠ b_ℓ, ω_1,ℓ, ω_2,ℓ∈ (-π,π); ϕ_ℓ, θ_ℓ∈( -π/2, π/2) ∀ℓ. The objective function is clearly nonconvex over ξ and is 5L-dimensional, which can be quite high if there are many multipaths. For this reason, simple local descent methods, such as gradient descent and its variations, are not effective. Additionally, the objective function computation can be quite expensive if the number of receive antennas, subcarriers, and OFDM symbols is large. The computational cost of objective function evaluation makes many global optimization methods, such as particle swarm and simulated annealing, extremely time consuming before an acceptable solution is achieved. One technique that is successful for this problem is an augmented form of AECD; further details are provided in Section <ref>. To perform exact coordinate descent, we require that the gradient along the chosen coordinate direction be equal to zero, e.g., for the angle of arrival of path ℓ' we have ∂ f/∂ϕ_ℓ' = 0, where f denotes the objective function in (<ref>). Breaking down the objective function into the sum of the log-likelihood and the log-prior terms, respectively, we have f(ξ,𝐲)=log p(𝐲|ξ)+log p(ξ). We will show that the partial derivatives of the log-likelihood term with respect to ϕ_ℓ', θ_ℓ', ω_1,ℓ', and ω_2,ℓ' are given by Fourier series. Each series has as many terms as the size of that parameter's associated dimension, e.g., ∂log p(𝐲|ξ)/∂ϕ_ℓ' has N_r terms, ∂log p(𝐲|ξ)/∂ω_1,ℓ' has N_c terms, and so on. The roots of the resulting series (including the additional prior term) will be candidate solutions for the coordinate descent update. The Fourier series root-finding problem can be turned into a companion matrix eigenvalue problem <cit.>; we can thus readily find all roots by applying a transformation to the computed eigenvalues.
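As a concrete illustration of this root-finding step, the sketch below converts a truncated series ∑_n a_n cos(nω) + b_n sin(nω) into a polynomial in z = e^{jω} and recovers all roots in (-π, π]; numpy.roots internally solves exactly the companion-matrix eigenvalue problem mentioned above. The function name and tolerance are our own choices.

```python
import numpy as np

def fourier_series_roots(a, b, tol=1e-6):
    """Roots of f(w) = sum_n a[n]*cos(n*w) + b[n]*sin(n*w) on (-pi, pi].

    Substituting z = exp(1j*w) and multiplying by z^(N-1) turns f into a
    degree-(2N-2) polynomial whose unit-circle roots give the sought angles;
    numpy.roots finds them as eigenvalues of the companion matrix."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    N = len(a)
    c = (a - 1j * b) / 2.0                    # coefficient of z^n (conjugate for z^-n)
    p = np.zeros(2 * N - 1, dtype=complex)    # p[k] multiplies z^k
    for n in range(N):
        p[N - 1 + n] += c[n]
        p[N - 1 - n] += np.conj(c[n])
    z = np.roots(p[::-1])                     # highest-degree coefficient first
    z = z[np.abs(np.abs(z) - 1.0) < tol]      # keep (numerically) unit-modulus roots
    return np.sort(np.angle(z))
```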
Finally, we evaluate the objective on all the roots and select the one with smallest value.We now present the partial derivatives of log p(𝐲|ξ) over the ϕ_ℓ', θ_ℓ', ω_1,ℓ', and ω_2,ℓ' coordinates. We omit the derivation for space constraints. Over the following section, some indices will be arbitrarily moved from subscript to superscript in order to save space. Additionally we denote the transmitted signal at transmit antenna v as x^v_n,t. §.§ Partial Derivative over ω_1,ℓ' and ω_2,ℓ' The partial derivative over ω_1,ℓ' is given by ∂log p(𝐲|ξ)/∂ω_1,ℓ' = ∑^N_c-1_n=0 a_ncos(ω_1,ℓ' n) + b_n sin(ω_1,ℓ' n) a_n = 2n/N_0∑_t,uα^u_ℓ',n,t( y^u,*_n,t - ∑_ℓ≠ℓ' e^-jω_1,ℓ nα^u,*_ℓ,n,t) b_n = 2n/N_0∑_t,uα^u_ℓ',n,t( y^u,*_n,t - ∑_ℓ≠ℓ' e^-jω_1,ℓ nα^u,*_ℓ,n,t) α^u_ℓ,n,t = b_ℓ e^jω_2,ℓt e^-jπ usin(ϕ_ℓ)𝐚^T(θ_ℓ) 𝐱_n,t. The partial derivative over ω_2,ℓ' is similar, by symmetry. §.§ Partial Derivative over sin(ϕ_ℓ') For ϕ_ℓ', we take the derivative over sin(ϕ_ℓ') and exploit the bijectivity of the sine function over the (-π/2,π/2) range to compute the value of ϕ_ℓ' that satisfies ∂log p(𝐲|ξ)/∂sin(ϕ_ℓ')=0 with smallest objective value. The partial derivative is given by ∂log p(𝐲|ξ)/∂sin(ϕ_ℓ') = ∑^N_R-1_u=0 a_ucos(π usin(ϕ_ℓ')) + b_u sin(π usin(ϕ_ℓ')) a_u = 2u/N_0∑_t,nα^*_ℓ',n,t( y^u_n,t - ∑_ℓ≠ℓ' e^-jπ usin(ϕ_ℓ)α_ℓ,n,t) b_u = 2u/N_0∑_t,nα^*_ℓ',n,t( y^u_n,t - ∑_ℓ≠ℓ' e^-jπ usin(ϕ_ℓ)α_ℓ,n,t) α_ℓ,n,t = b_ℓ e^jω_1,ℓn e^jω_2,ℓt𝐚^T(θ_ℓ) 𝐱_n,t. §.§ Partial Derivative over sin(θ_ℓ') Once again, exploiting the injectivity of the sine function, we get ∂log p(𝐲|ξ)/∂sin(θ_ℓ') = ∑^N_T-1_v=0 a_v cos(π vsin(θ_ℓ')) + b_v sin(π vsin(θ_ℓ')), where the coefficients are given by a_v = 2/N_0∑_n,t,u v α_n,t,u,v and b_v = -2/N_0∑_n,t,u v β_n,t,u,v, which in turn are expressed in terms of α_n,t,u,v, given by α_n,t,u,v =γ^u_ℓ',n,tx^v_n,t + γ^u_ℓ',n,tx^v_n,t + |α^u_ℓ',n,t|^2∑^N_T-1_k=v x^k_n,tx^k-v,*_n,t -y^u,*_n,tα^u_ℓ',n,tx^v_n,t; and β_n,t,u,v. For v=0: β_n,t,u,0 = γ^u_ℓ',n,tx^v_n,t - γ^u_ℓ',n,tx^v_n,t +|α^u_ℓ',n,t|^2/2∑^N_T-1_k=0 |x^k_n,t|^2 -y^u,*_n,tα^u_ℓ',n,tx^v_n,t and, for v=1,…,N_t-1: β_n,t,u,v = γ^u_ℓ',n,tx^v_n,t - γ^u_ℓ',n,tx^v_n,t + |α^u_ℓ',n,t|^2∑^N_T-1_k=v x^k_n,tx^k-v,*_n,t -y^u,*_n,tα^u_ℓ',n,tx^v_n,t. §.§ Optimization over b_ℓ' We assume a complex normal prior for b_ℓ', with mean b_ℓ' and variance ν_b_ℓ'. It can be seen that f is convex over b_ℓ'. Using Wirtinger calculus, we can derive closed form expressions for the exact coordinate update on b_ℓ', for a single ℓ' (even though a closed form joint update for b_ℓ∀ℓ exists by solving a linear system). We once again omit the derivation, presenting only the result b^opt_ℓ' = ν_b_ℓ'∑_n,t,uγ^u,*_ℓ',n,t( y^u_n,t - ∑_ℓ≠ℓ' b_ℓγ^u_ℓ,n,t) + N_0 b_ℓ'/ν_b_ℓ'∑_n,t,u |γ^u_ℓ',n,t|^2 + N_0, where γ_ℓ,n,t,u = e^jω_1,ℓn e^jω_2,ℓt e^-jπ u sin(ϕ_ℓ)𝐚^T(θ_ℓ) 𝐱_n,t. §.§ Priors Because we want to preserve the Fourier series structure of the partial derivatives, we must choose priors which have derivatives that can be directly incorporated into a Fourier series. For ω_1.ℓ, we consider the following prior distribution p(ω_1.ℓ) ∝exp( -|e^jω_1,ℓn - e^j ω_1,ℓ n|^2/ν_ω_1,ℓ), where ω_1,ℓ∈(-π,π) and ν_ω_1,ℓ>0 respectively denote the mode and variance parameter. Note that, while the mode of the distribution is indeed equal to ω_1,ℓ, the variance is merely an increasing function of ν_ω_1,ℓ. A similar prior is used for ω_2.ℓ. For ϕ_ℓ we use p(ϕ_ℓ) ∝exp( -|e^jπsin(ϕ_ℓ) - e^jπsin(ϕ_ℓ)|^2/ν_ϕ_ℓ), with mode and variance parameters similarly defined. The proposed prior for θ_ℓ is identical. 
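For concreteness, the closed-form path-gain update stated above can be sketched as follows; the dictionary field names (w1, w2, phi, theta, b) and array shapes follow the earlier sketch of the signal model and are our own conventions, not those of the implementation used in the experiments.

```python
import numpy as np

def update_path_gain(y, x, paths, ell, N0, b_prior_mean, b_prior_var):
    """Exact MAP coordinate update of b_ell under its complex normal prior.

    y: received pilots, shape (N_c, N_s, N_r); x: pilots, shape (N_c, N_s, N_t).
    paths: list of dicts with keys b, w1, w2, phi, theta (current estimates)."""
    N_c, N_s, N_r = y.shape
    n = np.arange(N_c)[:, None]
    t = np.arange(N_s)[None, :]

    def gamma(p):  # gamma_{l,n,t,u}: the path response with unit gain
        phase = np.exp(1j * p["w1"] * n) * np.exp(1j * p["w2"] * t)
        a_rx = np.exp(-1j * np.pi * np.arange(N_r) * np.sin(p["phi"]))
        s = x @ np.exp(-1j * np.pi * np.arange(x.shape[2]) * np.sin(p["theta"]))
        return (phase * s)[..., None] * a_rx                  # (N_c, N_s, N_r)

    g = gamma(paths[ell])
    resid = y - sum(p["b"] * gamma(p) for k, p in enumerate(paths) if k != ell)
    num = b_prior_var * np.vdot(g, resid) + N0 * b_prior_mean
    den = b_prior_var * np.vdot(g, g).real + N0
    return num / den   # as the prior variance grows, this tends to the LS update
```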
The path gain coefficient prior has been already introduced in Subsection <ref>. In sequential estimation, the mode of the current estimation step corresponds to the point estimates of the previous step, the variance however must be heuristically chosen. §.§ Partial Derivative of the Priors The presented partial derivatives include only the log-likelihood term. We must add the log-prior to have the complete objective. The derivative of the log-prior of ϕ_ℓ' is ∂log p(ξ)/∂sin(ϕ_ℓ') = -2π/ν_ϕ_ℓ'sin(πsin(ϕ_ℓ'))cos(πsin(ϕ_ℓ')) + 2π/ν_ϕ_ℓ'cos(πsin(ϕ_ℓ')sin(πsin(ϕ_ℓ')). A similar equation applies for θ_ℓ', by symmetry. For ω_1,ℓ' we have ∂log p(ξ)/∂ω_1,ℓ' = 2cos(ω_1,ℓ')sin(ω_1,ℓ')/ν_ω_1,ℓ' - 2sin(ω_1,ℓ')cos(ω_1,ℓ')/ν_ω_1,ℓ' . Again, the expression for ω_2,ℓ' follows by symmetry. By adding these terms to the partial derivatives of the log-likelihood term we get the partial derivative of the objective. § OPTIMIZATION PROCEDURE For the inference problem above, the gradient or coordinate descent methods by themselves are ineffective in providing acceptable solutions. Also, due to the dimensions and evident nonconvexity of the optimization problem, proving optimality of the solutions is hard. To achieve a useful feasible solution, we propose an AECD method, in which the parameters for a single multipath index are optimized in an exact alternating fashion in an inner loop, while the outer loop varies the current multipath index. Because the exact coordinate descent is still a local descent method, we augment it with a combination of momentum and a SOR inspired coordinate update, this is essential to escape local optima and improve the estimation results.Let us first detail the outer and inner loop structure. First, a maximum number of expected paths L_max is defined. This number should be surely larger than the possible number of detectable paths, i.e., paths with power that is not much smaller than the noise variance, and depends heavily on the propagation characteristics of the environment. The outer loop progresses along path indices in the following order: 𝐈 = [1, 2, 1, 2, 3, 1, 2, 3, 4, …, L_max-1, L_max,1,…,L_max]. Intuitively, after the first path is detected and roughly estimated, the algorithm moves on to detect the next path. Once the next path is detected and estimated, then the algorithm returns to the first path such as to “compensate the interference" of the previously undetected second path when estimating the first path. This reasoning proceeds until hopefully all paths up to L_max have been estimated. If at some point of the outer loop no more paths remain, then the algorithm starts outputing spurious paths, which have no physical correspondence. This means that choosing a large value for L_max has a time cost, as the algorithm would have to estimate many spurious paths before finishing. It is convenient to devise a procedure to detect when all true paths have already been detected. Moving on to the inner loop. Suppose that the current path at the outer loop is ℓ', then, in a single iteration, the coordinates are updated in the following order: b_ℓ', ω_1,ℓ', b_ℓ', ω_2,ℓ', b_ℓ', θ_ℓ', b_ℓ', ϕ_ℓ'. The inner loop is repeated for a maximum set amount of iterations it_max. Updating the path coefficient b_ℓ' in-between the other coordinates apparently provides more efficient updates. 
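For reference, the two orderings just described can be generated as follows; L_max and the coordinate labels follow our earlier naming, and only the prefix of the schedule explicitly written above is reproduced.

```python
def outer_schedule(L_max):
    """Outer-loop path ordering I = [1, 2, 1, 2, 3, 1, 2, 3, 4, ...]:
    after a new path is added, all previously detected paths are revisited."""
    order = []
    for k in range(2, L_max + 1):
        order.extend(range(1, k + 1))
    return order

# Inner-loop coordinate ordering for the current path l', with the path gain
# refreshed in between the other coordinates:
INNER_ORDER = ("b", "w1", "b", "w2", "b", "theta", "b", "phi")

# Example: outer_schedule(4) -> [1, 2, 1, 2, 3, 1, 2, 3, 4]
```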
Exploring this idea, for future work, it may be effective to define a “new" objective function by direct substitution of the optimal paths using (<ref>) on (<ref>), and then attempt to optimize this function.Finally, we describe the individual coordinate updates. Denote by ξ_m the coordinate to be updated for the mth time, also denote by ξ^opt_m its optimal coordinate descent update. Then its partial update with momentum is ξ'_m+1 = ξ^opt_m + η_m (ξ_m - ξ_m-1), where η_m is the momentum coefficient of that variable at the mth update. We then perform a SOR inspired rule to complete the coordinate update ξ_m+1 = Wrap_ξ( (1-λ_m) ξ_m + λ_m ξ'_m+1), where λ_m∈[0.5,1.5] is the SOR coefficient of that variable at update m, and Wrap_ξ(·) denotes wrapping the argument value to the valid domain of the parameter, e.g., ϕ and θ should be wrapped to the interval (-π/2,π/2) and ω_1 and ω_2 to (-π,π). Because each variable is updated with forward substitution (like a Gauss-Seidel update for solving linear equations), instead of updating all coordinates together (like a Jacobi update), we apply a heuristic form of SOR, which is known to outperform the Gauss-Seidel for linear equations. While there are no theoretical convergence speed guarantees, it provides an additional degree of freedom to tune the algorithm. An outer loop iteration may be interrupted and skipped if the estimates have failed to change by the desired amount in an inner loop iteration, e.g., if all parameters have not changed by more than 10^-5. An outer loop iteration may also be skipped if the objective function has not changed by more than a threshold for a particular inner iteration. Given a set of multipath parameters from a previous estimation ξ̂_i-1, a basic outline of the proposed algorithm is provided in Algorithm <ref>. In Algorithm <ref>, ξ_ℓ denotes the variables associated with path ℓ, Δ(ξ_ℓ) denotes the vector of relative changes of all variables from path ℓ, the inequality |Δ(ξ_ℓ)|≺ϵ_var denotes that all relative changes are less than the threshold ϵ_var. Similarly, ϵ_obj is the threshold for objective change in a single iteration. One may consider using additional stopping heuristics such as keeping track of a trailing moving average, if some property of the moving average indicates slow convergence, then break and move on to the next outer loop iteration. Line <ref> of Algorithm <ref> requires estimating the number of paths. For this, we propose a method based on objective function decrease. It consists first sorting paths in decreasing order based on the estimated path powers |b_ℓ|, then progress through the vector by including more paths, computing the objective function, and checking how much the objective decreased by including the last path. Proceed until a (possibly variable) threshold value is reached. The version used in Section <ref> is displayed in Algorithm <ref>. § NUMERICAL RESULTS In this section, we will assess the performance of the proposed method by analysing simulation results. Initially, we want to verify how effectively the algorithm detects the existing paths without any prior information. Then, we present a simple example of how this algorithm can be used for mapping, given perfectly known positions and orientations (poses) of the transmitter (UE) and receiver (BS). In both scenarios, we consider a transmtted pilot signal with 50 OFDM symbols and 40 subcarriers. The transmitter and receiver have ULA with 4 and 16 antennas, respectively. 
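For reference, the wrapped momentum-plus-SOR coordinate update described in Section <ref> can be sketched as below; the argument names and the modular wrapping convention are our own.

```python
def wrapped_sor_update(xi_opt, xi_m, xi_prev, eta, lam, lo, hi):
    """One coordinate update: momentum applied to the exact update, followed
    by an SOR-style relaxation and wrapping to the parameter's valid domain,
    e.g. (lo, hi) = (-pi, pi) for w1, w2 and (-pi/2, pi/2) for phi, theta."""
    xi_partial = xi_opt + eta * (xi_m - xi_prev)       # momentum step
    xi_mixed = (1.0 - lam) * xi_m + lam * xi_partial   # over/under-relaxation
    return (xi_mixed - lo) % (hi - lo) + lo            # Wrap_xi(.)
```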
The used carrier frequency is 60 GHz, the subcarrier spacing is 240 kHz, and the symbol time is 4.46 μs, which corresponds to numerology μ=4 in the 5G standard. The channel simulation considers only first order specular reflections. Paths with angles of arrival or departure outside the (-π/2,π/2) interval are considered to have zero gain. The environment used in this section is depicted in Fig. <ref>. For space constraints, we leave a detailed comparison with other methods <cit.> for future work. We introduce a channel model with the intention of offerring a sufficient geometrical representation of multipath propagation for our estimation problems. The path coefficient is computed from the total propagation distance d^2_ℓ with an added power reflection loss 0<c_ℓ<1 if the path is not LOS, given by |b_ℓ| = √(c_ℓ/(4π d^2_ℓ)). We consider the transmit power P_T to be equally allocated to all subcarriers N_c. Naturally, if the path is LOS, then c_ℓ=1. The reflection coefficient for NLOS paths is set to c_ℓ=0.2. The phase is sampled from a uniform distribution ∠ b_ℓ∼𝒰(-π,π), thus b_ℓ = |b_ℓ|e^j∠ b_ℓ. The ToF are simply the path distance divided by the speed of light τ_ℓ = d_ℓ/c. The Doppler frequency is computed from the projection of the UE velocity on the departure direction vector v_ℓ, and is given by f_D,ℓ = f_carrier v_ℓ/c. We consider a τ_o = 0.1 μs clock offset between UE and BS. The carrier frequency offset is set to 2.4 MHz, 40 ppm of the carrier frequency. For the model to be identifiable, the transmitted signal cannot be arbitrarily chosen. Intuitively, AoD estimation requires that different angles of departure produce distinguishable outputs throughout the pilot sequence. It is impossible to estimate θ_ℓ if a single data stream is transmitted with a fixed precoder. Using more data streams is one way to ensure that it is possible to estimate the AoD. In the uplink context, it is not usual to transmit many streams. By transmitting a single stream, but varying the precoder, it is possible to guarantee identifiability. We consider 1 data stream and a time-varying precoder matched to angle θ∈(-π/2,π/2), which is uniformly swept from -π/2 to π/2 during the 50 OFDM symbols. Given no prior, we want to assess the precision and recall of the path detection and estimation. We simulate 1024 different scenarios, with random UE poses and BS at (5,30). The transmitter positions are uniformly distributed on the [1,19]×[1,19] rectangle, while their orientation is uniformly distributed on the ψ_UE→BS+[-π/2,π/2] interval, where ψ_UE→BS is the orientation where the UE perfectly faces the BS. Transmit power is set to 8 W, i.e., 9 dBW, equally divided along all subcarriers so that each subcarrier has -7 dBW. Noise power is set to -80 dBW. For the optimizer parameters, we set L_max=6, and the thresholds to ϵ_var = 10^-5 and ϵ_obj = 10^-6. The momentum coefficients are initialized to 0.1 and are decremented at every inner loop iteration with the rule η_m+1 = 0.99η_m. The SOR coefficients are updated at every inner loop iteration with the following rule λ_it = 0.98 + 0.22 exp(-it/15), where “it" is the inner loop iteration counter. The threshold for Algorithm <ref> is ϵ_L = 0.5. An example of the estimation results in this setup is represented by the magenta lines in Fig. <ref>. The full ensemble of points is shown in Fig. <ref>. It may happen that some paths do not converge but are still declared to be valid paths by our algorithm. 
To determine how frequently this happens, we compare the estimated paths to the true paths by computing ‖ξ_ℓ - ξ̂_ℓ‖_2 and performing greedy assignment. Estimated paths that had no assigned true path were considered misdetections. On the other hand, estimated paths that were properly assigned to a true path were considered true detections. This way it is possible to estimate the precision and recall of our algorithm. Following the described procedure yields a precision of 0.9938 and a recall of 0.9854. The estimates of true detections have their MSE and RMSE values shown in Table <ref>. It can be seen that, ignoring misdetections, the quality of the estimates is quite good, particularly for the ω_1 and ω_2 values as well as the path magnitude |b_ℓ|. The estimates for the angles of departure and arrival are not as accurate, but are still sufficient for approximately sensing the environment, given perfect transmitter pose information. Using ω_1 and ω_2 requires very fine clock and carrier synchronization to eliminate the offsets and extract useful geometric information. Finally, we explore the sequential estimation scenario, in which the estimates from the previous instant are used as priors for the next estimation round. The path of the transmitter and the estimated positions of reflectors using the line intersection method are shown in Fig. <ref>. The whole path is traveled over 5 seconds, with 50 estimation rounds performed in equal time intervals. We set all variance parameters to ν = 0.005 and achieve a precision of 1 and a recall of 0.9844. The equivalent ML precision and recall are 0.9844 and 0.9844, respectively. The MSE and RMSE values for MAP and ML in this scenario are presented in Table <ref>. Besides the improved precision and recall and similar MSE values, the MAP also converges faster, which can be beneficial in real-time applications. It is up to the user to decide the best approach for the intended use. § CONCLUSION Estimating all the multipath components and their parameters is not a simple problem, and existing methods frequently rely on many simplifications or extensive computation that hinders their real-time applicability. In this paper, we have introduced an ML and a MAP estimation procedure for channel estimation with possible use cases in sensing and mapping using an OFDM waveform. The proposed method specifically exploits the problem structure and can be improved in a straightforward fashion to provide increased robustness, efficiency, accuracy and detection capabilities. § ACKNOWLEDGEMENTS The work was supported in part by the Research Council of Finland (former Academy of Finland) 6G Flagship Program (Grant Number: 346208) and 6GWiCE project (357719). We would also like to thank Hamza Djelouat, Mikko Sillanpää, and Reijo Leinonen for the productive discussions. IEEEtran
http://arxiv.org/abs/2406.08445v1
20240612173709
SVSNet+: Enhancing Speaker Voice Similarity Assessment Models with Representations from Speech Foundation Models
[ "Chun Yin", "Tai-Shih Chi", "Yu Tsao", "Hsin-Min Wang" ]
eess.AS
[ "eess.AS", "cs.LG", "cs.SD" ]
[ [ June 11, 2024 ================= § ABSTRACT Representations from pre-trained speech foundation models (SFMs) have shown impressive performance in many downstream tasks. However, the potential benefits of incorporating pre-trained SFM representations into speaker voice similarity assessment have not been thoroughly investigated. In this paper, we propose SVSNet+, a model that integrates pre-trained SFM representations to improve performance in assessing speaker voice similarity. Experimental results on the Voice Conversion Challenge 2018 and 2020 datasets show that SVSNet+ incorporating WavLM representations shows significant improvements compared to baseline models. In addition, while fine-tuning WavLM with a small dataset of the downstream task does not improve performance, using the same dataset to learn a weighted-sum representation of WavLM can substantially improve performance. Furthermore, when WavLM is replaced by other SFMs, SVSNet+ still outperforms the baseline models and exhibits strong generalization ability. § INTRODUCTION A useful voice conversion (VC) system can be used in many areas, such as dubbing, personalized virtual assistants, preserving voices, and aiding in voice recovery after surgery. However, effectively evaluating the performance of such systems remains challenging. As a common evaluation for VC systems, the speaker voice similarity assessment aims to evaluate the resemblance between generated speech and natural speech. The evaluation can be objective <cit.> or subjective. While the results of subjective evaluation are closer to human perception, the time and cost required to conduct assessments such as listening tests are considerable. Therefore, automated and efficient methods for assessing speaker voice similarity are valuable. In <cit.>, Hu et al. proposed SVSNet, an end-to-end neural model for speaker voice similarity assessment, and showed satisfactory performance at both the utterance and system levels. SVSNet takes the raw speech waveforms as inputs and processes them with an encoder consisting of a SincNet module, four stacked residual-skipped-WaveNet convolution (rSWC) layers, and a BLSTM layer. However, given the constraint of limited training data, the encoder may not be sufficiently adept at extracting meaningful representations from the waveforms. Therefore, to further reinforce the speaker voice similarity assessment model, we propose SVSNet+, which integrates a large pre-trained speech foundation model (SFM) to extract speech representations. By leveraging a pre-trained SFM that learns effective speech representations from large-scale training data, SVSNet+ can acquire valuable information for the similarity prediction task. When evaluated on the Voice Conversion Challenge 2018 <cit.> and 2020 <cit.> (VCC2018 and VCC2020) datasets, SVSNet+ significantly outperforms previous work at the system level and demonstrates strong generalization ability. This study contributes to further future exploration of applying pre-trained SFM representations to speaker voice similarity assessment. § RELATED WORK §.§ Pre-trained speech foundation models Pre-trained SFMs can undergo training in a supervised or unsupervised learning manner. Although supervised learning typically excels over unsupervised learning in various tasks, the process of collecting large-scale labeled data can be time-consuming and sometimes impractical. 
Therefore, many state-of-the-art pre-trained models are based on self-supervised learning (SSL), allowing them to acquire meaningful representations from large amounts of unlabeled data <cit.>. As one of the most commonly used models, wav2vec 2.0 <cit.> acquires contextual information by discerning representations that correspond to true quantized latent speech representations. On the other hand, HuBERT <cit.> utilizes clustering to generate labels and predicts hidden cluster assignments for masked speech representations. Modified from HuBERT, WavLM <cit.> uses a larger dataset during pre-training for joint learning of masked speech prediction and denoising. Moreover, the Massively Multilingual Speech (MMS) model <cit.> expands the number of languages in the training dataset and builds pre-trained models covering 1,406 languages based on wav2vec 2.0. Unlike the aforementioned SSL pre-trained models, Whisper <cit.> is an SFM trained in a weakly supervised manner, using a multitask training format, showcasing not only high robustness but also strong generalization ability. All of these SFMs have been employed to enhance performance on diverse speech processing tasks, demonstrating remarkable effectiveness. In this study, we employ all these SFMs to extract speech representations and evaluate their suitability for the speaker voice similarity assessment task. §.§ Speech assessment tasks using SFM-extracted representations In recent years, SFM representations have been applied in various tasks, such as speech enhancement (SE) <cit.>, automatic speech recognition (ASR) <cit.>, automatic speaker verification (ASV) <cit.>, and voice conversion (VC) <cit.>. In speech assessment tasks, it is a prevalent practice to leverage pre-trained SSL representations for mean opinion score (MOS) prediction <cit.>. The latent representations extracted by wav2vec 2.0, HuBERT, and WavLM have been proven beneficial for these tasks. Furthermore, recent work by Zezario et al. <cit.> also showed that Whisper and MMS representations help predict human-perceived speech quality and intelligibility. § PROPOSED METHOD The architecture of SVSNet+ is shown in Fig. 1. The waveforms of the test and reference utterances X_T and X_R are fed to a pre-trained SFM, which encodes them into layer-wise representations. Next, the corresponding weighted-sum representations are derived from layer-wise representations, and a linear layer is used to adjust the representation dimension. Then, the representations R_T and R_R are aligned by the co-attention module in both directions for maintaining symmetry. Afterwards, the distance module calculates the distance between R_T and R̂_R and that between R_R and R̂_T. Finally, the prediction module uses these two distances to calculate a similarity score. §.§ Pre-trained model and weighted sum In this study, we study several SFMs, each containing a feature extractor and a transformer encoder, as shown in Fig. 1(b). For SSL-based SFMs, the feature extractors are CNN-based encoders that generate feature sequences at a frame rate of 20ms for audio sampled at 16kHz. For Whisper, the feature extractor first preprocesses the 16kHz audio input into 30-second chunks by zero-padding or trimming. Each chunk is then transformed into an 80-channel log-magnitude Mel spectrogram at a frame rate of 10ms and further processed by an encoder consisting of two convolutional layers and a GELU activation function. The stride of the second convolutional layer is 2 <cit.>. 
Therefore, the down-sampling factor of the feature extractor in each pre-trained SFM is 320x. For all SFMs, the extracted features are fed to the transformer encoder and processed by L hidden layers. Finally, to exploit the information from each hidden layer, the representations generated from all hidden layers are combined using the weighted sum module: R_WS := ∑_ℓ=0^L-1 w^ℓR^ℓ, where w^ℓ≥ 0 is the learnable weight for layer ℓ and ∑_ℓ w^ℓ = 1, and R^ℓ is the representation of layer ℓ. The weighted-sum representation is then passed through an additional linear layer for dimension adjustment. §.§ Co-attention module Following <cit.>, the co-attention module is used to align the representation of one input with that of the other input by R̂_R = Attention(R_T,R_R,R_R), R̂_T = Attention(R_R,R_T,R_T), and output two aligned pairs (R_T, R̂_R) and (R_R, R̂_T), which will be input to the distance module. In this study, the scaled dot-product attention mechanism <cit.> is implemented. §.§ Distance module and prediction module Following <cit.>, the utterance embedding is obtained by averaging its representations over time, and the 1-norm distance of each dimension of two embeddings is calculated: D_T,R = ∥ Mean(R_T) - Mean(R̂_R)∥_1, D_R,T = ∥ Mean(R_R) - Mean(R̂_T)∥_1. Then, the prediction module takes in the two distances to derive the similarity scores: Ŝ_T = σ(f_lin2(ρ_ReLU(f_lin1(D_T,R)))), Ŝ_R = σ(f_lin2(ρ_ReLU(f_lin1(D_R,T)))), where σ(.) is an activation function, f_lin1(.) and f_lin2(.) are linear layers, and ρ_ReLU(.) is the rectified linear unit (ReLU) activation function. There are two types of prediction modules: regression and classification. For regression tasks, the identity function is used as the activation function, while for classification tasks, the softmax function is used. The output size of the second linear layer is 1 (for regression) and 4 (for classification). The final score is the average of the two predicted scores: Ŝ = (Ŝ_T + Ŝ_R)/2. The training objective of the proposed model is to match these scores with the corresponding human-labeled similarity scores in the training set. § EXPERIMENTS §.§ Experimental setup §.§.§ Datasets Following <cit.>, the proposed method was evaluated on the Voice Conversion Challenge 2018 and 2020 (VCC2018 and VCC2020) datasets. In each challenge, participants submitted audio files produced by their VC systems. Subsequently, subjective listening tests were conducted to evaluate these systems. Subjects were asked to rate the converted utterances based on their quality and similarity to the reference utterances. This study focuses on similarity assessment. In the VCC2018 dataset <cit.>, the utterances were derived from 36 VC systems and two reference systems. A total of 21,562 converted-natural utterance pairs were evaluated, yielding 30,864 speaker similarity scores ranging from 1 to 4. Higher scores indicate that the speakers in an utterance pair sound more similar to each other. we split the dataset into training and test sets, with 24,864 and 6,000 ratings, respectively. In VCC2020 <cit.>, there were two tasks, intra-lingual semi-parallel VC and cross-lingual VC. Of the 33 participants, 31 teams submitted results for the intra-lingual task, and 28 teams submitted results for the cross-lingual task. There were 5,840 converted-target utterance pairs. Each pair was evaluated by multiple subjects, and the average score was used as the final score for the pair. 
Including source-target and target-target utterance pairs as lower and upper performance bounds, the full VCC2020 test set contains 6,090 scored pairs. §.§.§ Evaluation metrics Performance was evaluated in terms of linear correlation coefficient (LCC) <cit.>, Spearman’s rank correlation coefficient (SRCC) <cit.>, and mean squared error (MSE) at the utterance and system levels. The utterance-level evaluation was calculated based on the predicted score and the human-labeled score for each test utterance pair, while the system-level evaluation was calculated based on the average predicted score and average human-labeled score for each system. System-level evaluation is more valuable because it directly ranks systems. §.§.§ Pre-trained models We evaluated several SFMs. WavLM-Large <cit.>, wav2vec 2.0-Large (LV-60) <cit.>, MMS-300M, and MMS-1B <cit.> were obtained from their GitHub websites. HuBERT-Large and HuBERT-XLarge <cit.> were accessed via the pipelines subpackage of TorchAudio <cit.>. Whisper-Medium and Whisper-Large <cit.> were taken from HuggingFace's Transformers library <cit.>. Note that all SFMs are versions without any fine-tuning. Furthermore, Whisper's original settings involve preprocessing the input waveform into 30-second chunks via zero-padding or trimming. Since all other SFMs use full-length audio inputs and each utterance in the VCC2018 and VCC2020 datasets is shorter than 10 seconds, we modified Whisper's configuration for a fair comparison. Specifically, the chunk length was set to 10 seconds, and the resulting representation was trimmed back to the original length in the time domain. §.§.§ Training details All models were implemented using PyTorch (v2.0.1) in Python 3.10. Each model was trained using an NVIDIA GeForce RTX 3090 with 24GB RAM. All utterances were downsampled to a sampling rate of 16 kHz. The linear layer after SFM has a hidden size of 256, and the first linear layer in the prediction module has a hidden size of 128. The output size is 1 for regression tasks and 4 for classification tasks. The Adam optimizer <cit.> was used to train the models with a learning rate of 1e-4. For regression tasks, we used the MSE loss for training, while for classification tasks, we used the cross entropy (CE) loss. Model parameters were initialized by the default method of PyTorch. We trained each model for 30 epochs on the VCC2018 training set with a batch size of 5, and evaluated the performance of the model for each epoch using the VCC2018 test set. The model with the best system-level performance on the VCC2018 test set was selected and tested on the VCC2020 test set. §.§ Results §.§.§ SVSNet+ with WavLM-Large In SVSNet <cit.>, the input waveform is encoded by an encoder composed of SincNet, rSWC and BLSTM modules. To enhance SVSNet, we integrate WavLM-Large <cit.> into it and verify the necessity of modules in the original encoder. The results corresponding to the best system-level performance for each combination on the VCC2018 test set are shown in Table 1. We trained two types of SVSNet models, regression and classification, as baselines, termed SVSNet(R) and SVSNet(C), respectively. There are discrepancies between the reproduced models and those in <cit.>, which may be caused by software version and hyperparameter differences. Our proposed models are termed SVSNet+. 
For in-depth analysis, we implemented the following operations: (1) whether an additional linear layer for dimension adjustment is used; (2) whether WavLM-Large is fine-tuned for the speaker voice similarity assessment task; (3) whether the weighted sum (WS) of representations from all transformer encoder layers or only the representation from the last layer (LL) is used. From Table 1, several observations can be drawn. First, when integrated with WavLM-Large, SVSNet+ outperforms SVSNet in system-level evaluation in most configurations. Second, the rSWC and BLSTM modules used in SVSNet do not bring notable benefits to SVSNet+ (SVSNet+_rBPL(R) vs. SVSNet+_BPL(R) and SVSNet+_BPL(R) vs. SVSNet+_PL(R)). The reason may be that the transformer encoder in WavLM-Large is already good enough at capturing contextual information in the waveform. Third, fine-tuning WavLM-Large during SVSNet+ training does not help (SVSNet+_PLFW(R) vs. SVSNet+_PLW(R) and SVSNet+_PFW(R) vs. SVSNet+_PW(R)). This may be due to the inappropriateness of fine-tuning large pre-trained models with small datasets. Such phenomenon was also mentioned in <cit.>. Fourth, utilizing the weighted-sum representation of WavLM-Large can improve performance compared to using the last-layer representation (SVSNet+_PW(R) vs. SVSNet+_P(R)). Lastly, although the additional linear layer may not provide significant advantages to SVSNet+ (SVSNet+_PLW(R) vs. SVSNet+_PW(R)), it is still valuable for representation resizing, allowing a fairer comparison between SVSNet+ and SVSNet. Among all models, the two best-performing models are SVSNet+_PLW(R) and SVSNet+_PW(R), which are regression types with weighted sum and no fine-tuning. Therefore, these two SVSNet+ configurations will be used in subsequent experiments. §.§.§ SVSNet+ with different SFMs Next, we compare the performance of SVSNet+ with different SFMs, including WavLM-Large, wav2vec 2.0-Large, HuBERT-Large, HuBERT-XLarge, MMS-300M, MMS-1B, Whisper-Medium, and Whisper-Large. The results for the models with and without an additional linear layer are shown in Tables 2 and 3, respectively. The number of transformer encoder layers, embedding dimension, and number of attention heads are noted after each model. From Table 2, it can be found that SVSNet+_HuBERT-Large achieves the best performance in system-level LCC and SRCC, followed by SVSNet+_Whisper-Large. In terms of system-level MSE, all models show excellent and almost equivalent performance, although SVSNet+_wav2vec 2.0-Large performs slightly better than the others. No matter which SFM is employed, the proposed SVSNet+ consistently outperforms the original SVSNet in all system-level metrics. The results reconfirm the advantage of integrating pre-trained SFMs to extract speech representations. Furthermore, it's worth noting that larger SFMs do not always result in better performance. This may be attributed to the difference in characteristics between the training speech of the upstream model and the test speech of the downstream task. Comparing Table 3 with Table 2, it can be seen that removing the additional linear layer results in poorer performance in system-level metrics for most models. Only SVSNet+_WavLM-Large slightly improves over its counterpart with the additional linear layer. Since WavLM jointly learned masked speech prediction and denoising on mixed audio during pre-training, the additional linear layer may blur the extracted speech features, potentially harming the performance of downstream tasks. 
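To make the components discussed in this ablation concrete, a rough PyTorch sketch of the SVSNet+ scoring head (regression variant) is given below. The hidden sizes follow the training details above, while the softmax parameterization of the layer weights and the use of a single-head scaled dot-product attention are our implementation assumptions rather than details specified in the paper.

```python
import torch
import torch.nn as nn

class SVSNetPlusHead(nn.Module):
    """Weighted sum over SFM layers, linear resizing, co-attention,
    per-dimension 1-norm distance, and a two-layer prediction module."""

    def __init__(self, n_layers, sfm_dim, hidden=256, pred_hidden=128):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(n_layers))  # w^l, softmax-normalized
        self.proj = nn.Linear(sfm_dim, hidden)                    # additional linear layer
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.pred = nn.Sequential(nn.Linear(hidden, pred_hidden), nn.ReLU(),
                                  nn.Linear(pred_hidden, 1))      # identity output for regression

    def _encode(self, layer_reps):                 # layer_reps: (L, B, T, D)
        w = torch.softmax(self.layer_weights, dim=0)
        return self.proj((w[:, None, None, None] * layer_reps).sum(0))

    def forward(self, reps_test, reps_ref):
        r_t, r_r = self._encode(reps_test), self._encode(reps_ref)
        r_r_hat, _ = self.attn(r_t, r_r, r_r)      # align reference to test
        r_t_hat, _ = self.attn(r_r, r_t, r_t)      # align test to reference
        d_tr = (r_t.mean(1) - r_r_hat.mean(1)).abs()   # 1-norm distance per dimension
        d_rt = (r_r.mean(1) - r_t_hat.mean(1)).abs()
        return 0.5 * (self.pred(d_tr) + self.pred(d_rt))
```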
§.§.§ Evaluated on VCC2020 To evaluate the generalization ability of SVSNet+, the above models trained on VCC2018 were tested on the VCC2020 test set. The system-level evaluation results are shown in Table 4. Unlike VCC2018, VCC2020 consists of two tasks: intra-lingual semi-parallel VC and cross-lingual VC. Moreover, in VCC2018, most VC systems employed conventional vocoders, while in VCC2020, neural vocoders were more common. All of the above differences can lead to serious corpus mismatch. Comparing Table 4 with Tables 2 and 3, due to corpus mismatch, the scores of all models on VCC2020 are worse than those reported on VCC2018. However, certain SVSNet+ models, such as SVSNet+_HuBERT-Large and SVSNet+_Whisper-Large, achieve fairly good performance on VCC2020, although there is still room for further improvement. Examining the performance of SVSNet+ models with the additional linear layer in Table 4, we can see that almost all SVSNet+ models outperform SVSNet in all metrics except SVSNet+_Whisper-Medium in MSE (1.175). The SVSNet+_HuBERT-Large and SVSNet+_Whisper-Large models with the best performance on VC2018 in Table 2 also achieve relatively high performance on VCC2020. Surprisingly, SVSNet+_wav2vec 2.0-Large performs very well and achieves the highest SRCC (0.910), although it does not perform particularly well compared to other models on VCC2018. For SVSNet+ models without the additional linear layer, while most models outperform SVSNet, SVSNet+_wav2vec 2.0-Large performs poorly in both LCC and SRCC. Without further fine-tuning the wav2vec 2.0 representation using the additional linear layer, the resulting SVSNet+ model generalizes poorly. In contrast, removing the additional linear layer benefits SVSNet+_MMS-1B, which achieves the highest scores in LCC and SRCC among all models with the same settings. Since MMS-1B was pre-trained using a larger and more diverse set of data, it excels at extracting more intricate patterns. Additional processing by the linear layer may be detrimental to the extracted features. § CONCLUSIONS This study demonstrates that representations extracted by SFMs can effectively enhance the performance of speaker voice similarity assessment models. Experiments conducted on VCC2018 and VCC2020 show that SVSNet+ leveraging SFM surpasses its predecessor SVSNet. The results also show that for different SFMs, an additional linear layer can have significantly different effects on the performance of assessment models. Along this research path, in addition to integrating a single SFM into SVSNet+, we also conducted preliminary experiments on fused SFMs. By concatenating the representations extracted by HuBERT-Large and Whisper-Large as input, SVSNet+ can achieve better system-level performance on VCC2018, with LCC, SRCC, and MSE of 0.97, 0.969, and 0.004, respectively. We will conduct further research in this direction in the future. § ACKNOWLEDGEMENTS This work was supported in part by the Co-creation Platform of the Speech-AI Research Center, Industry-Academia Innovation School, NYCU, under the framework of the National Key Fields Industry-University Cooperation and Skilled Personnel Training Act, from the Ministry of Education (MOE), the National Development Fund (NDF), and industry partners in Taiwan. IEEEtran
http://arxiv.org/abs/2406.09059v1
20240613124536
Distribution of hooks in self-conjugate partitions
[ "William Craig", "Ken Ono", "Ajit Singh" ]
math.CO
[ "math.CO", "math.NT" ]
§ ABSTRACT We confirm the speculation that the distribution of t-hooks among unrestricted integer partitions essentially descends to self-conjugate partitions. Namely, we prove that the number of hooks of length t among the size n self-conjugate partitions is asymptotically normally distributed with mean μ_t(n) and variance σ_t(n)^2, where μ_t(n) ∼√(6n)/π + 3/π^2 - t/2 and σ_t^2(n) ∼(π^2 - 6)√(6n)/π^3. § INTRODUCTION AND STATEMENT OF RESULTS A partition of a non-negative integer n is a non-increasing sequence of positive integers λ_1, …, λ_ℓ whose terms sum to n. We write λ⊢ n to denote that λ is a partition of n. Partitions play an important role in many areas of mathematics, including combinatorics, geometry, mathematical physics, number theory and representation theory. Here we study the combinatorial statistics of partition hook numbers. For a partition λ, integers j,k ≥ 1 and any cell (j,k) in the Young diagram of λ, the corresponding hook number h(j,k) is the length of the hook H(j,k) formed with the cell (j,k) as its upper corner. In terms of the conjugate partition λ' = λ_1', …, λ_r', we may write h(j,k) = λ_j - j + λ_k' - k + 1. Below, we see an example demonstrating the computation of hook numbers. Hook numbers play a significant role in the representation theory of the symmetric group, where the partitions of n capture the irreducible representations of S_n. Indeed, if ℋ(λ) is the multiset of hook lengths in λ and ρ_λ is the irreducible representation of S_n associated with λ, then the Frame–Thrall–Robinson hook length formula gives the dimension dimρ_λ = n!/∏_h ∈ℋ(λ) h. Furthermore, hook numbers are prominent in mathematical physics and number theory. For example, we highlight the work of Nekrasov and Okounkov <cit.> and Westbury <cit.>, who recognized the deep properties of hook numbers through their extraordinary q-series identity ∑_λ q^|λ|∏_h ∈ℋ(λ)( 1 - z/h^2 ) = ∏_n ≥ 1( 1 - q^n )^z-1. Using this formula and its generalizations due to Han <cit.>, many connections have been drawn between hook numbers and modular forms, which have led to many interesting results, including theorems about cranks for Ramanujan's partition congruences <cit.> and class numbers of imaginary quadratic fields <cit.>. Establishing combinatorial statistics for partitions is an important and growing field in partition theory (see for instance <cit.>). Here we consider the statistics of hook numbers. Recently, Griffin, Tsai and the second author <cit.> studied the counting function N_t(λ) := #{ h ∈ℋ(λ) : h = t }. If we let N_t(n) be the random variable which takes the value N_t(λ) for λ a random partition of n, then <cit.> proved the following theorem. For t ≥ 1 an integer, the function N_t(n) has an asymptotically normal distribution as n →∞ with mean asymptotic to √(6n)/π - t/2 and variance asymptotic to (π^2 - 6)√(6n)/(2π^3). It is natural to ask whether the same phenomenon holds for restricted partition families. In this paper, we show that this is essentially the case for self-conjugate partitions. To make this precise, for integers t ≥ 1 we study the arithmetic statistics of N_t(λ) considered as a random variable restricted to the class 𝒮𝒞 of self-conjugate partitions.
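These statistics are straightforward to compute directly. The following Python sketch (the helper functions are our own and play no role in the proofs) computes the multiset ℋ(λ), the counts N_t(λ), and tests self-conjugacy, which is convenient for numerically checking the results below for small n.

```python
def conjugate(la):
    """Conjugate partition la' of la (la a non-increasing list of parts)."""
    return [sum(1 for part in la if part > k) for k in range(la[0])] if la else []

def hook_lengths(la):
    """Multiset H(la) of hook numbers h(j,k) = la_j - j + la'_k - k + 1 (0-indexed cells)."""
    lap = conjugate(la)
    return [la[j] + lap[k] - j - k - 1
            for j in range(len(la)) for k in range(la[j])]

def N_t(la, t):
    """Number of hooks of length t in la."""
    return sum(1 for h in hook_lengths(la) if h == t)

def is_self_conjugate(la):
    return list(la) == conjugate(la)

# Example: la = [4, 3, 3, 1] is self-conjugate, its hook multiset is
# {7, 5, 5, 4, 4, 3, 2, 2, 1, 1, 1}, and N_t(la, 2) == 2.
```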
Such a study requires the two-variable generating function which simultaneously tracks the size and hook counts of self-conjugate partitions; that is, we require an explicit formula for F_t(T;q) := ∑_λ∈𝒮C T^N_t(λ) q^|λ| =: ∑_n ≥ 0sc_t(n;T) q^n. By means of the Littlewood bijection for t-core partitions, Amdeberhan, Andrews and two of the authors <cit.> derived such a formula in order to address conjectures of the first author and collaborators <cit.> on the arithmetic of hook counts in self-conjugate partitions. In this way we obtain the following direct analog of Theorem <ref>. Let t ≥ 1 be an integer, and consider the random variable N_t(n) giving the distribution of N_t on the set 𝒮C(n) of self-conjugate partitions of n. Then as n →∞, N_t(n) is asymptotically normally distributed with mean μ_t(n) ∼√(6n)/π-t/2+3/π^2 and variance σ_t(n)^2 ∼(π^2-6)√(6n)/π^3+3t/π^2-t^2/4-279/16π^4. It is interesting to note that, in comparison to Theorem <ref>, the main term of the mean is the same, but the main term of the variance is doubled in the self-conjugate case. In the case of t=2, Theorem <ref> says that as n →∞ the number of 2-hooks in a random self-conjugate partition is asymptotically normal with mean μ_2(n) ∼√(6n)/π - 1 + 3/π^2 and variance σ_2(n)^2 ∼(π^2-6) √(6n)/π^3 + 3/π^2 - 1 + 279/16π^4. The convergence of this distribution in the case t=2 is demonstrated below. Table 1 shows the convergence of the measured means μ_2(n) to the asymptotic μ(n) := √(6n)/π: The paper is organized as follows. In Section <ref> we collect preliminary definitions and asymptotic lemmas we will need, which will be derived from the Euler–Maclaurin summation formula. In Section <ref> we apply various asymptotic lemmas and the saddle point method in order to obtain required asymptotics for the polynomials sc_t(n;T) for ranges of T as n →∞. Finally, in Section <ref> we filter these asymptotics through the method of moments in order to prove Theorem <ref>. § ACKNOWLEDGEMENTS The first author thanks the support of the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001179) and by the SFB/TRR 191 “Symplectic Structure in Geometry, Algebra and Dynamics”, funded by the DFG (Projektnummer 281071066 TRR 191). The second author thanks the Thomas Jefferson Fund and grants from the NSF (DMS-2002265 and DMS-2055118). The third author is grateful for the support of a Fulbright Nehru Postdoctoral Fellowship. § NUTS AND BOLTS §.§ Generating functions Here, we state the formula of <cit.> for the generating function F_t(T;q). In order to state this formula, we need the notation a;q _n := ∏_j=0^n-1 1 - a q^j , n ∈ℕ_0 ∪{∞}. With this notation, we state their result. Let t ≥ 1 be an integer. Then the following are true: * If t is even, then we have F_t(T;q) = -q;q^2 _∞· 1 - T^2 q^2t; q^2t_∞^t/2. * If t is odd, we have F_t(T;q) = -q;q^2 _∞· H^*(T;q^t) · 1 - T^2 q^2t; q^2t_∞^t-1/2, where H^*(T;q) is defined by H^*(T;q) := 12T[ C_T^+ √(1-T^2) ; -q _∞ + C_T^- -√(1-T^2) ; -q _∞], where for convenience we define the constants C_T^± := 1 ±√(1-T1+T). We note that H^* can also be represented in terms of q-hypergeometric series <cit.> as H^*(T;q) = 1 - 1T∑_n ≥ 0(T^2-1)^n q^2n^2 + n q^2; q^2 _n -q;q^2 _n+1 + 1T∑_n ≥ 0(T^2-1)^n q^2n^2-n q^2; q^2 _n -q;q^2 _n. 
We primarily use the representation in terms of products because it is more convenient for much of our asymptotic analysis, although there are certain kinds of calculations which are easier in the q-hypergeometric form. §.§ The dilogarithm function We recall the dilogarithm function Li_2 z, which is given by Li_2(z) := ∑_k ≥ 1z^kk^2 for |z| < 1 and elsewhere by the standard analytic continuation having a branch cut on the line (1,∞). Dilogarithm functions appear frequently when computing asymptotic expansions of infinite products of the general shape a;q _∞. For our calculations, we will need the elementary identity <cit.> Li_2 z + Li_2 -z = Li_2 z^2 2, as well as the derivative <cit.> ddzLi_2 z = - log(1-z)z and the so-called distribution property <cit.> Li_2 x = n ∑_z^n = xLi_2 z . §.§ Euler–Maclaurin summation formulas As infinite products are often unwieldy to deal with directly, a common technique when analyzing a;q _∞ asymptotically is to first take a logarithm and subsequently use techniques for infinite sums. In particular, we will make use of an asymptotic variation of the Euler–Maclaurin summation formula <cit.>. To state this result, we will say that a function f(z) satisfies the asymptotic f(z) ∼∑_n ≥ n_0 c_n z^n as z → 0 if f(z) - ∑_n = n_0^N - 1 + n_0 c_n z^n = O_N z^N for each N ≥ 1. Suppose that 0≤θ < π/2 and let D_θ := { re^iα : r≥0 |α|≤θ}. Suppose f(z) is holomorphic in a domain containing D_θ, and that f and all its derivatives are of sufficient decay as |z| →∞ (i.e. decays at least as quickly as |z|^-1-ε for some ε>0). Suppose also that f has the asymptotic expansion f(z) ∼∑_n ≥ 0 c_n z^n near z=0. Then for any 0 < a ≤ 1, we have as z → 0 that ∑_m ≥ 0 f (m+a)z ∼1z∫_0^∞ f(x) dx - ∑_n ≥ 0 c_n B_n+1(a)n+1 z^n, where B_n(x) are the Bernoulli polynomials. We will apply Euler–Maclaurin summation many times throughout the paper, and in particular we will be interested in using not only a function f as above, but several of its derivatives as well. In order to do this most efficiently, we derive the following corollary. Suppose that 0≤θ < π/2 and let f:→ be holomorphic in a domain containing D_θ, so that in particular f is holomorphic at the origin, and assume that f and all of its derivatives are of sufficient decay as |z| →∞. Then for a∈ℝ and j∈ℕ_0, we have d^jdw^j∑_m≥0 f((m+a)w) = (-1)^j j!/w^j+1 I_f + O 1 . Since our decay assumptions justify the limit interchange below, we have d^jdw^j∑_m ≥ 0 f (m+a)w = ∑_m ≥ 0 m+a ^j f^(j) (m+a)w = 1w^j∑_m ≥ 0 g (m+a)w , where we let g(w) := w^j f^(j)(w). It is then easy to show using integration by parts that I_g = (-1)^j j! · I_f, and since f^(j)(w) is holomorphic at w=0, we have g(w) = O(w^j) which establishes the error term by the expansion in Lemma <ref>. §.§ Asymptotic lemmas In this subsection, we carry out a number of asymptotic estimates for q-series. We begin with some elementary applications of Euler–Maclaurin summation as presented above. Let 0 ≤θ < π/2 and D_θ be as in Lemma <ref>, and let q = e^-w, and let X ∈\ [1,∞). Then we have for an integer j ≥ 0 and w → 0 in D_θ that d^jdq^j -q;q^2 _∞ = π^2 j!24 w^j+1 + O 1 d^jdq^j Xq; q _∞ = - j! ·Li_2 X w^j+1 + O 1 , d^jdq^j X; -q _∞ = - j! ·Li_2 X^2 4 w^j+1 + O 1 , ddq Xq;q _∞ = - Li_2 X w - 1 -X + O(w), ddq X; -q _∞ = - Li_2 X^2 4w + 1 - X 2 + O(w). We first observe that -q;q^2 _∞ = ∑_m ≥ 0 1 + q^2m+1 = ∑_m ≥ 0 f m + 1/2 w , where f(w) := 1 + e^-2w. Using the fact that d/dq = - e^w d/dw, it is easy to show from Corollary <ref> that for any j ≥ 0, we have d^jdq^j -q;q^2 _∞ = j! 
· I_fw^j+1 + O w^-j as w → 0 in D_θ, and so (<ref>) follows from the fact that I_f = π^2/24. To prove (<ref>), we first observe Xq;q _∞ = ∑_m ≥ 0 f_X m+1 w , where f_X w := 1 - X e^-w. We observe that I_f_X := ∫_0^∞ f_X(y) dy = - Li_2 X , the last identity being valid as long as X ∉[1,∞). Applying (<ref>) completes the proof of (<ref>). Note also that (<ref>) follows from Lemma <ref> after computing the series expansion of f_X(w) = (1-X) + O(w). To see (<ref>), we begin by noting that X; -q _∞ = 1 - X + ∑_m ≥ 0 g_X^- m+1 w + ∑_m ≥ 0 g_X^+ m + 12 w , where g_X^±(w) := 1 ± X e^-2w. Now, we have the integral evaluations, valid for X ∉[1,∞), I_g_X^± := ∫_0^∞ 1 ± X e^-2y dy = - Li_2∓ X 2, and since the constant term of the Taylor expansion of g_X^±(w) for all given X is 1 ± X, and so by Lemma <ref> we obtain X; -q _∞ = - Li_2 X + Li_2 - X 2 w + 1 - X 2 + O w . We can then derive (<ref>) and (<ref>) from (<ref>) and Lemma <ref> and Corollary <ref>. The previous lemma is a fairly standard application of Euler–Maclaurin summation; the fact that these generating functions are simple infinite products makes the method work very cleanly. For our setting, however, we are required to compute asymptotics for H^*(T;q), which is a sum of two distinct infinite products. Since the logarithm of a sum is not especially well-behaved, the computation becomes more intricate. For this reason, we isolate these computations into the following lemma. Let 0 ≤θ < π/2 and D_θ be as in Lemma <ref>, and let q = e^-w, and let T>0. Then we have as w → 0 in D_θ that H^*(T;q^t) = - Li_2 1-T^2 4tw + α_T^+ + O(w), ddq H^*(T;q^t) = (T^2) α_T^-2 √(1-T^2)α_T^+1tw + O(1), d^2dq^2 H^*(T;q^t) = O w^-2, d^3dq^3 H^*(T;q^t) = O w^-3, where we define the constants α_T^± := C_T^+ √(1 - √(1-T^2))± C_T^- √(1 + √(1-T^2)). Recall that H^*(T;q) := C_T^- -√(1-T^2); -q _∞ + C_T^+ √(1-T^2); -q _∞. We set X = √(1-T^2) for this proof; observe that for T > 0, X ∉[1,∞), we obtain from (<ref>) in Lemma <ref> that, as w → 0 in D_θ, we have ± X; -q _∞ = - Li_2 X^2 4 w + 1 ∓ X 2 + O w . We now must move to understand the asymptotics of the derivatives of H^* T; q^t, for which we will require derivatives of H^* T; q^t. As a convenient shorthand, we let F_X := X; -q _∞ and G_X := ∑_n ≥ 1(-1)^n n X q^n-1/(-1)^n X q^n - 1 for the rest of the proof. Observe that Lemma <ref> already gives enough for (<ref>). It is then straightforward to show that F_X^' = ∑_n ≥ 0ddq( 1 - X -q ^n ) ∏_j ≥ 0 j ≠ n 1 - X -q ^j = F_X G_X, and so also we have F_X^'' = F_X G_X^2 + G_X^' and F_X^''' = F_X G_X^3 + 3 G_X G_X^' + G_X^''. Now, if we set H := H^*(T;q), we have d/dq H = H^'/H and d^2dq^2 H = H H^'' - H^'^2H^2, d^3dq^3 H = H^2 H^''' + 2 H^'^3 - 3 H H^' H^''H^3. In the previous notation, we have H = C_T^+ F_X + C_T^- F_-X with X = √(1 - T^2), and so we have H^' = C_T^+ F_X^' + C_T^- F_-X^' = C_T^+ F_X G_X + C_T^- F_-X G_-X, H^'' = C_T^+ F_X^'' + C_T^- F_-X^'' = C_T^+ F_X G_X^2 + G_X^' + C_T^- F_-X G_-X^2 + G_-X^', H^''' = C_T^+ F_X G_X^3 + 3 G_X G_X^' + G_X^'' + C_T^- F_-X G_-X^3 + 3 G_-X G_-X^' + G_-X^''. We now compute asymptotics for the various derivatives of H. We observe that G_± X = ± Xq[ ∑_n ≥ 0 f_± X^+ n + 1/2 w - ∑_n ≥ 0 f_± X^- n+1 w ], where  f_X^± w := e^-2w1 ± X e^-2w. Thus, for w → 0 in D_θ, we have by Corollary <ref> that G_± X = 1w∫_0^∞ f_± X^+(t) dt - ∫_0^∞ f_± X^-(t) dt + O(1) = ± 1 - X^2 2Xw + O(1) and additionally G_± X^' = ∓ 1 - X^2 2Xw^2 + O 1 , G_± X^'' = ± 1 - X^2 Xw^3 + O 1 . 
Therefore, for q = e^-w and w → 0 in any region D_θ, we have G_± X^2 + G_± X^' = O w^-2 and likewise G_± X^3 + 3 G_± X G_± X^' + G_± X^'' = O(w^-3). We therefore obtain the asymptotics H = F_X α_T^+ + O(w) , H^' = F_X T^2 2Xwα_T^- + O(1) , H^'' = F_X T^2 2Xw^2(T^2)2Xα_T^+ - α_T^- + O(1) , H^''' = F_X · O(w^-3). After the substitution q ↦ q^t in H and after accounting for the chain rule, we obtain from these asymptotics and (<ref>) the desired asymptotic identities. § ASYMPTOTICS FOR SC(N;T) In this section, we apply the saddle point method to compute asymptotics for the polynomial values sc_t(n;T) as n →∞ for T in certain ranges. This asymptotic behavior, we will see, determines the distributions in Theorem <ref>. Let t be a positive integer, η∈(0,1], and η≤ T≤η^-1. If b_t(T) := 1/2√(π^2/6 - Li_2(1-T^2)) if 2|t, 1/2√(π^2/6 - t-1/tLi_2(1-T^2)) if 2 | t, then sc_t(n;T)=√(b_t(T)/4π n^3/2)e^b_t(T)(2√(n)-1/√(n)) 1+O_η n^-1/7 as n →∞. The proof will follow from the saddle point method. We have from Cauchy's theorem that sc_t(n;T)=1/2π∫_-π^π(z_0e^ix)^-nF_t(T;z_0e^ix)dx=1/2π∫_-π^πe^f_t(T;z_0e^ix)dx, where f_t(T,z):=(z^-nF_t(T;z)) for 0<|z|<1. In order to apply the saddle point method, we must determine z_0=e^-α for α>0 such that f_t'(T;z_0)=0. Now, from Theorem <ref>, f_t(T;z) is given by -n(z) + -z;z^2 _∞ + t2 1-T^2 z^2t; z^2t_∞ if 2|t, -n(z) + -z;z^2 _∞ + t-12 1-T^2 z^2t; z^2t_∞ + H^*(T;z^t) if 2 | t. We then have for z = e^-α from (<ref>), (<ref>) and (<ref>) that e^-αf_t'(T;e^-α) = -n - π^224 α^2 - Li 1-T^2 4 α^2 + O(1) if 2|t, -n - π^224 α^2 + (t-1)tLi_2 1-T^2 4 α^2 + O(α^-1) if 2 | t. Thus, the saddle point z_0= e^-α is given by[Although we can obtain a better error term for even t, we choose to treat the even and odd cases uniformly since the lower error terms are sufficient for our purposes.] α=b_t(T) n^-1/2 + O_η(n^-3/2). We now estimate f_t(T;z_0), f”_t(T;z_0), and f”'_t(T;z_0). Putting z_0=e^-α in f_t(T;z_0), we get using (<ref>), (<ref>) and (<ref>) that in both cases, we have at the saddle point f_t(T;z_0) = 2 b_t(T) √(n) + O_η n^-1/2. We now consider f_t” and f_t”'. By comparing the contents of Lemma <ref> with Lemma <ref>, particularly (<ref>) and (<ref>), we see that the terms coming from H^*(T;z^t) can only contribute to the error term, as they are smaller by an order of O n^-1/2, and therefore we may ignore these terms. Now, again using Lemma <ref> we obtain f_t”(T;z_0) = e^2b_t(T) n^-1/2 + O_η(n^-3/2)2 n^3/2b_t(T) + O_η(n) and f_t”'(T;z_0) = O_η(n^2). In order to complete the proof, we now let sc_t(n;T)=I+II, where I:=1/2π∫_|x|≤ n^-5/7e^f_t(T;z_o e^ix)dx      and      II:=1/2π∫_|x|> n^-5/7e^f_t(T;z_o e^ix)dx. To estimate I, we use the Taylor expansion of f_t(T;z) centered at the saddle point z_0=e^-α, given by f_t(T;z)=f_t(T;z_0)+f”_t(T;z_0)(z-z_0)^2/2+O_η f”'_t(T;z_0)· (z-z_0)^3. Since |x|≤ n^-5/7, the estimate (<ref>) implies z-z_0=z_0e^ix-z_0=e^-α(ix+O(x^2)) =(1+O_η(n^-1/2))(ix+O_η(n^-10/7)) =ix+O_η(n^-17/14). Hence, combining with (<ref>) we get f_t(T;z)=f_t(T;z_0)-f”_t(T;z_0)/2 x^2 + O_η(n^-1/7). Combining (<ref>), (<ref>), (<ref>), and (<ref>) with classical integral evaluations, we obtain the asymptotic for I, namely I = e^f_t(T;z_0)/2π[∫_-∞^∞e^-f”_t(T;z_0)x^2/2dx-2∫_n^-5/7^∞e^-f”_t(T;z_0)x^2/2dx](1+O_η(n^-1/7)) =√(b_t(T)/4π n^3/2)e^b_t(T)(2√(n)-1/√(n))(1+O_η(n^-1/7)). To estimate the second integral II, we need a uniform estimate of F_t(T;z) when z is away from the saddle point. 
More precisely, we estimate the F_t(T;z_0e^ix)/F_t(T;z_0) using e^f_t(T;z_0 e^-ix) = e^f_t(T;z_0)F_t(T;z_0e^ix)F_t(T;z_0). We set z=z_0e^ix, and we first consider the case of t even. Since T>0, we have |F_t(T;z)/F_t(T;z_0)|^2 ≤∏_m≥ 1Max{1,|1+(T^2-1)z^2m/1+(T^2-1)z_0^2m|^2}|1+z^2m-1/1+z_0^2m-1|^2 ≤∏_m≥ 1 E_m(z_0, T, x) ≤∏_√(n)≤ m≤ 2√(n)E_m(z_0, T, x), where for convenience we define E_m(z_0, T, x) := max{1,(1+2(1-T^2)z_o^2m(1-cos(2xm))/(1-z_0^2m)^2)}(1+2z_0^2m-1(cos(2xm)-1)/(1+z_0^2m-1)^2). Since z_0^√(n)→ e^-b_t(T), we know that z_0^m is bounded below for √(n)≤ m≤ 2√(n). Consequently, the same is true for 2z_0^2m/(1-z_0^2m)^2 and 2z_0^2m-1/(1+z_0^2m-1)^2. We consider two case (i.e. T>1 and T≤ 1) to estimate (<ref>). If T>1 and √(n)≤ m≤ 2√(n) then by (<ref>) we have that 2z_0^2m-1/(1+z_0^2m-1)^2≤ A_η, for some η>0. This implies that |F_t(T;z)/F_t(T;z_0)|^2≤∏_√(n)≤ m≤ 2√(n)(1-A_η(1-cos((2m-1)x))). A short computation also shows that (<ref>) still holds for T≤ 1 by choosing a suitable A_η. We divide the range of x into two parts n^-5/7≤ |x|≤π/2√(n) and π/2√(n)≤ |x|≤π. For the first part, we use the inequality 1-cos((2m-1)x)≥2/π^2((2m-1)x)^2 to estimate (<ref>), to get |F_t(T;z)/F_t(T;z_0)|^2 ≤∏_√(n)≤ m≤ 2√(n) 1-2/π^2A_η((2m-1)x ^2≤ 1-A_η x^2n ^√(n)+O(1) ≤ e^-A_η x^2n^3/2≤ e^-A_η n^1/14. For the second part, we count the m∈[√(n),2√(n)] for which there is an r∈ with -n^-1/12+2rπ≤ xm≤ n^-1/12+2rπ. The total number of such m is ≫ n^1/2+O(n^5/12). Hence, we obtain |F_t(T;z)/F_t(T;z_0)|^2 ≤(1-A_η(1-cos(n^-1/12)))^n^1/2+O(n^5/12) = 1-A_η n^-1/6+O(n^-1/3)^n^1/2+O(n^5/12)≪ n^-A_η n^1/14. By combining (<ref>) and (<ref>), we get the upper bound for the integral II, namely II ≪1/2π∫_|x|>n^-5/7e^f_t(T;z_0)F_t(T;z_0e^ix)/F_t(T;z_0)dx ≪_η e^-b_t(T)/√(n)-A_η n^1/14. Since sc_t(n;T)=I+II, the proposition for t even follows using (<ref>) and (<ref>). It now remains to perform the same calculation for the case t odd. Observe that for the first two terms appearing in F_t(T;z) for t odd the calculations are exactly analogous, and therefore we have | F_t(T;z)F_t(T;z_0)|^2 ≪ e^-A_η n^1/14·| H^*(T;z^t)H^*(T;z_0^t)|^2. As before, we bound in the region z = z_0 e^ix for n^-5/7≤ |x| ≤π. It is now necessary to bound the quotient of H^* functions above. We will derive such bounds using a slightly unusual application of Lemma <ref>. Recall from the proof of Lemma <ref> that for X ∈\ [1,∞) and z = e^-w, we have as w → 0 in D_θ that X; -z _∞ = 1-X + ∑_m ≥ 0 g_X^- (m+1) w + ∑_m ≥ 0 g_X^+ m + 12 w , where g_X^±(w) := 1 ± X e^-2w. The argument of Lemma <ref> is then used to give good bounds for H^*(T;z) for a small arc near z=1 after setting X = ±√(1-T^2). We now observe that we can also obtain good asymptotics for H^* near any root of unity ζ_k^h := e^2π i h/k by shifting e^-w↦ e^-w + 2π i h/k. Following the same elementary series arguments as in previous lemmas, we can write for z = e^-w+2π i h/k that X;-z _∞ = 1 - X + ∑_n ≥ 0 1 + X ζ_k^-h(2n+1) e^-(2n+1)w + ∑_n ≥ 0 1 - X ζ_k^-h(2n+2) e^-(2n+2)w. After separating the occurrences of each kth root of unity, X;-z _∞ = 1 - X + ∑_j = 0^k-1∑_m ≥ 0 g^-_X,h/k,j m + j+1/k kw + ∑_j=0^k-1∑_m ≥ 0 g^+_X, h/k, j m + j+1/k kw , where we define g^±_X,h/k,j(w) := 1 ± X ζ_k^-2(j+1)h e^-2w. Now, for each g^±_X,h/k,j function, we have the integral ∫_0^∞ g^±_X,h/k,j(x) dx = - Li_2 X ζ_k^-2(j+1)h, and therefore we have as z →ζ_k^h by Lemma <ref> that X; -z _∞ = - 12kw∑_j=0^k-1Li_2 X ζ_k^-2(j+1)h - 12kw∑_j=0^k-1Li_2 -X ζ_k^-2(j+1)h + O(1). 
For odd values of k, note that ζ_k^-2(j+1)h runs through each kth root of unity as j runs from 0 to k-1. Therefore, by application of (<ref>) we have for odd values of k that ∑_j=0^k-1Li_2 X ζ_k^-2(j+1)h = 1kLi_2 X^k , ∑_j=0^k-1Li_2 -X ζ_k^2(j+1)h = 1kLi_2 -X^k . Then by application of (<ref>), we finally obtain X;-z _∞ = - Li_2 X^k + Li_2 -X^k 2k^2 w + O(1) = - Li_2 X^2k4k^2 w + O(1). Applying this calculation to the definition of H^*(T;z) given in (<ref>) and the definition of C_T^±, we see that as z →ζ_k^h for any root of unity with odd order, we have H^*(T;z) = - C_T^+2TLi_2 (1-T^2)^k 4k^2 w - C_T^-2TLi_2 (1-T^2)^k 4k^2 w + O(1) = - Li_2 (1-T^2)^k 4k^2 T w + O(1). Because the odd order roots of unity are dense on the unit disk and because the regions D_θ are open intervals on the disk of radius |z| = e^-α_0, we see that max_k ≥ 1 k odd[ - Li_2 1-T^2 ^k 4k^2 T] 1w + O(1) gives an asymptotic upper bound on the size of H^*(T;z) on the whole disk of radius |z|. For η≤ T ≤η^-1 the function 1-T^2 ^k is decreasing as a function of k (since k is odd), and by (<ref>) we see that Li_2 x is an increasing function of x on (-∞,1), and therefore Li_2 (1-T^2)^k is a decreasing function of T on η≤ T ≤η^-1. Because Li_2 x → - ∞ as x → 1^-, both numerator and denominator are optimized at k = 1, and so max_k ≥ 1 k odd[ - Li_2 1-T^2 ^k 4k^2 T] = - Li_2 1-T^2 4T, Because we have shown that the maximal order of H^*(T;z) is achieved in the region near z=1, it follows that | H^*(T;z^t)H^*(T;z_0^t)|^2 ≪ 1, and then by (<ref>) the desired result is complete for t odd. § PROOF OF THEOREM <REF> §.§ Finding Mean and Variance Before beginning the proof of Theorem <ref>, it is important to know the mean and variance of the distributions we consider. In particular, we show the following: The random variable N_t(n) on the space 𝒮𝒞(n) has mean μ_t(n) ∼√(6n)/π-t/2+3/π^2 and variance σ_t(n)^2 ∼(π^2-6)√(6n)/π^3+3t/π^2-t^2/4-279/16π^4 as n →∞. The proof of Theorem <ref> itself will imply Proposition <ref>. For convenience, we give a sketch here of another method for calculating these values which is applicable and straightforward (i.e. requires no guesswork or a priori knowledge of the solution) even if the overall distribution is unknown. We use the standard notation 𝔼sc_t n, ∙ and 𝕍sc_t n, ∙ to denote the mean and variance of sc_t n, m as m varies. Recall that ∑_n ≥ 0 sc(n) 𝔼 sc_t(n,∙) q^n = ∂ F_t(T;q)∂ T|_T=1 and likewise, using the identity 𝕍(X) = 𝔼(X^2) - 𝔼(X)^2 for any random variable X, we have ∑_n ≥ 0 sc(n) 𝕍 sc_t(n,∙) q^n = [ T ∂∂ T^2 F_t(T;q) - T ∂∂ T F_t(T;q) ^2 ] |_T=1 Therefore, the asymptotic growth of 𝔼 sc_t(n,∙) and 𝕍 sc_t(n,∙) as n →∞ can be calculated from the growth of sc(n) as well as the growth of the Fourier coefficients of T-derivatives of F_t(T;q). Although there are a variety of cases to consider in our application, the basic idea is the same in all cases. One can compute directly formulas for ∂^j F_t(T;q)/∂ T^j|_T=1 as a q-series using elementary methods[To deal with the required formulas relating to H^*(T;q), it is easiest to use the q-hypergeometric representations found in (<ref>).]. These representations all take the form ∂^j F_t(T;q)∂ T^j|_T=1 = R_j,t(q) -q;q^2 _∞ for some rational functions R_j,t(q). One can then derive asymptotic expansions (for q = e^-w and w → 0 in relevant regions, see Lemma <ref>) for -q;q^2 _∞∼expπ^2/24w + O(w) using Lemma <ref> and for R_j,t(q) using Laurent expansions. 
After verifying certain “minor arc" conditions, which will be automatic because -q;q^2 _∞ is modular (see for instance <cit.>), the desired formulas follow from a standard application of Wright's circle method (see for example <cit.>). §.§ Proof of Theorem <ref> We recall the method of moments, as formulated in the following classical theorem of Curtiss. Let { X_n } be a sequence of real random variables, and define the corresponding moment generating function M_X_n(r) := ∫_-∞^∞ e^rx dF_n(x), where F_n(x) is the cumulative distribution function associated with X_n. If the sequence { M_X_n(r) } converges pointwise on a neighborhood of r=0, then { X_n } converges in distribution. The proof of Theorem <ref> follows from Theorem <ref> along with the theory of normal distributions. In particular, let sc_t(n,m) be the number of self-conjugate partitions of n having exactly m hooks of length t (i.e. the coefficient on T^m in sc_t(n;T)), and consider the rth power moments M_tN_t(n); r := 1sc(n)∑_m ≥ 0 sc_t(n,m) e^ m - μ_t(n) r/σ_t(n). By Theorem <ref> and the theory of normal distributions, we need only prove that lim_n →∞ MN_t(n); r = e^r^2/2. It is straightforward to see by the definition of the generating function F_t(T;q) that MN_t(n); r = F_t 1;e^r/σ_t(n)sc(n) e^-μ_t(n)/σ_t(n). By Proposition <ref> with the evaluations T = 1 and T = e^r/σ_t(n), we see that MN_t(n); r = √(b_t e^r/σ_t(n)b_t(1)) e^ 2√(n) - 1/√(n) b_t e^r/σ_t(n) - b_t(1) - μ_t(n)/σ_t(n) 1 + O_η n^-1/7. Since e^r/σ_t(n) > 0 and approaches 1 as n →∞, we can remove the dependence on η from the implied constant. By a direct calculation of the dilogarithm function, we see that b_t(1) = π/2√(6) and b_t e^r/σ_t(n) = π2√(6) + √(3/2) xπ + √(3/2)π^2 - 6 x^22π^3 + O(x^3) 2|t, π2√(6) + √(3/2) (t-1) xπ t + √(3/2) (t-1)π^2 - 6 t + 6 x^22π^3 t^2 + O(x^3) 2 | t. Therefore, we conclude quickly from the construction of μ_t(n) and σ_t(n) that MN_t(n); r = e^r^2/2 + o_r(1) 1 + O_r n^-1/7. By taking n →∞, Theorem <ref> follows. 99 AAOS T. Amdeberhan, G.E. Andrews, K. Ono, and A. Singh, Hook Lengths in Self-Conjugate Partitions. Proceedings of the American Mathematical Society, To appear. AS A. Ayyer and S. Sinha, The size of t-cores and hook lengths of random cells in random partitions. Ann. Appl. Probab., 33 (1): 85–106, 2023. BBCFW C. Ballantine, H. Burson, W. Craig, A. Folsom, and B. Wen, Hook length biases and general linear partition inequalities. Res. Math. Sci. 10, 41 (2023). BCOM K. Bringmann, W. Craig, J. Males, and K. Ono, Distributions on partitions arising from Hilbert schemes and hook lengths. Forum of Mathematics, Sigma, 10:e49, 2022. BJM K. Bringmann, C. Jennings-Shaffer, and K. Mahlburg, On a Tauberian Theorem of Ingham and Euler–Maclaurin summation. Ramanujan J. 61, 55–86 (2023). BM K. Bringmann and K. Mahlburg, Asymptotic inequalities for positive crank and rank moments. Trans. Am. Math. Soc. 366 (2) (2014) 1073–1094. CDH W. Craig, M. L. Dawsey, and G.-N. Han, Inequalities and asymptotics for hook numbers in restricted partitions. Preprint, arXiv:2311.15013. Curtiss J. Curtiss, A note on the theory of moment generating functions. Ann. Math. Statist. 13 (1942), 430–433. GKS F. Garvan, D. Kim, and D. Stanton, Cranks and t-cores. Invent. math. 101 1, 1–18 (1990). GOT M. Griffin, K. Ono, and W.-L. Tsai, Distributions of Hook Lengths in Integer Partitions. Proceedings of the American Mathematical Society, Series B, In Press. GORT M. Griffin, K. Ono, L. Rolen and W-L. Tsai, Limiting Betti distributions of Hilbert schemes on n points. 
Canadian Mathematical Bulletin, 66 (1), 243–258. Han G.-H.  Han, The Nekrasov-Okounkov hook length formula: refinement, elementary proof, extension and applications. Annales de l'Institut Fourier, Volume 60 (2010) no. 1, 1–29. NekOk N. A. Nekrasov and A. Okounkov, Seiberg-Witten theory and random partitions. In: The unity of mathematics, Prog. Math., Birkhäuser Boston, 2006, vol. 244, 525–596. NR H.T. Ngo and R. Rhoades, Integer partitions, probabilities and quantum modular forms. Res. Math. Sci. 4, 17 (2017). OnoSze K. Ono and L. Sze, 4-core partitions and class numbers. Acta Arith. 65 (1997), 249–272. Westbury B. W. Westbury, Universal characters from the Macdonald identities, Adv. Math. 202 (2006), 50-63. Zag D. Zagier, The Mellin transfom and related analytic techniques. Appendix to E. Zeidler, Quantum Field Theory I: Basics in Mathematics and Physics. A Bridge Between Mathematicians and Physicists, Springer-Verlag, Berlin-Heidelberg-New York (2006), 305–323. ZagDilog D. Zagier, The Dilogarithm Function. In: Cartier, P., Moussa, P., Julia, B., Vanhove, P. (eds) Frontiers in Number Theory, Physics, and Geometry II. Springer, Berlin, Heidelberg.
http://arxiv.org/abs/2406.08977v1
20240613101629
Signature of non-trivial band topology in Shubnikov--de Haas oscillations
[ "Denis R. Candido", "Sigurdur I. Erlingsson", "João Vitor I. Costa", "J. Carlos Egues" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
Department of Physics and Astronomy, University of Iowa, Iowa City, Iowa 52242, USA Department of Engineering, Reykjavik University, Menntavegi 1, IS-102 Reykjavik, Iceland Instituto de Física de São Carlos, Universidade de São Paulo, 13560-970 São Carlos, SP, Brazil Instituto de Física de São Carlos, Universidade de São Paulo, 13560-970 São Carlos, SP, Brazil Department of Physics, University of Basel, CH-4056, Basel, Switzerland § ABSTRACT We investigate the Shubnikov-de Haas (SdH) magneto-oscillations in the resistivity of two-dimensional topological insulators (TIs). Within the Bernevig-Hughes-Zhang (BHZ) model for TIs in the presence of a quantizing magnetic field, we obtain analytical expressions for the SdH oscillations by combining a semiclassical approach for the resistivity and a trace formula for the density of states. We show that when the non-trivial topology is produced by inverted bands with “Mexican-hat” shape, SdH oscillations show an anomalous beating pattern that is solely due to the non-trivial topology of the system. These beatings are robust against, and distinct from beatings originating from spin-orbit interactions. This provides a direct way to experimentally probe the non-trivial topology of 2D TIs entirely from a bulk measurement. Furthermore, the Fourier transform of the SdH oscillations as a function of the Fermi energy and quantum capacitance models allows for extracting both the topological gap and gap at zero momentum. Signature of non-trivial band topology in Shubnikov–de Haas oscillations J. Carlos Egues June 17, 2024 ========================================================================= Introduction. — Topological Insulators (TIs) are materials that behave as gapped insulators in bulk whereas also hosting metallic (gapless) topological helical states localized at their edges in 2D TIs <cit.> or surfaces in 3D TIs <cit.>. For that reason, attention to topological materials has been mainly focused on edge- and surface-like phenomena. For instance, the corresponding experimental confirmation of TIs are usually performed via edge- or surface-related effects, e.g., the quantized conductivity for 2D TIs <cit.>, and angle-resolved photoemission spectroscopy for 3D TIs <cit.>. Despite these successful realizations, there are still open problems, e.g., the quantization of the resistivity is not always observed <cit.>. This thus requires the development of alternative methods to probe the presence of topological bands, e.g., via the investigation of bulk properties of the material. Here we first develop an analytical theory describing Shubnikov-de Haas oscillations (SdH) <cit.> in the magnetoresistivity of bulk 2D TIs. Using this theory, we show that topologically non-trivial bands with “Mexican-hat” band structure present an anomalous SdH-beating pattern not found in trivial systems, e.g., InAs quantum well (QW), Fig. <ref>. These beatings are present for a broad Fermi energy (ε_F) range in which ε_F is simultaneously intersecting both electron- and hole-like bands – a sole characteristic of bulk insulators with band inversion. The combination of quantum capacitance models <cit.> and our SdH theory allow for the experimental determination of the Hamiltonian parameters, including both the topological gap and gap at 𝐤=0. This is done via the extraction of the frequencies defining the corresponding SdH-oscillations as a function of the Fermi level. 
We show that these anomalous beatings are fully distinguishable from the spin-orbit beatings due to Rashba and Dresselhaus spin-orbit coupling in 2D TIs. Our approach thus allows for a novel way of identifying band inversion characterizing topological insulators solely via a bulk measurement. Our method can be applied to different TIs with Mexican hat band structure, including InAs/GaSb QWs <cit.>, strained-InSe <cit.>, few layers of GaS and GaSe <cit.>, 1T'-WTe_2 monolayers <cit.>, patterned InAs double QWs <cit.> and Na_2XY trilayers <cit.>. Hamiltonian.— We use the Bernevig-Hughes-Zhang (BHZ) model <cit.> to obtain the energy dispersion of the lowest conduction (E_1) and the highest valence (HH_1) of 2D TIs, e.g., type-I HgTe/CdTe QW <cit.>, type-II InAs/GaSb QW <cit.> H(𝐤)=([ h(𝐤) H_SO(𝐤); H_SO^*(𝐤) h^*(-𝐤) ]), with h_(𝐤)=-D𝐤^21_2×2+𝐝_(𝐤)·τ, 𝐝_(𝐤)=( Ak_x, -Ak_y, M-ℬ𝐤^2), 𝐤 is the in-plane wave vector, τ the Pauli matrices describing the pseudospin space, H_SO the Dresselhaus (bulk inversion asymmetry or BIA) and Rashba (structure inversion asymmetry asymetry, or SIA) spin-orbit Hamiltonians, and A, B,D,M the effective QW 𝐤·𝐩 parameters. However, for D=-ħ^2/2m^*, A = α and B=M=H_SO(𝐤)=0, this Hamiltonian also describes the energy dispersion of a two-dimensional electron gas (2DEG) with effective mass m^* and spin-orbit Rashba parameter α  [Supplemental Material <cit.>]. The energy dispersions for InAs/GaSb and InAs QWs are plotted for the parameters of Table <ref> in Fig. <ref>(a) and (b), respectively, with the corresponding color map representing the contribution of E_1 and HH_1 subbands and spin value. In Fig. <ref>(a), the InAs/GaSb spectrum contains a band inversion characterized by the higher energy of HH_1-subband with respect to E_1 at k=0. Due to the type-II InAs/GaSb QW structure, the overlap between E_1 and HH_1 envelope functions is small, resulting in a small gap opening via parameter A. The corresponding shape of the spectrum is often described as “Mexican-hat” and it is a characteristic of topologically non-trivial systems. The condition for this Mexican-hat regime in the conduction (valence) band is given by 2[ B+sign( B)D]M>A^2 (2[ B-sign( B)D]M>A^2) with M B>0 and | B|>|D| (see Supplemental Material <cit.>). In the presence of a perpendicular magnetic field 𝐁=B_0ẑ, using the Landau gauge with vector potential A=B_0(-y,0,0), the corresponding BHZ Hamiltonian [Eq. (<ref>)] is obtained via the minimum coupling 𝐩→Π with Π=p-qA, and 𝐩=1/(iħ)∇ <cit.>. It is convenient to introduce the creation and annihilation bosonic operators, a^†=ℓ_c(Π_x+iΠ_y)/√(2)ħ and a=ℓ_c(Π_x-iΠ_y)/√(2)ħ, respectively, with [a,a^†]=1, a|n⟩ =√(n)|n-1⟩, and a^†|n⟩ =√(n+1)|n+1⟩. In leading order in the spin-orbit terms, the Hamiltonian reads H=[[ (ħω_1+ħω_2)(a^†a+1/2)+M ħη a^† ħα a -Δ_BIA; ħη a (ħω_2-ħω_1)(a^†a+1/2)-M Δ_BIA 0; ħα a^† Δ_BIA (ħω_1+ħω_2)(a^†a+1/2)+M -η a; -Δ_BIA 0 -η a^† (ħω_2-ħω_1)(a^†a+1/2)-M ]], with ħω_1=-2B/ℓ_c^2, ħω_2=-2D/ℓ_c^2, ħη=A√(2)/ℓ_c, ħα=α_e√(2)/ℓ_c and ℓ_c=√(ħ/|eB|). In the absence of spin-orbit terms (i.e., α=Δ_BIA=0), Hamiltonian Eq. (<ref>) has an analytical Landau level (LL) structure <cit.> ε_n,τ^σ=± =1/2[±ħω_1+2nħω_2. +.τ√((ħω_2±2nħω_1±2M)^2+4n(ħη)^2)], with σ=± (τ=±) representing the spin (pseudospin) subspace [Despite the common claim affirming that each BHZ block corresponds to different spin, we emphasize this is not the case as the E_1 subbands contain a mix between conduction band up and light hole bands down <cit.>.], n∈ℕ_0 and ε_n=0^σ=±= ± M+(ħω_2±ħω_1)/2. 
In the right axis of Figs. <ref>(a) and (b) we plot the LLs energies [Eq. (<ref>)] as a function of B for both InAs/GaSb and InAs QWs [For InAs, the LL energy expresion becomes ε_n,σ=ħω_2n+σ/2√((ħω_2)^2+4(ħη)^2n) with σ=± representing the different spins (see Supplemental Material <cit.>).], respectively. While all the LLs for the InAs QW present a monotonic linear dependence on B, LLs for InAs/GaSb do not. It is interesting to note, however, that far from the gap region, both E_1-like LLs (blue color) and HH_1-like LLs (red color) present a linear monotonic dependence on B, with slopes with opposite signs. This happens since for |ε|≫|M| we have effectively decoupled the electron and hole bands (i.e., η∝ A= 0) with positive and negative effective masses, m_e=-ħ^2/(2D+2ℬ) and m_h=-ħ^2/(2D-2ℬ), respectively. For these regions, their energies read ε_n,E_1^σ=±≡ε_n,τ=-^σ=±≈ M + ħ |ω_e|(n+1/2) and ε_n,HH_1^σ=±≡ε_n,τ=+^σ=±≈ -M - ħ |ω_h|(n+1/2), with ω_e=eB/m_e=ω_1+ω_2 and ω_h=eB/m_h=ω_2-ω_1 [see dashed lines in Fig. <ref>(a)]. Conversely, closer to the gap region, E_1- and HH_1-like LLs interact strongly with each other via the off-diagonal η term, resulting in an anti-crossing of these LLs, similarly to the one obtained for the bulk bands at B=0. As we will present next, when the Fermi energy is simultaneously crossing electron- and hole-like bands, an anomalous SdH-oscillation appears. Shubnikov-de Haas oscillations in 2D TIs. Magneto-oscillations in the longitudinal resistivity ϱ_xx(B) are called SdH oscillations <cit.>. They arise due to the sequential crossing between the Fermi energy and the LLs of the system, which yields a corresponding depopulation of the LLs as the magnetic field is increased. The corresponding rate at which these crossings happen dictates the periodicity of the SdH oscillations with respect to 1/B <cit.>. It is well-known that for spin-orbit coupled 2DEGs (e.g., InAs QWs), each split band [see Fig. <ref>(b)] gives rise to SdH oscillations with similar frequencies. The sum of these oscillations, in turn, produces SdH oscillations with beatings, shown in Fig. <ref>(d) <cit.>. More quantitatively, SdH oscillations in the resistivity ϱ_xx ϱ_xx(B) can be obtained using Drude's semi-classical equations accounting for the magnetic field dependence of the electron scattering time (τ) via Fermi's golden rule <cit.>, whereas a more rigorous formalism can be found in Ref. <cit.>. Then, ϱ_xx(B)=m^*/ne^2τ(B) with 1/τ(B)∝ρ(ε_F,B), where ρ(ε,B) is the density of states (DOS) (per spin and area) of our 2DEG, i.e., ρ(ε,B)=D̃/A∑_n,τ,σδ(ε-ε_n,τ^σ), with n∈ℕ_0 and LL degeneracy D̃=A/2πℓ_c^2. To lowest order in the deviations of the DOS δρ(ε,B), the scattering time in the presence of a finite magnetic field is τ(B)≈τ_0[1-δρ(ε,B)/ρ_0], with δρ(ε,B)=ρ(ε,B)-ρ_0 <cit.>, τ_0 and ρ_0 the scattering time and DOS at B=0, respectively. The resistivity then becomes dependent on the magnetic field ϱ_xx(B)=ϱ_xx^0[1+2δρ(ε,B)/ρ_0] <cit.> where ϱ_xx^0 is the Drude's conductivity at B=0. To obtain the analytical formula for ϱ_xx(B), we will use the formalism employed in Refs. <cit.>, which makes use of the Poisson's summation formula <cit.>. For a Hamiltonian with corresponding pseudo-spin τ and spin index σ, the normalized magnetoresistivity δϱ_xx(B)=[ϱ_xx(B)-ϱ_xx^0]/ϱ_xx^0 at T=0 K reads <cit.> δϱ_xx(B)=2∑_l,σ,τ^.e^-π lΓ |dF_τ^σ/dε| cos[2π lF_τ^σ(ε)]|_ε=ε_F, with l∈ℕ representing the different harmonics, Γ the LL broadening, and the F-functions defined by ε_n,τ^σ=ε↔ n=F_τ^σ(ε), with ε_n,τ^σ [Eq. 
(<ref>)] determined from the BHZ Hamiltonian. Thus, to calculate the magneto-resistivity we have to first find F_τ^σ(ε) by inverting the LL expression. In the Supplemental Material <cit.> we present the (lengthy) analytical formula for F_τ^σ(ε). We start by discussing SdH oscillations for a InAs 2DEG with parameters in Table <ref>. Due to the dominance of the quadratic term in 𝐤 over the linear term, Eq. (<ref>) yields SdH-oscillations due to two cosines with similar F-fuctions F_+^σ(ε_F) and F_-^σ(ε_F) (Supplemental Material <cit.>). Consequently, the total resistivity is a sum of oscillations with similar frequencies, thus presenting a beating pattern, shown in Fig. <ref>(d). The faster frequency is proportional to the 2DEG effective mass m^* and ε_F via, F_+^σ(ε_F)+F_-^σ(ε_F) = ε_F m^*/eħ1/B, while the slower one (defining the beatings) is F_+^σ(ε_F)-F_-^σ(ε_F)∝α/B <cit.>, with Rashba spin-orbit parameter α. The Fourier transform of the corresponding SdH-oscillation is plotted in the inset of Fig. <ref>(d), where we observe similar SdH frequencies around ∼ 10 T. Surprisingly, InAs/GaSb QW in the topological regime presents SdH oscillations with a completely different pattern, shown in Fig. <ref>(c). Its Fourier transform shows that instead of similar frequencies yielding a beating pattern, this system contains two frequencies with very different values, ∼ 3 T and ∼ 23 T [see inset Fig. <ref>(c)]. This happens due to the current being carried by electron- and hole-like bands with contrasting Fermi areas. As we explain below, this arises solely due to the non-trivial topology of the system emerging from the band inversion between E_1 and HH_1 subbands. In Fig. <ref>(b) we plot δϱ_xx(B) versus 1/B for different Fermi energy values (right y-axis). The anomalous beatings (green curves) are present for a wide range of ε_F where E_1- and HH_1-like bands coexist with the same energy, i.e., |Δ| ≲ |ε_F| ≲ |M|. As already discussed, far from the anti-crossing region between E_1 and HH_1 (|ε_F|≫|M|), the eigenenergies [Eq. (<ref>)] can be described by LLs of decoupled E_1-electron and HH_1-hole gases, with F-functions F_E_1 (ε_F) ≡ F_τ=-^σ(ε_F)≈ (ε_F-M)/ħω_e -1/2 and F_HH_1(ε_F)≡ F_τ=+^σ (ε_F)≈ - (ε_F+M)/ħω_h +1/2, respectively, and corresponding SdH frequencies f_SdH^E_1= ε_F m_e/ħ e and f_SdH^HH_1=-ε_F |m_h|/ħ e, plotted as blue and red dashed lines in Fig. <ref>(c). In contrast to the InAs case, these frequencies are now completely distinct from each other, which is a consequence of the striking difference in the electron and hole effective masses of InAs/GaSb QW. Furthermore, due to the different signs of the m_e and m_h effective masses, these frequencies possess opposite dependence on ε_F, i.e., while f_SdH^E_1 increases with ε_F, f_SdH^HH_1 decreases. As a consequence, increasing |ε_F| yields an increase in the frequency separation [see Fig. <ref>(c)]. Interestingly, f_SdH^E_1→0 (f_SdH^HH_1→0) for ε_F → M (ε_F → -M), which is follows from the absence of HH_1-like (E_1-like) states for ε_F>-M (ε_F< M). Therefore, we only have one frequency for |ε_F| > |M|, and beating is absent [see gray area in Fig. <ref>(b)]. Around ε_F≈±Δ the frequencies become comparable and we observe the usual beating pattern of a 2DEG with spin-orbit coupling [see black curve in Fig. <ref>(b)]. Finally, the F-functions do not exist within the gap -|Δ|≲ε_F ≲ |Δ|. Experimental realization. 
— Here we demonstrate how the experimental study of the SdH oscillations versus ε_F in TIs with Mexican-hat shaped bands allows for the reconstruction of their bulk bands, and the corresponding confirmation of their non-trivial topology. A top gate V_g applied to the system can control the Fermi level, with the quantum capacitance formula permitting us to translate voltage values to the corresponding Fermi level, i.e., ε_F=ε_F(V_g) <cit.>. First, by increasing the voltage such that ε_F≫|M|, we will obtain magneto-oscillations solely due to E_1-states with corresponding frequency f_SdH^E_1. Performing the FFT of these oscillations permits us to extract the effective mass of the E_1-states as f_SdH^E_1=ε_F m_e/ħ e. The same analysis can be done for negative V_g such that ε_F≪-|M|, and the effective mass of HH_1-states can also be obtained. As we diminish ε_F via V_g, the oscillations display an additional frequency once ε_F≲-M is reached, thus allowing us to obtain the parameter M defining the gap at k=0. Both frequencies will vanish for ε_F→|Δ|, and we can extract the value of the bulk topological gap 2Δ. With all these parameters we are able to fully reconstruct the TI band structure and its corresponding topology. A few works have already performed measurements in similar systems <cit.>. However, here we provide a systematic way of probing the non-trivial band topology and extracting the corresponding effective parameters of the 2D TI. Robustness against spin-orbit coupling. — To further test the robustness of the anomalous beatings as evidence of the non-trivial topology of the system, we study SdH oscillations in the presence of different spin-orbit couplings Δ_BIA and α [See Eq. (<ref>)]. The presence of the Δ_BIA term breaks the spin degeneracy of both conduction and valence bands, plotted in Fig. <ref>(a) for Δ_BIA=2 meV. The corresponding SdH oscillations and their Fourier transforms are plotted in Fig. <ref>(b) and (c) for ε_F=29 meV and ε_F=-5 meV, respectively. Compared to the case with Δ_BIA=0, shown in Fig. <ref>(c), here we obtain a slightly different beating pattern. Nevertheless, the corresponding FFT plot shows that Δ_BIA does not shift the main frequency peaks centered at ∼3 T and ∼23 T, but rather splits them, similarly to the case of Rashba SO in a 2DEG, shown in Fig. <ref>(d). In Fig. <ref>(d) we present results for the SIA term α_e=10 meV.nm, which produces similar spin-split features. However, for ε_F=29 meV (ε_F=-5 meV) this term splits mainly the higher (lower) SdH frequency peak, as α_e couples different spin components of the E_1-subbands, i.e., the band responsible for oscillations with higher (lower) frequency for ε_F=29 meV (ε_F=-5 meV). In short, the results summarized in Fig. <ref> show that the presence of different spin-orbit couplings does not alter the main features of the anomalous SdH-oscillations and their frequencies, thus proving that the non-trivial band topology of the system can be inferred from SdH bulk transport measurements. Conclusion. — We have developed an analytical formalism to describe the SdH-oscillations of 2D TIs with Mexican hat band structure. Their SdH-oscillations contain anomalous beatings that are very distinct from the ones found in ordinary trivial semiconductors in the presence of spin-orbit coupling. These beatings originate from two contrasting frequencies that arise from the presence of overlapping electron- and hole-like Fermi surfaces, a unique characteristic of inverted band insulators with Mexican hat band structure.
Finally, we show that an analysis of the frequencies versus the Fermi energy allows for a straightforward extraction of the BHZ Hamiltonian parameters, including the topological gap. Our work thus establishes an alternative method – based solely on bulk properties – for probing the non-trivial topology of 2D TIs. Acknowledgments.— DRC acknowledges funding from the University of Iowa Fund. SIE was supported by the Reykjavik University Research Fund. JCE acknowledges funding from the National Council for Scientific and Technological Development (CNPq) Grant No. 301595/2022-4 and São Paulo Research Foundation (FAPESP), Grant 2020/00841-9.
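As a purely illustrative numerical companion to the analysis above, the sketch below superposes damped cosine terms of the form entering δϱ_xx(B): once with the two nearly equal frequencies (around ∼10 T) reported for the Rashba-split InAs 2DEG, and once with the contrasting frequencies (∼3 T and ∼23 T) reported for the inverted InAs/GaSb QW. The exact frequency splitting of the trivial case and the damping constant are placeholders, and the exponential envelope is a generic Dingle-type stand-in rather than the full Γ|dF/dε| factor.

```python
import numpy as np

def sdh_trace(inv_B, freqs, gamma=0.05):
    """Superpose damped SdH-type oscillations: sum_i exp(-pi*gamma*f_i/B) * cos(2*pi*f_i/B)."""
    return sum(np.exp(-np.pi * gamma * f * inv_B) * np.cos(2 * np.pi * f * inv_B)
               for f in freqs)

inv_B = np.linspace(0.2, 2.0, 4000)          # 1/B axis in 1/T
trivial = sdh_trace(inv_B, [9.5, 10.5])      # Rashba-split 2DEG: two similar frequencies
inverted = sdh_trace(inv_B, [3.0, 23.0])     # inverted InAs/GaSb bands: contrasting frequencies
# 'trivial' shows the usual slow beating envelope; 'inverted' shows the anomalous
# two-frequency pattern whose FFT exhibits two well-separated peaks.
```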
http://arxiv.org/abs/2406.08682v1
20240612224945
FIP-GNN: Graph neural networks for scalable prediction of grain-level fatigue indicator parameters
[ "Gyu-Jang Sim", "Myoung-Gyu Lee", "Marat I. Latypov" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
[snu-mse] Department of Materials Science and Engineering & RIAM, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea [az-mse]Department of Materials Science and Engineering, University of Arizona, Tucson, AZ 85721, USA [az-am]Graduate Interdisciplinary Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721, USA snu-mse]Gyu-Jang Sim snu-mse]Myoung-Gyu Lee az-mse,az-am]Marat I. Latypovcor1 [cor1]corresponding author latmarat@arizona.edu § ABSTRACT High-cycle fatigue is a critical performance metric of structural alloys for many applications. The high cost, time, and labor involved in experimental fatigue testing call for efficient and accurate computer models of fatigue life. We present graph neural networks for polycrystals that, for the first time, can (i) predict fatigue indicator parameters – grain-level responses to cyclic loading well beyond monotonic elastic and inelastic regimes reported in literature; and (ii) generalize these predictions to large microstructure volume elements with grain populations well beyond those used in training. These advances can make significant contributions to statistically rigorous and computationally efficient modeling of high-cycle fatigue – a long-standing challenge in the field. Graph neural networks High-cycle fatigue Fatigue indicator parameters Surrogate models Microstructure Understanding and predicting fatigue are crucial in the design and qualification of advanced structural alloys. Determination of the lifespan of alloys in high cycle fatigue (HCF) is especially challenging due to its high variability <cit.> and the need in time-consuming mechanical tests to a large number of cycles required for crack initiation. The cost, time, and labor involved in experimental HCF testing highlights the need in efficient and accurate computer models. One of the approaches to modeling HCF aims to compute the driving force for crack initiation because it is the crack initiation, rather than growth, accounts for most of the HCF life in polycrystalline alloys <cit.>. The driving force is typically quantified by fatigue indicator parameter (FIP) calculated in microstructure-based full-field simulations, e.g., with the crystal plasticity finite element method (CPFE) method <cit.>. Calculation of FIPs most predictive of crack initiation observed in experiments has been the subject of intensive research <cit.>, and multiple FIPs have been introduced to date <cit.>. The Fatemi–Socie FIP is one of the most widely used FIPs because it has relatively small mesh sensitivity <cit.> and does not require explicit implementation of the crack tip, allowing the FIP calculation within purely continuum CPFE simulations. Utilizing the Fatemi–Socie FIP as a metric of spatially resolved driving force for crack initiation, Przybyla et al. <cit.> developed a methodology of evaluating and ranking microstructures in terms of their resistance to HCF. Their approach is based on statistical analysis of FIP values in polycrystals represented by microstructure volume elements (MVEs). A challenge in the methodology is that statistical analysis of most interest for the HCF resistance focuses on the extreme value (EV) distributions that require MVEs prohibitively large for CPFE simulations. 
For example, for an Al 7075-T6 alloy, MVEs containing even 10^6 grains were still reported insufficiently representative for FIP EV distributions, which is already impractically expensive – a CPFE simulation on a single MVE with only 1000 grains requires more than 100 CPU hours <cit.>. Given the high computational cost of the CPFE method and the need in large populations of grains for EV distributions, researchers adopted the concept of statistical volume elements (SVEs), where multiple MVEs are used for simulations to represent one microstructure in lieu of a single large MVE <cit.>. With this approach, EV distributions of FIPs can be obtained from a large number of grains in an SVE of a microstructure of interest, which, however, still comes at a high computational cost associated with CPFE simulations on multiple MVEs. The high computational cost of the CPFE method for statistically rigorous modeling of HCF emphasizes the need in new efficient approaches. Machine learning is one of the routes towards reducing the computational cost of modeling microstructure–property relationships in materials. A machine learning strategy extensively explored in literature is the development of surrogate models that can reproduce the results of computationally demanding direct numerical simulations such as CPFEM at a fraction of their computational cost. Quite a few surrogate models have been reported to date <cit.>, and they primarily differ in their approach to quantitative description of microstructure. Reported approaches include the description by spatial correlations <cit.>, latent representation by convolutional neural networks learned during training <cit.>, and graphs <cit.>. Graph representation of polycrystals and the subsequent use of graph neural networks (GNNs) have been recently gaining momentum. Compared to other descriptions, graph representation does not require full 3D voxel-to-voxel data yet captures the overall grain structure and its connectivity in polycrystals. Graph representation and GNNs have been successful in modeling a range of properties of polycrystals, both mechanical <cit.> and non-mechanical <cit.>. To cite a few studies relevant to micromechanics, Vlassis et al. <cit.> developed GNN models to predict the elastic energy functional of materials described by hyperelastic constitutive laws. Pagan et al. <cit.> trained GNNs to predict grain-average stress in Ni and Ti alloys using data from CPFE simulations and experiments. Sadeghpour et al. <cit.> modeled the offset yield strength of 316L steel, while Hestroffer et al. <cit.> developed GNNs for the effective yield strength and elastic modulus of α-Ti microstructures with various crystallographic textures. Hu and Latypov <cit.> generalized GNNs ability to predict anisotopic effective properties to arbitrary loading directions. Hu et al. <cit.> proposed a temporal GNN model that integrates a GNN with recurrent neural networks to predict orientation evolution paths and stress history of each grain in a polycrystal. These studies demonstrate the potential of GNNs for modeling micromechanics of polycrystals, both in terms of their overall properties and grain-level responses. In this context, GNNs are promising for computationally efficient prediction of grain-level FIPs. At the same time, most GNN studies to date modeled micromechanical polycrystal responses in purely elastic and initial yield regimes or under monotonic loading. 
The predictive power of GNNs for highly non-linear responses such as FIPs under cyclic loading is currently unknown. In this study, we demonstrate the ability of GNNs to model grain-level FIPs as non-linear micromechanical responses of polycrystals under cyclic loading. We demonstrate that GNNs can tackle the challenge of modeling FIPs in large MVEs with large grain populations at a modest computational cost. To this end, we trained GNNs using a publicly available dataset on an Al 7075-T6 alloy <cit.> featuring CPFE simulation results by Stopka, Yaghoobi et al. <cit.>. The dataset includes computer-generated MVEs of various sizes (30^3 to 250^3 voxels, see <Ref>) and grain counts (275 to 160000) and their micromechanical responses from CPFE simulations, including spatially-resolved FIPs. For each MVE, we created a microstructure graph, wherein grains are represented by graph nodes, while grain boundaries are represented by graph edges that link nodes corresponding to adjacent grains. We incorporated different grain properties as feature sets to nodes in these graphs, including (i) Euler angles and (ii) quaternions – both representing grain orientations – as well as (iii) Schmid factors calculated for each grain with respect to the loading direction. We further calculated grain-level Fatemi–Socie FIP values by averaging the maximum values from all CPFE integration points in a grain as the node response for learning and inference. We trained and optimized the GNNs with the three feature choices using a set of the 200 smallest MVEs (MVE30) and then tested the capabilities of the trained models to predict grain-level FIPs in large MVEs. A sketch of the FIP-GNN is shown in <Ref>; further descriptions of the GNN model, data, training, and hyperparameter optimization are detailed in the Data and Methods section. We first compare GNNs trained with different choices of the node features: Euler angles, quaternions, and Schmid factors. To this end, we split the MVE30 set into 180 training MVEs and 20 validation MVEs to optimize GNN hyperparameters and evaluate the impact of the feature choice on the GNN performance. All three GNNs capture grain-level FIPs with R^2>0.89 and with mean average relative error (meanARE) within 10% (<Ref>). Among the three models, the GNN that uses Euler angles as features shows the lowest accuracy (<Ref>a). While the GNNs trained with quaternions and Schmid factors both have superior accuracy, the GNN relying on quaternions (<Ref>b) includes more than three times more learnable weights to reach accuracy comparable with the GNN using Schmid factors (<Ref>c). The difference in the number of weights arises because we optimized hyperparameters (primarily n and k, see <Ref> and <Ref>a) individually for each model and the GNN with quaternions has the highest optimal number of nodes in the hidden layer, n=32, among the three models (<Ref>). Further, the GNN with Schmid factors learns fastest among the three models, with only 1000 epochs needed to reach the plateau in the validation error, as opposed to the 5000 to 6000 epoch range for the GNNs based on orientations as node features. Since Schmid factors as features lead to the best combination of accuracy, size, and learning speed in GNNs, we focus on developing GNNs for FIPs using Schmid factors as the node features of choice for the remainder of the study. We next evaluate the capability of GNNs to generalize FIP predictions to large MVEs with grain populations well beyond those used in training.
To this end, we trained the best GNN identified above (Schmid factors as features and optimized n and k hyperparameters) on the entire MVE30 set and then evaluated its accuracy on MVEs of progressively increasing size from 45^3 to 250^3 voxels. Predictions for one MVE from each of five size sets shows that the GNN can predict grain-level FIP values for large MVEs with a wide range of grain populations significantly exceeding those the GNN has “seen” during training (<Ref>). Indeed, the GNN shows a consistent overall accuracy of R^2>0.96 for five MVEs of all studied sizes and the corresponding grain populations. To support this result, we extend this analysis to all MVEs from each size set and visually summarize the results in a box plot of error distributions (<Ref>f). We find that the difference in grain-level FIPs predicted by the GNN from high-fidelity CPFE values has a mean close to zero consistently across all studied MVE sizes. The range of the error does not correlate with the MVE size, which once again attests to the GNNs ability to predict GNNs for MVEs with large grain populations with consistent accuracy. Finally, the field of FIPs predicted by GNN and mapped to a couple of example MVEs also show good visual agreement with the CPFE fields (<Ref>b). The presented GNN approach enables not only response predictions but also insights into micromechanics relevant to HCF reflected by FIPs. The use of 12 Schmid factors calculated for each grain as node features resulted in the most accurate GNN model (<Ref>) with a minimum number of learnable weights. Here, one may wonder if all 12 Schmid factors are necessary for an accurate prediction of grain-level FIPs. To address this question, we tested the accuracy of the GNN model using Schmid factors for only three most active slip systems for each grain. With three highest Schmid factors per grain as features, we obtain a GNN model with R^2=0.956 for the MVE30-VAL set. This accuracy is only marginally lower than the GNN trained on the full set of 12 Schmid factors and nearly identical to that of the GNN model using quaternions (<Ref>b-c). This surprisingly good accuracy of a GNN that uses only three Schmid factors suggests that the driving force for crack initiation quantified by FIPs is associated with slip activity on very few dominant systems. This is consistent with arguments in literature that few slip systems get activated in each grain during deformation of polycrystals <cit.>. In fact, our additional tests showed that using only one and two Schmid factors for most active systems still allows FIP predictions with R^2>0.8. In addition, the result with fewer Schmid factors indicates that the GNN with Schmid factors as features showed better accuracy than orientations (<Ref>) not simply because of greater number of node features. This clearly follows from the identical accuracy of the GNN with three Schmid factors as three features (R^2=0.956) compared to the GNN with quaternions as four features (R^2=0.956). Use of Schmid factors leading to more accurate GNNs than GNNs using orientations at the same or smaller number of features can be attributed to more direct relevance to the response of interest (FIPs) and better compatibility with convolution functions in GNNs <cit.>. While Schmid factors sufficed in this study of FIPs in polycrystals under uniaxial cycling loading, elements of full Schmid tensors (as in <cit.>) can serve as features for modeling HCF under more complex (multi-axial) loading conditions. 
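Since the discussion above hinges on Schmid factors as node features, a short sketch of how such features can be computed may be helpful: given a grain's Bunge (ZXZ) Euler angles and the sample-frame loading axis (x in this dataset), the 12 FCC {111}<110> Schmid factors follow from rotating the loading direction into the crystal frame. This is a generic illustration rather than the dataset's own preprocessing code; the helper names and the example orientation are made up, and the passive sample-to-crystal convention is assumed to match the dataset's Euler angles.

```python
import numpy as np

def bunge_matrix(phi1, Phi, phi2):
    """Passive (sample -> crystal) rotation matrix for Bunge ZXZ Euler angles in radians."""
    c1, s1 = np.cos(phi1), np.sin(phi1)
    c, s = np.cos(Phi), np.sin(Phi)
    c2, s2 = np.cos(phi2), np.sin(phi2)
    return np.array([[ c1 * c2 - s1 * s2 * c,  s1 * c2 + c1 * s2 * c, s2 * s],
                     [-c1 * s2 - s1 * c2 * c, -s1 * s2 + c1 * c2 * c, c2 * s],
                     [ s1 * s,                -c1 * s,                c     ]])

def fcc_slip_systems():
    """Return the 12 {111}<110> systems as (unit plane normal, unit slip direction) pairs."""
    normals = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
    directions = [(1, -1, 0), (1, 0, -1), (0, 1, -1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
    return [(np.array(n) / np.sqrt(3.0), np.array(d) / np.sqrt(2.0))
            for n in normals for d in directions if np.dot(n, d) == 0]

def schmid_factors(euler_rad, loading_dir=(1.0, 0.0, 0.0)):
    """12 Schmid factors of one grain for uniaxial loading along `loading_dir` (sample frame)."""
    load = np.asarray(loading_dir, dtype=float)
    l = bunge_matrix(*euler_rad) @ (load / np.linalg.norm(load))  # loading axis in crystal frame
    return np.array([abs(np.dot(l, n) * np.dot(l, d)) for n, d in fcc_slip_systems()])

# Example node feature vector for a made-up grain orientation; x is the cyclic loading axis.
feats = schmid_factors(np.deg2rad([35.0, 48.0, 12.0]))
print(np.sort(feats)[::-1][:3])  # the three most active systems used by the reduced-feature GNN
```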
We can gain further insight into the role of local microstructure in FIP and crack initiation from the architecture of the GNN architecture optimized as part of this study. Our hyperparameter optimization included identification of the optimal number of message passing layers, k. Each message passing layer aggregates features from the nearest-neighbor nodes in the graph <cit.> so that stacking k layers aggregates features from neighbors of the k^th order, i.e., graph nodes k edges apart. In terms of polycrystal micromechanics, k signifies the order of the neighbor grains that affect the response (FIP) in any give grain. From our hyperparameter optimization, we found k=4 layers as optimal for predicting grain-level FIPs (see <Ref>a). It means that accounting for Schmid factors up to fourth-order nearest neighbors is needed for FIP predictions with the high accuracy reported above. This purely data-driven result is consistent with explicit studies of the impact of grain neighborhood on FIPs using physics-based CPFE simulations. Stopka et al. <cit.> studied the impact of altering neighboring grains on the FIPs in the same Al 7075-T6 alloy. The authors found that the FIP value in a “hot-spot” grain is sensitive to the orientations of the nearest neighbor grains of up to fourth order. Similarly, our results show that accounting for features from grains more than four edges away (using k>4) did not improve the accuracy of FIP predictions (<Ref>a). In conclusion, we developed GNNs for polycrystals that, for the first time, can (i) predict FIPs – grain-level responses to cyclic loading well beyond monotonic elastic and inelastic regimes reported in literature; and (ii) generalize these predictions to MVEs with grain populations significantly larger than MVEs used in training. These advances can make significant contributions to statistically rigorous and computationally efficient modeling of HCF – a long standing challenge in the field. The computational gain with presented GNNs is two-fold: (i) GNNs can serve as orders of magnitude faster surrogates to CPFE simulations of FIPs and (ii) GNNs can obtain FIPs and their distributions from very large MVEs out of reach for direct CPFE simulations. Indeed, FIP inference with a trained GNN took no more than 200 for the largest MVEs considered in this study (MVE250) and as little as 2.5 for samples from the MVE30 set on a consumer-grade workstation. In comparison, direct CPFE simulations of the same FIPs after cyclic loading require over 100 hours for one sample of the MVE90 set on a single CPU <cit.>. In addition, unlike CPFE simulations <cit.>, the proposed GNN models exhibit O(N) complexity, i.e., their computational cost increases linearly with the number of grains, N, in an MVE. The moderate computational complexity and the generalization capability of GNNs to large grain populations demonstrated in this study opens opportunities for rapid calculation of statistically significant FIP EV distributions in microstructures for their ranking in terms of HCF resistance and design for superior HCF life. We also demonstrated that the GNN approach provides insights into the variables relevant to micromechanical response of polycrystals during HCF loading consistent with prior physics-based CPFE modeling. § ACKNOWLEDGEMENTS GJS and MGL acknowledge the support from NRF of Korea (grant No. 2022R1A2C2009315) and from the Ministry of Science and ICT (grant No. 2022M317A4072293). The authors thank Dr. 
Krzysztof Stopka (Purdue University) for help with processing the MVE/FIP dataset used in this study. GJS further thanks Yunju Jang for assistance with some of the illustrations. § DATA AND METHODS This section provides the details on the (i) data used for training and testing; (ii) microstructure representation with graphs; (iii) GNN architecture design; (iv) GNN training, hyper-parameter tuning, and validation. §.§ Data For training GNNs in this study, we made use of a dataset published by Stopka and Yaghoobi <cit.>, which contains an ensemble of polycrystalline MVEs and the micromechanical data from CPFE simulations carried out on these MVEs. The MVEs represent 3D polycrystalline aggregates of an Al 7075-T6 alloy with predominantly equiaxed microstructure and uniform texture <cit.>. The grain size of the MVEs follows a lognormal distribution with an average equivalent grain diameter of 14 and its standard deviation of 2. The crystallographic orientation of each grain in the dataset is described by Euler angles in the ZXZ convention. The dataset contains six subsets of MVEs differentiated by their size that cover a broad spectrum of grain populations. The subsets correspond to six MVE sizes with 30, 45, 90, 160, 200, and 250 voxels along each of the three principal axes. These sizes correspond to the average grain counts per MVE of about 280, 930, 7500, 41000, 80000, and 160000 grains, respectively. We refer to these MVE subsets as MVE30, MVE45, MVE90, MVE160, MVE200, and MVE250, where the number indicates the MVE size in terms of the voxel number in one direction. In terms of the micromechanical data relevant to this study, the dataset includes Socie–Fatemi FIP values calculated with CPFE simulations. The simulations captured tension-compression loading of the MVEs along the x axis for two cycles (sufficient for convergence of FIP <cit.>) and strain amplitude of 0.7% in a completely reversed manner (R_ε=-1). From these simulations, we adopted the grain-average FIP values as the specific grain-level responses to be modeled with GNNs. FIP used in this study is the Fatemi–Socie FIP defined as <cit.> FIP_α=Δγ^α_p/2[ 1 + Kσ^α_n/σ_y], where Δγ^α_p is the range of plastic shear strain and σ^α_n is the maximum normal stress on the α^th slip system, K is a parameter quantifying the influence of normal stress (set to 10), and σ_y is the macroscopic yield strength. §.§ Microstructure graph, node features and response variables To leverage GNNs for modeling FIPs, we create a microstructure graph for each MVE in the dataset. In the microstructure graph, grains are represented by graph nodes, while grain boundaries are represented by graph edges that link nodes corresponding to spatially adjacent grains (<Ref>a). The MVEs in the dataset are periodic so that the construction of the graphs needs to account for the microstructure periodicity. Specifically, nodes that represent grains on one side of the MVE need to be connected to the nodes representing grain neighbors on the opposite side (A–B and C–D node pairs in <Ref>a). Furthermore, when a grain is cut by a side of the MVE and thus its part appears on the opposite side, the grain needs to be represented by a single node in the graph. To satisfy the continuity of the microstructure in the graphs of periodic MVEs, we adopt a simple method that duplicates the given MVE and appends its dummy copies to all the six faces of the original MVE. Once we have the MVE augmented with dummy copies, we identify grain neighbors using the DREAM.3D software <cit.>. 
From the grain neighbor data, we finally construct a graph with the NetworkX library <cit.>. We then introduce grain-level properties as node features into the constructed microstructure graphs. We tested two orientation representations: (i) Euler angles, and (ii) quaternions. The MVE dataset that we adopted had triplets of Euler angles in radians as the raw orientation data, which can be readily used as three-element feature vectors. For quaternions as features, we converted the Euler angles into rotation matrices from which we then calculated the four-element quaternion vectors <cit.>. We further considered Schmid factors of individual grains in respect to the loading direction. Schmid factors quantify resolved shear stresses on each slip system that ultimately drive the dislocation glide in individual grains <cit.>. For each node in the microstructure graph, we thus assigned 12 Schmid factors calculated for the 12 slip systems in aluminum in respect to the loading direction in the CPFE simulations. In addition to “input” features, we introduced grain-level FIP values as a response variable for inference at each node (<Ref>b). Following the calculation method by Przyblyla et al. <cit.>, we determined the grain-wise FIP by averaging the maximum FIP values over all integration points of each grain. At each integration point, the maximum FIP is found among all slip systems calculated in CPFE <cit.> according to <Ref>. The grain-average FIP values are the node properties modeled by the FIP-GNNs developed in this study. §.§ GNN architecture design, optimization, and training The GNN architecture considered in this study contained message passing layers and a final output layer as the key components of the architecture. A message passing layer aggregates graph node features from first-order nearest neighbors <cit.> using an aggregation or convolution function. Stacking k message passing layers results in aggregation of features from k^th order neighbors. We considered multiple convolution methods, including SAGE <cit.>, GCN <cit.>, and GIN <cit.>. Among these methods, SAGE convolution demonstrated significantly better performance in terms of validation error, and was therefore chosen. SAGE convolution includes hidden layers with n neural nodes that process features of each graph node. We treated the number of message passing layers, k, the number of neural nodes in the hidden layer, n, as well as learning parameters (optimizer, rate, decay, number of warm-up steps) as the main hyperparameters that we optimized to our data. To ensure generalization capability of GNNs to large MVEs, we trained the models and optimized their hyperparameters using only a set of 200 MVEs of the smallest size – MVE30. For hyperparameter optimization, we randomly split the set of 200 MVE30 into training (MVE30-TRAIN) and validation (MVE30-VAL) sets in the 90:10 ratio. Using validation error as the optimization metric, we found that the convolution method, number of message passing layers, k, and the number of hidden layers, n, had the biggest impact on the GNN training outcome. SAGE convolution was found the best among the tested three methods, while the optimal combinations of the k and n parameters depended on the selected node features (orientations vs. Shcmid factors) as summarized in <Ref>. <Ref>a further shows the effect of k and n on the mean squared error for the validation set (MVE30-VAL).
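For readers who want a concrete picture of the architecture just described, the following PyTorch Geometric sketch stacks k SAGE convolution layers over the microstructure graph (grains as nodes, grain-boundary adjacency as edges) and regresses the grain-average FIP at each node. The hidden width, activation, learning rate, and function names here are illustrative placeholders rather than the exact optimized hyperparameters of <Ref>, and the training loop omits the normalization and stopping details of the actual study.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class FIPGNN(torch.nn.Module):
    """Minimal FIP-GNN sketch: k SAGE message-passing layers + a linear node-regression head."""
    def __init__(self, in_dim=12, hidden=16, k=4):
        super().__init__()
        dims = [in_dim] + [hidden] * k
        self.convs = torch.nn.ModuleList(SAGEConv(i, o) for i, o in zip(dims[:-1], dims[1:]))
        self.head = torch.nn.Linear(hidden, 1)           # grain-average FIP per node

    def forward(self, x, edge_index):
        for conv in self.convs:                          # each layer aggregates 1st-order neighbors,
            x = F.relu(conv(x, edge_index))              # so stacking k layers reaches k-th order ones
        return self.head(x).squeeze(-1)

# Sketch of a training step; `loader` is assumed to yield torch_geometric Data objects whose
# x holds the 12 Schmid factors per grain, edge_index the grain adjacency, and y the CPFE FIPs.
model = FIPGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_epoch(loader):
    model.train()
    for data in loader:
        optimizer.zero_grad()
        loss = F.mse_loss(model(data.x, data.edge_index), data.y)
        loss.backward()
        optimizer.step()
```

Because message passing only touches node features and the grain-adjacency list, inference cost with such a model grows linearly with the number of grains, consistent with the O(N) scaling noted above.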
http://arxiv.org/abs/2406.08830v1
20240613054929
Center-Sensitive Kernel Optimization for Efficient On-Device Incremental Learning
[ "Dingwen Zhang", "Yan Li", "De Cheng", "Nannan Wang", "Junwei Han" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Center-Sensitive Kernel Optimization for Efficient On-Device Incremental Learning Dingwen Zhang^1 Yan Li^1 De Cheng^2[1] Nannan Wang^2 Junwei Han^1[1] ^1Northwestern Polytechnical University ^2Xidian University {zhangdingwen2006yyy, yanli.ly.cs, junweihan2010}@gmail.com {dcheng, nnwang}@xidian.edu.cn June 17, 2024 ============================================================================================================================================================================================================================================= [1]Corresponding author § ABSTRACT To facilitate the evolution of edge intelligence in ever-changing environments, we study on-device incremental learning constrained to a limited computation budget in this paper. Current on-device training methods focus only on efficient training without considering catastrophic forgetting, which prevents the model from getting stronger as it continually explores the world. A direct solution to this problem is to incorporate existing incremental learning mechanisms into the on-device training framework. Unfortunately, such a manner cannot work well, as those mechanisms usually introduce a large additional computational cost to the network optimization process, which would inevitably exceed the memory capacity of edge devices. To address this issue, this paper makes an early effort to propose a simple but effective edge-friendly incremental learning framework. Based on an empirical study on the knowledge intensity of the kernel elements of the neural network, we find that the center kernel element is the key to maximizing the knowledge intensity for learning new data, while freezing the other kernel elements achieves a good balance of the model's capacity for overcoming catastrophic forgetting. Upon this finding, we further design a center-sensitive kernel optimization framework to largely alleviate the cost of gradient computation and back-propagation. Besides, a dynamic channel element selection strategy is also proposed to facilitate a sparse orthogonal gradient projection for further reducing the optimization complexity, based on the knowledge explored from the new task data. Extensive experiments validate that our method is efficient and effective; e.g., our method achieves an average accuracy boost of 38.08% with even less memory and approximately the same computation compared to existing on-device training methods, indicating its significant potential for on-device incremental learning. § INTRODUCTION The rapid advancement of embodied AI <cit.> has led to a surge in demand for intelligent edge systems that can inherently adapt to ever-changing environments. This gives rise to the concept of on-device incremental learning, which requires edge devices to be able to update their knowledge efficiently when encountering a series of new tasks while preserving prior knowledge, all within a very limited resource budget. Thanks to recent progress in on-device training, various approaches aiming at improving the training efficiency on edge devices have appeared <cit.>, such as approaches yielding sparse training parameter sets <cit.> and employing gradient quantization to reduce back-propagation costs <cit.>. However, when edge devices are deployed in ever-changing environments, they are further required to be able to learn efficiently from the changing data.
Here, the challenge is that, given the inherent resource limitations of edge devices, it is impractical to continuously accumulate new data and retrain the entire model from scratch. Another naive choice is to update the model only with the newly collected data, which unfortunately leads to catastrophic forgetting and thus limits the usefulness of the model in future environments. Under these circumstances, a natural idea is to bring incremental learning schemes into the on-device training framework. However, although existing incremental learning techniques <cit.> can alleviate catastrophic forgetting and achieve high performance on both the old tasks/data and the new ones, such methods always introduce computation-heavy components such as a historical model <cit.> or a memory buffer <cit.>, making on-device implementation infeasible. To address the above issues, we revisit the incremental learning scheme and ask whether we can build an edge-friendly incremental learning scheme with both good learning performance and very little computational cost. We start with an empirical study on the knowledge intensity of the kernel elements of the network parameters. Inspired by research on parameter-efficient fine-tuning <cit.>, our empirical study seeks to understand which parameters, specifically which elements within the convolution kernels, are pivotal for the learning process; this is what we call the "knowledge intensity" of the kernel elements. The study is informed by two paradigms: a sensitivity-induced assessment and an amplitude-induced assessment. The former leverages data-related gradient information to discern how sensitive different parameters are to incoming data, while the latter uses the inherent model weights to explore the contribution of each parameter to knowledge capturing. Based on this empirical study, we find that the central elements of the convolution kernel play a more pivotal role in learning knowledge from data, and this is particularly pronounced in the last few layers of the network. This finding provides valuable insight for addressing the trade-off between effectiveness and efficiency in on-device incremental learning. Building on this finding, we propose a novel technique called Center-Sensitive Kernel Optimization (CSKO). Instead of directly updating the central kernel elements in place to learn new data while freezing the surrounding ones to retain previously learned knowledge, we decouple the center kernel elements from the original 3 × 3 kernels to form new 1 × 1 kernels placed alongside the main network. Directly selecting the central kernel element and optimizing it in the common manner would still involve gradient calculations across all parameters of the 3 × 3 kernels and all network layers, causing high memory consumption. In contrast, the proposed CSKO mechanism lets the decoupled parameters undergo independent gradient computation and back-propagation on the separate branch of 1 × 1 kernels. In this way, CSKO largely alleviates the cost of gradient computation and back-propagation while preserving all the properties needed for good incremental learning. To leverage the upcoming new task data and further reduce the optimization burden, we add a Dynamic Channel Element Selection (DCES) strategy within the central kernel.
As we know, the computational complexity of the orthogonal gradient projection[A technique widely used in the incremental learning framework to balance the stability and plasticity] <cit.> is O(N^3) to the number of channel elements due to the involved Singular Value Decomposition (SVD) of the covariance matrix. Thus, by reducing the number of channel elements, the proposed DCES is able to alleviate the computational cost for the orthogonal gradient projection dramatically. Moreover, as the channel elements are selected based on their importance for learning new task data, the reduce of learnable parameters though DCES will not hurt the plasticity of the whole framework but further improve the stability. Consequently, the overall performance both in terms of both the accuracy and complexity could be improved by DCES. To sum up, the contributions of this paper are summarized as follows: * We revisit the incremental learning mechanisms upon an empirical study on the knowledge intensity of the kernel element. The study reveals that the central kernel is usually more pivotal than others in the learning process of the common network architectures with 3× 3 convolutions kernels. * Based on the above findings, we propose a center-sensitive kernel optimization mechanism that realizes the separate gradient calculation and back-propagation of the newly formed 1× 1 central kernels, coupled with dynamic channel element selection strategy. It provides a simple but effective baseline for facilitating the on-device incremental learning. * Extensive experiments on public benchmarks show that the proposed method can achieve very high accuracy for learning new tasks while maintaining the old ones. Notably, our approach is the only one that restrict the overall memory cost less than 128 MB, indicating that our algorithm can be deployed on most on-device systems like Luckfox Pico Pro RV1106, Raspberry Pi Zero W. § EMPIRICAL STUDY ON THE KNOWLEDGE INTENSITY OF KERNEL ELEMENTS Recent research has demonstrated that different parameters in pre-trained model exhibit varying contributions to downstream tasks <cit.>. Some studies even suggest that more tunable parameters do not necessarily lead to better performance, and fine-tuning a subset of model parameters could usually achieve comparable or even better performances <cit.>. Motivated by this, we seek to evaluate which parameters, i.e., the elements in the convolution kernels of a deep model, are more pivotal to learn new knowledge of upcoming training data. Specifically, we define this character as the knowledge intensity of kernel elements. Sensitivity-induced knowledge intensity assessment. One way to measure the knowledge intensity of the kernel elements is to assess their importance for learning knowledge from the new task data. Define 𝐖^i∈ℝ^D× C× K× K as the convolution kernels of the i-th convolution layer in the pre-trained model, where K× K is the kernel size, C is the number of channels of the input feature map, and D is the number of current filters (i.e., number of channels of output feature map by current convolution layer). We quantify the importance of specific parameter element w^i∈𝐖^i by evaluating the effect of its changes on the classification loss ℒ, obtaining the sensitivity-induced knowledge intensity measurement. Given the data pair (x, y) from the training data, we calculate the sensitivity score of w^i by S_w^i = |ℒ(x, y, 𝐖^i) - ℒ(x, y, 𝐖^i|w^i=ŵ^i)|, where ŵ^i=w^i+Δ w^i, and the Δ w^i denotes the update for w^i. 
Then, following <cit.>, the computation can be approximated by using the first-order Taylor series expansion: S_w^i≈ |∂ℒ/∂ w^i|, where ∂ℒ/∂ w^i represents the gradient on the weight element w^i. To investigate the knowledge intensity of each kernel element, we accumulate the sensitivity scores of the weight elements at the same location across both the channel and kernel dimensions as follows: S_𝐖^i[u,v] = ∑_d=1^D∑_c=1^C|∂ℒ/∂𝐖^i[d,c,u,v]|, where ∂ℒ/∂𝐖^i[d,c,u,v] represents the weight gradient located at the position (u, v) within the c-th channel of the d-th convolution kernel. To better analyze the contribution of each position, we normalize the sensitivity scores of each parameter element: S_𝐖^i[u,v]^*=S_𝐖^i[u,v]/S_𝐖^i, where S_𝐖^i = ∑_u∑_vS_𝐖^i[u,v] represents the sum of sensitivity scores across all positions. For the conventional 3 × 3 convolution kernel, we analyze the sensitivity of 9 spatial positions across all blocks, as shown in Fig. <ref>. Amplitude-induced knowledge intensity assessment. Another effective way to evaluate the knowledge intensity is the amplitude of the parameter weights, where larger amplitude of the weight indicates higher knowledge intensity of the corresponding parameter after model training <cit.>. Under this circumstance, we analyze the amplitude of the 9 kernel elements within the 3 × 3 convolution kernels, as shown in Fig. <ref>, where the amplitude-induced knowledge intensity of each kernel element is calculated as follows, A_𝐖^i[u,v] = ∑_d=1^D∑_c=1^C|𝐖^i[d,c,u,v]|. As shown in Fig. <ref>, the two assessment approaches reflect a coincident finding: The central kernel element, i.e. the one at position (2, 2), always exhibits a higher knowledge intensity than others and its superiority tend to be overpowering in very deep network block and layers. The above finding indicates that in the incremental learning task, if one can only select one kernel element to maximize the plasticity of the network to learn on the new task data while freeze other kernel elements to keep stability on the old task knowledge due to the limitation on the computation resource, the selected kernel element must be the central kernel element while the surrounding kernel elements are the frozen ones. Moreover, the finding even points out the best locations for facilitating incremental learning on the central kernel elements—the last three layers of the forth block of the network—which is highly valuable for guiding the concrete design of the on-edge incremental learning framework. § CENTER-SENSITIVE KERNEL OPTIMIZATION Although the above finding provides a promising parameter selection insight for efficient on-device training, relying on conventional parameter optimization techniques still cannot meet the critical requirements of edge devices in terms of memory and computation efficiency. This challenge is attributed to the inherent characteristics of conventional optimization strategies, which necessitate the collaboration of other parameter elements to facilitate the computation of gradients and the execution of back-propagation, even when optimization is conducted at an element level. It is known that back-propagation imposes substantial computational and memory requirements, which become a significant impediment for on-device training. 
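For reference, the two knowledge-intensity assessments above can be computed in a few lines of PyTorch. The sketch below is illustrative only: the model, data loader and loss are placeholders, and a single batch is used as a rough estimate rather than the full accumulation over the training data.

import torch

def kernel_position_scores(model, data_loader, loss_fn, device="cpu"):
    # Accumulate per-position sensitivity (|dL/dW|) and amplitude (|W|) scores
    # over the K x K spatial positions of every 3x3 Conv2d layer, summed over
    # filters and input channels and normalized, as in the two assessments above.
    model.to(device).train()
    x, y = next(iter(data_loader))          # one batch as a rough estimate
    loss = loss_fn(model(x.to(device)), y.to(device))
    model.zero_grad()
    loss.backward()
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Conv2d) and module.kernel_size == (3, 3) \
                and module.weight.grad is not None:
            g, w = module.weight.grad, module.weight      # shape (D, C, 3, 3)
            sens = g.abs().sum(dim=(0, 1))                # S[u, v]
            amp = w.abs().sum(dim=(0, 1))                 # A[u, v]
            scores[name] = {"sensitivity": (sens / sens.sum()).detach().cpu(),
                            "amplitude": (amp / amp.sum()).detach().cpu()}
    return scores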
To handle the above issue, we introduce a Center-Decoupled Mechanism (CDM) that collaborates with the Center-Sensitivity Kernel Selection (CSKS) strategy, enabling independent gradient computation and back-propagation for the central elements of the convolution kernels. This optimization process eliminates the need for collaboration among different parameter elements, thereby substantially reducing computational overhead and memory requirements. Specifically, the parameters of the i-th convolution layer 𝐖^i∈ℝ^D × C × K × K are decoupled into 𝐖_θ^i∈ℝ^D × C × K × K and 𝐖_α^i∈ℝ^D × C × 1 × 1: 𝐖_α^i[d, c, 1, 1] = 𝐖^i[d, c, ⌈K/2⌉, ⌈K/2⌉], ∀ d ∈ [1, D], c ∈ [1, C], where ⌈·⌉ denotes rounding up. The decoupled parameters 𝐖_α^i are allocated for learning new knowledge, and can perform gradient computation and back-propagation independently of the frozen backbone parameters 𝐖_θ^i, which are defined as 𝐖_θ^i[d, c, u, v] = 0 if u = v = ⌈K/2⌉, and 𝐖_θ^i[d, c, u, v] = 𝐖^i[d, c, u, v] otherwise, ∀ d ∈ [1, D], c ∈ [1, C]. After training, we integrate the optimized decoupled parameters 𝐖_α back into the backbone parameters 𝐖_θ, keeping the initial network structure unchanged. During inference, the network therefore operates under its original structure, with no additional modifications. This learning mechanism facilitates efficient element-level optimization in an effectiveness-equivalent manner, offering a new perspective on efficient on-device training. (Figure: visualization of the channel sensitivity analysis results for a convolution layer within the trainable layers, on CIFAR-100 (top) and TinyImageNet (bottom).) Dynamic Channel Element Selection. Considering the dynamic incoming data, we propose a Dynamic Channel Element Selection (DCES) strategy, which dynamically selects channels based on the importance of the channel elements to the incoming training data. We study the contribution of different parameter channels on different data, such as CIFAR-100 and TinyImageNet, on a pre-trained ResNet-18. Different channels contribute to different task data to varying degrees: in contrast to spatial sensitivity, channel sensitivity is specific to the data. Therefore, unlike the static center-sensitivity selection strategy, we incorporate DCES into the online learning process to enable dynamic parameter selection based on the incoming data. This strategy improves both the training efficiency on edge devices and the plasticity of incremental learning. Specifically, for each incremental learning task, we perform an initial evaluation of the channel sensitivity to the task-specific data, which serves as the basis for dynamic channel element selection. Benefiting from the proposed center-sensitive kernel optimization mechanism, the computation of channel sensitivity is confined to the selected trainable parameters 𝐖_α^i∈ℝ^D × C × 1 × 1. The accumulated sensitivity score for a given channel c is computed as S_𝐖_α^i[c] = ∑_d=1^D|∂ℒ/∂𝐖_α^i[d,c,1,1] ·𝐖_α^i[d,c,1,1]|. We set a channel selection proportion s ∈ [0, 1] based on the sensitivity measured by Eq. <ref>. After dynamic channel element selection, the selected parameters 𝐖̃_α^i∈ℝ^D × (sC) × 1 × 1 are trained, while the parameters of the remaining channels are frozen.
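As an illustration of the decoupling just described, the following is a minimal PyTorch sketch under simplifying assumptions (3 × 3 kernels with padding 1, and a hypothetical module name), not the authors' implementation: the frozen backbone keeps the kernel with its center zeroed, a trainable 1 × 1 side branch carries the center elements, and the two are merged back after training.

import torch
import torch.nn as nn

class CenterDecoupledConv(nn.Module):
    # Illustrative decoupling of a 3x3 convolution: the frozen backbone W_theta
    # keeps the kernel with its center element zeroed, while a trainable 1x1
    # side branch W_alpha carries the center elements; their outputs sum to the
    # output of the original convolution (assuming padding 1, 'same'-style).
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        assert conv.kernel_size == (3, 3) and conv.padding == (1, 1)
        d, c = conv.out_channels, conv.in_channels
        self.theta = nn.Conv2d(c, d, 3, stride=conv.stride, padding=1, bias=False)
        self.alpha = nn.Conv2d(c, d, 1, stride=conv.stride, bias=conv.bias is not None)
        with torch.no_grad():
            w = conv.weight.clone()
            self.alpha.weight.copy_(w[:, :, 1:2, 1:2])    # center elements -> 1x1 kernel
            if conv.bias is not None:
                self.alpha.bias.copy_(conv.bias)
            w[:, :, 1, 1] = 0.0                           # zero the center in the backbone
            self.theta.weight.copy_(w)
        self.theta.weight.requires_grad_(False)           # backbone frozen; only alpha trains

    def forward(self, x):
        return self.theta(x) + self.alpha(x)

    def fold_back(self):
        # Merge the optimized center elements back into a single 3x3 conv for inference.
        merged = nn.Conv2d(self.theta.in_channels, self.theta.out_channels, 3,
                           stride=self.theta.stride, padding=1,
                           bias=self.alpha.bias is not None)
        with torch.no_grad():
            w = self.theta.weight.clone()
            w[:, :, 1, 1] = self.alpha.weight[:, :, 0, 0]
            merged.weight.copy_(w)
            if self.alpha.bias is not None:
                merged.bias.copy_(self.alpha.bias)
        return merged

Under DCES, only the selected s·C channels of the 1 × 1 branch would remain trainable, e.g. by zeroing the gradients of the unselected channels before each optimizer step.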
The channel selection is based on the principle that channels with higher sensitivity scores are more important for the task at hand. This mechanism enables the efficient resource allocation towards the more important parameter optimization, thereby balancing the plasticity and efficiency in on-device incremental learning. Other key benefit of this strategy is the development of a sparse orthogonal gradient projection strategy. The “Orthogonal Gradient Projection” (OGP) strategy <cit.> presents a promising avenue for incremental learning on edge devices, as it can mitigate the catastrophic forgetting without additional parameter allocation or data storage. Notwithstanding its advantages, the original strategy brings a substantial memory overhead, which is attributed to the SVD operation during the computation of the null space. For a matrix of size N × N, its computational complexity is O(N^3), and the memory complexity is O(N^2). Interestingly, the proposed dynamic channel element selection mechanism inherently provides a potential solution to this challenge. Therefore, we develop an improved sparse orthogonal gradient projection strategy for incremental learning. Specifically, the trainable parameters following our dynamic channel element selection 𝐖̃_α^i∈ℝ^D × (sC) × 1 × 1 exhibit the sparsity of s ∈ [0, 1] across channels. Given that the input feature and output feature of the i-th layer are denoted as X̃^i∈ℝ^B × (sC) × H × W and Ỹ^i∈ℝ^B × D × H' × W' respectively, then Ỹ^i = X̃^i𝐖̃_α^i. This observation implies that only a subset of the channels within the input features are actively engaged. The covariance matrix required for calculating the null space is derived from the input features X̃^i∈ℝ^B × (sC) × H × W, i.e., M̃ = Cov(X̃^i, T(X̃^i)), where M̃∈ℝ^(sC) × (sC), Cov represents the operation of covariance computation, and T denotes the transpose operation. Therefore, the covariance matrix can also be characterized by sparsity. This allows us to perform the SVD on the sparse covariance matrix, thereby enabling more efficient computation of the null space. Theoretically, in the conventional orthogonal gradient projection-based method, the memory complexity of the SVD operation of the covariance matrix M∈ℝ^C × C is O(C^2). In the proposed method, the memory complexity of sparse SVD is only O((sC)^2), achieving a memory saving of (1/s)^2 times. This approach not only maintains the benefits of the original strategy but also significantly reduces the memory overhead, making it more suitable for implementation on resource-limited edge devices. § EXPERIMENTS §.§ Experimental Setup Datasets. We evaluate the efficiency and effectiveness of the proposed method on two representative benchmarks, CIFAR-100 <cit.> and TinyImageNet <cit.>. Following the experimental setup of recent incremental learning work <cit.>, we first train on half of the classes in the dataset, then divide the remaining data into either 5, 10, or 20 tasks for incremental training. Evaluation Metrics. We evaluate the incremental learning performance of the model using the average accuracy of all tasks. We measure memory footprint <cit.> (mainly including model parameters, gradients, and activations during the training) and training Floating Point Operations Per Second (FLOPs) <cit.> to assess the efficiency of the method. Baseline models. We demonstrate the superiority of the proposed method in balancing performance and efficiency through two aspects of comparison. 
On the one hand, we compare the proposed method with recent pure on-device training method <cit.> to demonstrate the potential and superiority of our method for on-device incremental learning. We also provide the results of directly applying incremental learning strategy to on-device training method, confirming the impracticality of such straightforward combination. On the other hand, we compare our proposed method with several representative conventional incremental learning methods that adopt the same incremental settings. These conventional incremental learning methods typically focus on enhancing the performance of incremental learning, but their computational load and memory overhead are insufficient to meet the training demands of edge devices. Therefore, our focus is on comparing the advantages of the proposed method in balancing performance and efficiency. Experimental Details. To ensure a fair comparison, we strictly follow the incremental settings in prior works <cit.> and use ResNet-18 as the backbone model. Following <cit.>, all models are pre-trained within 100 epochs, using only cross-entropy loss, and then each task is incrementally learned for 60 epochs, using cross-entropy loss and prototype balance loss. We follow the training strategy of <cit.> to better evaluate the feasibility of our method for on-device incremental learning, which involves selecting the final 2 or 4 layers for training. More details can be found in Appendix. §.§ Experimental Results Comparison with On-Device Training Methods. Our experiment focuses on validating the superiority of the proposed method in balancing efficiency and performance. As shown in Table <ref>, we conduct a detailed comparison of our method with the recent on-device training method GF <cit.>, demonstrating the potential of our method for learning on edge devices. On-device training methods can facilitate efficient model training on edge devices. However, due to the lack of incremental learning capabilities in the pure on-device training method, it can lead to catastrophic forgetting, resulting in typically poor performance in incremental learning. An intuitive idea is to apply incremental learning strategies to on-device training methods. In this regard, we choose an Orthogonal Gradient Projection-based (OGP) incremental learning strategy <cit.>, which does not necessitate additional data or model parameters. This strategy was directly applied to GF, which indeed enhanced the incremental learning performance to a certain extent but at the cost of substantial memory overhead. This indicates the impracticality of straightforward combination of incremental learning and on-device training methods. In contrast, our method, while maintaining a resource expenditure comparable to GF, still significantly boosts the performance of incremental learning, with average improvements of 46.47% and 29.71% on CIFAR-100 and TinyImageNet, respectively. Comparison with Conventional and On-device Incremental Learning Methods. As shown in Fig. <ref>, compared with the conventional incremental learning methods, our proposed method exhibits a significant reduction in resource utilization on the CIFAR-100 dataset. Specifically, it achieves a minimum of 3 times and 11.8 times compression in computational and memory resources, respectively. It is worth noting that for the convenience of experimental comparison, we use a small model ResNet-18 as the backbone structure. 
When this structure is replaced with a larger model such as ViT <cit.>, the GFLOPs and memory usage of all methods will increase approximately linearly. In that case, the 3-times and 11.8-times resource compression brought by our method will greatly enhance the feasibility of on-device training. Ablation Study. As shown in Fig. <ref>, the proposed CSKO framework mainly contains three components: Center-Sensitivity Kernel Selection (CSKS), the Center-Decoupled Mechanism (CDM), and Dynamic Channel Element Selection (DCES). Here, we conduct comprehensive ablation studies on them to justify their effectiveness. Performance under different numbers of last trainable layers on the TinyImageNet dataset:
Layers | T=5   | T=10  | T=20  | GFLOPs | Mem (MB)
2      | 43.46 | 42.09 | 41.08 | 144.00 | 67.47
3      | 42.07 | 41.47 | 39.95 | 146.05 | 78.15
4      | 41.58 | 40.42 | 38.78 | 151.42 | 97.57
From the experimental results shown in Table <ref>, we can make the following observations: (1) The removal of all modules, i.e., the direct combination of GF and OGP, is our baseline. It leads to a substantial degradation in performance due to the forgetting of old knowledge when all training-layer parameters are updated. The straightforward introduction of the incremental learning strategy OGP also results in significant resource overhead. (2) The introduction of DCES or CSKS mitigates the resource overhead associated with the computation-consuming operations of the incremental learning strategy, e.g., the Singular Value Decomposition (SVD), thereby significantly reducing the overall memory overhead of the method. Furthermore, these mechanisms minimize the forgetting of prior knowledge while ensuring effective learning of new knowledge by strategically optimizing a subset of parameters. Specifically, the introduction of CSKS yields a significant improvement in performance, i.e., more than 10 percentage points of accuracy on both datasets. It also reduces the total memory cost from 727.47 MB to 87.47 MB. This validates our finding that the central elements of the convolution kernel are more pivotal for learning new knowledge, and that freezing the remaining parameters significantly mitigates the forgetting of old knowledge. As for DCES, it also performs well at simultaneously improving the learning accuracy and reducing the memory cost. (3) CDM provides an edge-friendly optimization mechanism that further reduces memory costs while maintaining equivalent performance in collaboration with CSKS. Notably, it reduces the gradient memory cost by 7.27 times while keeping the accuracy identical. Based on the above ablation experiments, we conclude that our method achieves significant performance improvements by optimizing a small number of carefully selected parameters, obtaining dual benefits in performance and efficiency. These results align with existing research underscoring the pivotal role of fine-tuning an efficient parameter subset for superior performance gains. We also explore settings with different numbers of final trainable layers, as shown in Table <ref>. We observe that increasing the number of trainable layers does not improve performance. This may be because using more parameters to learn new tasks can lead to forgetting previously learned knowledge. Exploration on Channel Element Selection Criteria. We explore the impact of choosing different channel selection criteria for different network layers.
Here, we focus on the last two layers of the network as a case study, and the results are shown in Table <ref>, "New/Old" refers to the selection according to the sensitivity of channels towards new/old task data. If channels that are more sensitive to new task data are selected, these channels are employed for the acquisition of new knowledge, while the remaining channels are frozen. This selection paradigm will lean more towards the plasticity of the model. On the other hand, if channels exhibiting a significant sensitivity to old task data, these channels are rendered inactive to conserve the old knowledge deemed to be of greater importance, while the remaining channels are used for learning new knowledge. This selection strategy favors the stability of the model. We can find that the choice of different criteria for channel selection does not result in significant performance differences. This can be attributed to the inherent differences between new and old data, which theoretically leads to the selection of mutually exclusive channels. This implies that we always allocate channels to new and old tasks in a rational manner, thereby ensuring that they have access to channels that are deemed most crucial to their respective tasks. Exploration on Different Kernel Shapes. To further validate our findings, we extend our exploration to other shapes of convolution kernels, such as the 5 × 5 convolution kernel utilized in AlexNet <cit.>. Figure <ref> shows the spatial sensitivity results of 3 × 3 convolution and 5 × 5 convolution in AlexNet. We can find that in different networks and different sizes of convolution kernels, the central position always shows more importance, which further supports our findings. At the same time, we also have a new observation, that is, the cross position of the convolution kernel tends to show more significant importance than the corner position. This observation prompts us to question whether selecting elements from the cross positions for learning new knowledge could potentially enhance performance. To answer this question, we conduct an experiment with four distinct element selection strategies: 3 × 3 convolution, 3 × 1 convolution, 1 × 3 convolution, and 1 × 1 convolution. As shown in Table <ref>, the 3 × 1 and 1 × 3 convolutions do not yield superior performance. On the contrary, the 1 × 1 convolution demonstrated improved performance, while the performance of the 3 × 3 convolution is slightly inferior. This can potentially be attributed to the fact that excessive parameter updates may induce the model to forget previously learned knowledge. This observation further supports our findings regarding the beneficial effect of our approach on the stability-plasticity balance in incremental learning. § CONCLUSION In conclusion, this study proposes a simple but effective edge-friendly incremental learning framework. Our empirical study find that the central kernel is pivotal for maximizing knowledge intensity when learning new data, while freezing other kernels can effectively balance new knowledge learning and catastrophic forgetting. We further propose a center-sensitive kernel optimization framework and dynamic channel element selection strategies significantly reduce the cost of gradient calculations and back-propagation. Besides, the proposed dynamic channel element selection strategy facilitate a sparse orthogonal gradient projection, further reducing optimization complexity. 
Extensive experiments demonstrate our method is efficient and effective, indicating its potential for advancing edge intelligence in dynamic environments. Future work will continue to explore the performance and applicability of on-device incremental learning. unsrt § ADDITIONAL INFORMATION ON EXPERIMENT SETUP Datasets. CIFAR-100 contains 100 classes, each with 500 training images and 100 testing images. TinyImageNet consists of 200 classes, each comprising 500 training images and 50 testing images. Experiment Details. The parameters are optimized using the Adam optimizer <cit.>, with a weight decay of 0.0005. The initial learning rate is set to 0.001, which is reduced to 0.1 of the original every 45 epochs. The batch size is set to 128. We empirically set the channel sparsity rate s to 0.5. All models are implemented within the PyTorch framework [https://pytorch.org/]. To ensure an equitable baseline for incremental learning, we match the same pre-trained accuracy of all datasets as <cit.>. Evaluation Metrics Explanation. Training FLOPs. Floating Point Operations Per Second (FLOPs) is a common used metric to measure the the computational cost of a model. The FLOPs of the forward pass are calculated as the number of multiplications and additions in each network layer. Consider the i-th layer 𝐖^i∈ℝ^D × C × K × K, with input feature X^i∈ℝ^B × C × H × W and output feature Y^i∈ℝ^B × D × H' × W', then Y^i = X^i𝐖^i. During the backpropagation, the two primary computational steps involve the calculation of the input gradient and the parameter gradient: 𝐆^i_X^i = 𝐆^i_Y^i*rot180(𝐖^i), 𝐆^i_𝐖^i = 𝐆^i_X^i*X^i, where 𝐆^i_Y^i denotes the output gradient and rot180(·) signifies the transposition operation. Therefore, the computational cost of the backpropagation process is typically calculated as twice that of forward computation process <cit.>. Memory Footprint. Following work <cit.>, the memory overhead during the training of a model primarily comprises components: activation, model weights and gradient. A conventional backpropagation implementation for a convolution layer within the model relies on the input features from the forward pass. Consequently, these features are stored in memory during the forward computation subsequent use in the backward propagation stage. Given the input feature of the i-th layer X^i∈ℝ^B × C × H × W, the number of intermediate activation in this layer is calculated as B_i× C_i× H_i× W_i. The weight of the i-th layer is represented as 𝐖^i∈ℝ^D × C × K × K, and then the number of the model parameters can be expressed as D_i× C_i× K_i× K_i. The computation method for the gradient matrix is similar to that for the weight, therefore, the total number of the gradient can also be represented as D_i× C_i× K_i× K_i. Typically, activations, model weights, and gradients are stored in memory as 32-bit floating-point representations, equivalent to 4 bytes. Therefore, considering the aforementioned three factors, the total memory overhead can be expressed as (2 × B_i× C_i× H_i× W_i+2 × (D_i× C_i× K_i× K_i)) × 4. § MEMORY ANALYSIS The proposed method consider the primary factors contributing to the significant memory overhead during training, and introduces effective strategies that are specifically designed to mitigate these issues. Specifically, under the proposed learning mechanism bolstered by the center-sensitive kernel optimization and dynamic channel element selection strategies, the trainable parameters can achieve independent gradient computation at the parameter level. 
Therefore, the gradient of the trainable parameters 𝐖̃_α^i∈ℝ^D × (sC) × 1 × 1 for the i-th layer is quantitatively limited to D_i× (sC_i), thereby achieving a memory saving of around K × K/s times in terms of gradient, where s ∈ [0, 1] denotes sparsity. GF is an effective method to facilitate training on edge devices, which mitigates the memory usage during the training process by employing a patch approximation strategy for activation. It simplify the input feature X^i∈ℝ^B × C × H × W to approximated X_a^i∈ℝ^B × C ×⌈H/r⌉×⌈W/r⌉, where r the size of patch. Despite its efficacy in enabling efficient training on edge devices, it exhibits a significant issue of catastrophic forgetting during the incremental acquisition of new knowledge. Drawing inspiration from this, we adopt GF as our foundational method and propose an efficient and effective incremental learning strategy designed for edge devices. This strategy enhances the capacity of edge devices to learn incrementally while mitigating the issue of catastrophic forgetting. As the proposed method is orthogonal to GF, it also benefits from its advantage in saving memory overhead for activation. In this study, the implementation of the incremental learning strategy necessitates additional memory allocation. Conventional orthogonal gradient projection-based incremental learning strategies indeed impose a significant memory overhead attributed to the SVD operation to covariance matrix. However, benefiting from the proposed center-sensitive kernel optimization mechanism coupled with the dynamic channel element selection strategy, we further develop a sparse orthogonal gradient projection incremental learning strategy. Remarkably, this strategy ensures the efficacy of the original version while incurring only a minimal memory overhead. Theoretically, in the conventional orthogonal gradient projection-based method, the memory complexity of the SVD operation of the covariance matrix M∈ℝ^C × C is O(C^2). In the proposed method, the memory complexity of sparse SVD is only O((sC)^2), achieving a memory saving of (1/s)^2 times. § LIMITATIONS. The Center-Sensitive Kernel Optimization Mechanism proposed in this paper achieves a significant reduction in computational and memory load within an acceptable performance degradation range by updating a subset of parameters. Although the current incremental learning researches are more concerned with convolutional networks and our method shows good generality under arbitrary convolutional backbone, it is interesting to explore parameter selection strategies under more diverse network structures, such as Transformers. We will continue to follow up on this topic under various types of architectures and further improve our work accordingly. § POTENTIAL IMPACT. Although the proposed method can significantly reduce resource overhead and demonstrate the potential of on-device training, it should be used with careful consideration of the negative performance degradation caused by improving training efficiency, even if such degradation is currently acceptable. However, users should be more careful about the balance between performance and resource overhead when facing scenarios that require high accuracy of prediction results, such as medical treatment. Therefore, we encourage users to further apply the strategy proposed in this paper after fully considering the demand of accuracy in application scenarios.
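To make the sparse orthogonal gradient projection and the memory accounting above concrete, here is a minimal NumPy sketch (an illustration under the stated assumptions, not the authors' implementation): it builds the covariance of the old-task input features restricted to the s·C selected channels, extracts its approximate null space by SVD, and projects the gradient of the selected parameters onto it.

import numpy as np

def sparse_null_space_projector(feats, eps=1e-3):
    # feats: (N, sC) old-task input features restricted to the selected channels.
    # The covariance is (sC x sC), so the SVD costs O((sC)^3) time and O((sC)^2)
    # memory instead of O(C^3) / O(C^2) with all channels.
    cov = feats.T @ feats / feats.shape[0]
    u, sv, _ = np.linalg.svd(cov)
    null_dirs = u[:, sv <= eps * sv.max()]     # directions carrying (almost) no old-task signal
    return null_dirs @ null_dirs.T             # projector onto the approximate null space

def project_gradient(grad, projector):
    # grad: (D, sC) gradient of the selected center (1x1) parameters W_alpha.
    # Restricting the update to the null space leaves old-task responses
    # (approximately) unchanged, mitigating forgetting.
    return grad @ projector

# With s = 0.5 and C = 512, the SVD operates on a 256 x 256 covariance matrix
# instead of 512 x 512 (a (1/s)^2 = 4x memory saving), and the stored gradient
# has D x 256 elements instead of the D x 512 x 3 x 3 of a full 3x3 layer.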
http://arxiv.org/abs/2406.08938v1
20240613090722
Mirror and Preconditioned Gradient Descent in Wasserstein Space
[ "Clément Bonet", "Théo Uscidda", "Adam David", "Pierre-Cyril Aubin-Frankowski", "Anna Korba" ]
math.OC
[ "math.OC", "cs.LG" ]
Mirror and Preconditioned Gradient Descent in Wasserstein Space Clément Bonet Théo Uscidda Adam David Pierre-Cyril Aubin-Frankowski Anna Korba § ABSTRACT As the problem of minimizing functionals on the Wasserstein space encompasses many applications in machine learning, different optimization algorithms on ℝ^d have received their counterpart analog on the Wasserstein space. We focus here on lifting two explicit algorithms: mirror descent and preconditioned gradient descent. These algorithms have been introduced to better capture the geometry of the function to minimize and are provably convergent under appropriate (namely relative) smoothness and convexity conditions. Adapting these notions to the Wasserstein space, we prove guarantees of convergence of some Wasserstein-gradient-based discrete-time schemes for new pairings of objective functionals and regularizers. The difficulty here is to carefully select along which curves the functionals should be smooth and convex. We illustrate the advantages of adapting the geometry induced by the regularizer on ill-conditioned optimization tasks, and showcase the improvement of choosing different discrepancies and geometries in a computational biology task of aligning single-cells. § INTRODUCTION Minimizing functionals on the space of probability distributions has become ubiquitous in Machine Learning for e.g. sampling <cit.>, generative modeling <cit.> or learning neural networks <cit.>, and is a challenging task as it is an infinite-dimensional problem. Wasserstein gradient flows <cit.> provide an elegant way to solve such problems on the Wasserstein space, i.e., the space of probability distributions with bounded second moment, equipped with the 2-Wasserstein distance from optimal transport (OT). These flows provide continuous paths of distributions decreasing the objective functional and can be seen as analogous to Euclidean gradient flows <cit.>. Their implicit time discretization, referred to as the JKO scheme <cit.>, has been studied in depth <cit.>. In contrast, explicit schemes, despite being easier to implement, have been less investigated. Most previous works focus on the optimization of a specific objective functional with a time-discretization of its gradient flow with the 2-Wasserstein metric. For instance, the forward Euler discretization leads to Wasserstein gradient descent. The latter takes the form of gradient descent (GD) on the positions of particles for functionals with a closed form over discrete measures, e.g. the Maximum Mean Discrepancy (MMD), which can be of interest to train neural networks <cit.>. For objectives involving absolutely continuous measures, such as the Kullback-Leibler (KL) divergence for sampling, other discretizations can be easily computed, such as the Unadjusted Langevin Algorithm (ULA) <cit.>.
This leaves the question open of assessing the theoretical and empirical performance of other optimization algorithms relying on alternative geometries and time-discretizations. In the optimization community, a recent line of works has focused on extending the methods and convergence theory beyond the Euclidean setting by using more general costs for the gradient descent scheme <cit.>. For instance, mirror descent (MD), originally introduced by <cit.> to solve constrained convex problems, uses a cost that is a divergence defined by a Bregman potential <cit.>. Mirror descent benefits from convergence guarantees for objective functions that are relatively smooth in the geometry induced by the (Bregman) divergence <cit.>, even if they do not have a Lipschitz gradient, i.e., are not smooth in the Euclidean sense. More recently, a closely related scheme, namely preconditioned gradient descent, was introduced in <cit.>. It can be seen as a dual version of the mirror descent algorithm, where the role of the objective function and Bregman potential are exchanged. In particular, its convergence guarantees can be obtained under relative smoothness and convexity of the Fenchel transform of the potential, with respect to the objective. This algorithm appears more efficient to minimize the gradient magnitude than mirror descent <cit.>. The flexible choice of the Bregman divergence used by these two schemes enables to design or discover geometries that are potentially more efficient. Mirror descent has already attracted attention in the sampling community, and some popular algorithms have been extended in this direction. For instance, ULA was adapted into the Mirror Langevin algorithm <cit.>. Other sampling algorithms have received their counterpart mirror versions such as the Metropolis Adjusted Langevin Algorithm <cit.>, diffusion models <cit.>, Stein Variational Gradient Descent (SVGD) <cit.>, or even Wasserstein gradient descent <cit.>. Preconditioned Wasserstein gradient descent has been also recently proposed for specific geometries in <cit.> to minimize the KL in a more efficient way, but without an analysis in discrete time. All the previous references focus on optimizing the KL as an objective, while Wasserstein gradient flows have been studied in machine learning for different functionals such as more general f-divergences <cit.>, interaction energies <cit.>, MMDs <cit.> or Sliced-Wasserstein (SW) distances <cit.>. In this work, we propose to bridge this gap by providing a general convergence theory of both mirror and preconditioned gradient descent schemes for general target functionals, and investigate as well empirical benefits of alternative transport geometries for optimizing functionals on the Wasserstein space. We emphasize that the latter is different from <cit.>, wherein mirror descent is defined in the Radon space of probability distributions, using the flat geometry defined by TV or L^2 norms on measures, see <Ref> for more details. Contributions. We are interested in minimizing a functional :_2(^d)→∪{+∞} over probability distributions, through schemes of the form, for k≥ 0, _k+1 = _∈ L^2(μ_k) ⟨ℱ(μ_k), -𝕀⟩_L^2(μ_k)+1/τ(,𝕀) , μ_k+1 = (_k+1)_#μ_k, =-1 with different costs :L^2(μ_k)× L^2(μ_k) →_+, and in providing convergence conditions. While we can recover a map =_k ∘_k-1…∘_1 such that μ_k=_#μ_0, the scheme (<ref>) proceeds by successive regularized linearizations retaining the Wasserstein structure, since the tangent space to _2(^d) at μ is a subset of L^2(μ) <cit.>. 
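For reference, the simplest instance of the scheme above, with the quadratic cost, is Wasserstein gradient descent, which acts as plain gradient descent on particle positions. Below is a minimal NumPy sketch for an illustrative potential-energy objective F(μ) = ∫ V dμ with V(x) = ½‖x‖² (a toy choice, not one of the paper's experiments).

import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 200, 2, 0.1
grad_V = lambda X: X                       # V(x) = 1/2 ||x||^2, so grad F(mu) acts as grad V particle-wise

X = rng.normal(size=(n, d)) + 3.0          # particles x_1, ..., x_n representing mu_0
for _ in range(100):
    X = X - tau * grad_V(X)                # forward Euler: plain GD on the particle positions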
This paper is organized as follows. In <Ref>, we provide some background on Bregman divergences and differentiability over the Wasserstein space. In <Ref>, we consider Bregman divergences on L^2(μ) for the cost in (<ref>), generalizing the mirror descent scheme to the Wasserstein space. In <Ref>, we consider alternative costs in (<ref>), that are analogous to OT distances with translation-invariant cost, extending the dual space preconditioning scheme to the latter space. Finally, in <Ref>, we apply the two schemes to different objective functionals, including standard free energy functionals such as interaction energies and KL divergence, but also to Sinkhorn divergences <cit.> or SW <cit.> with polynomial preconditioners on single-cell datasets. Notation. Consider the set _2(^d ) of probability measures μ on ^d with finite second moment and ⊂_2(^d) its subset of absolutely continuous probability measures with respect to the Lebesgue measure. For any μ∈_2(^d ), we denote by L^2(μ) the Hilbert space of functions f : ^d →^d such that ∫f^2 dμ < ∞ equipped with the norm ‖·‖_L^2(μ) and inner product ·,·_L^2(μ). For a Hilbert space X, the Fenchel transform of f:X→ is f^*(y)=sup_x∈ X ⟨ x,y⟩ - f(x). Given a measurable map :^d →^d and μ∈_2(^d ), _#μ is the pushforward measure of μ by ; and ⋆μ = ∫(·-x)μ(x). For μ,ν∈_2(^d ), the 2-Wasserstein distance is _2^2 (μ, ν) = inf_γ∈Π(μ,ν)∫x-y^2 γ(x,y), where Π(μ,ν) is the set of couplings between μ and ν, and we denote by Π_o(μ,ν) the set of optimal couplings. We refer to the metric space (_2(^d ),_2) as the Wasserstein space. § BACKGROUND In this section, we fix μ∈_2(^d) and introduce first the Bregman divergence on L^2(μ) along with the notions of relative convexity and smoothness that will be crucial in the analysis of the optimization schemes. Then, we introduce the differential structure and computation rules for differentiating a functional :_2(^d)→ along curves and discuss notions of convexity on _2(^d). We refer the reader to <Ref> and <Ref> for more details on L^2(μ) and the Wasserstein space respectively. Finally, we introduce the mirror descent and preconditioned gradient descent on ^d. Bregman divergence on L^2(μ). <cit.> defined the Bregman divergence of Fréchet differentiable functionals. In our case, we only need Gâteaux differentiability. In this paper, ∇ refers to the Gâteaux differential, which coincides with the Fréchet derivative if the latter exists. Let ϕ_μ:L^2(μ)→ℝ be convex and continuously Gâteaux differentiable. The Bregman divergence is defined for all ,∈ L^2(μ) as _ϕ_μ(,) = ϕ_μ() - ϕ_μ() - ⟨∇ϕ_μ(),-⟩_L^2(μ). We use the same definition on ^d. The map ϕ_μ (respectively ∇ϕ_μ) in the definition of _ϕ_μ above is referred to as the Bregman potential (respectively mirror map). If ϕ_μ is strictly convex, then _ϕ_μ is a valid Bregman divergence, i.e. it is positive and separates maps μ-almost everywhere (a.e.). In particular, for ϕ_μ()=1/2_L^2(μ)^2, we recover the L^2 norm as a divergence _ϕ_μ(,) = 1/2-_L^2(μ)^2. Bregman divergences have received a lot of attention as they allow to define provably convergent schemes for functions which are not smooth in the standard (e.g. Euclidean) sense <cit.>, and thus for which gradient descent is not appropriate. These guarantees rely on the notion of relative smoothness and relative convexity <cit.>, which we introduce now on L^2(μ). Let ψ_μ, ϕ_μ:L^2(μ)→ℝ convex and continuously Gâteaux differentiable. 
We say that ψ_μ is -smooth (respectively -convex) relative to ϕ if and only if for all ,∈ L^2(μ), _ψ_μ(,) ≤_ϕ_μ(,) (respectively _ψ_μ(,) ≥_ϕ_μ(,)). =-1 Similarly to the Euclidean case <cit.>, relative smoothness and convexity are equivalent with respectively ϕ_μ -ψ_μ and ψ_μ-ϕ_μ being convex (see <Ref>). Yet, proving the convergence of (<ref>) requires only that these properties hold at specific functions (directions), a fact we will soon exploit. In some situations, we need the L^2 Fenchel transform ϕ_μ^* of ϕ_μ to be differentiable, e.g. to compute its Bregman divergence _ϕ_μ^*. We show in <Ref> that a sufficient condition to satisfy this property is for ϕ_μ to be strictly convex, lower semicontinuous and superlinear, i.e. lim_→∞ϕ_μ()/_L^2(μ)=+∞. Moreover, in this case, (∇ϕ_μ)^-1 = ∇ϕ_μ^*. When needed, we will suppose that ϕ_μ satisfies this. Differentiability on (_2(^d ),_2). =-1 Let :_2(^d)→∪{+∞}, and denote D()={μ∈𝒫_2(^d), (μ)<+∞} the domain of and D(_μ) = {∈ L^2(μ), _#μ∈ D()} the domain of _μ defined as _μ():=(_#μ) for all ∈ L^2(μ). In the following, we use the differential structure of (_2(^d), _2) introduced in <cit.>, and we say that ℱ(μ) is a Wasserstein gradient of at μ∈ D() if for any ν∈_2(^d) and any optimal coupling γ∈Π_o(μ,ν), (ν) = (μ) + ∫⟨(μ)(x), y-x⟩ γ(x,y) + o(_2(μ,ν)). If such a gradient exists, then we say that is _2-differentiable at μ <cit.>. The differentiability of _μ and are clearly related. Indeed, if satisfies (<ref>), _μ defined as above is Fréchet differentiable (<Ref>). Moreover there is a unique gradient belonging to the tangent space of _2(^d) verifying (<ref>) <cit.>. We will always restrict ourselves to this particular gradient, as it satisfies, for all ∈ D(_μ), ∇_μ()=(_#μ)∘, see <Ref>. _2-differentiable functionals include c-Wasserstein costs, potential energies (μ) = ∫ Vdμ or interaction energies 𝒲(μ) = ∬ W(x-y) dμ(x)dμ(y) for V and W differentiable and L-smooth <cit.>. However, entropy functionals, e.g. the negative entropy defined as (μ)=∫log(ρ(x))dμ(x) for distributions μ admitting a density ρ w.r.t. the Lebesgue measure, are not _2-differentiable. In this case, we can consider subgradients (μ) at μ for which (<ref>) becomes an inequality. To guarantee that the Wasserstein subgradient is not empty, we need ρ to satisfy some Sobolev regularity, see e.g. <cit.> or <cit.>. Then, if ∇logρ∈ L^2(μ), the only subgradient of in the tangent space is (μ)=∇logρ, see <cit.> and <cit.>. Free energies write as sums of potential, interaction and entropy terms <cit.>. It is notably the case for the KL to a fixed target distribution, that is the sum of a potential and entropy term <cit.>, or the MMD as a sum of a potential and interaction term <cit.>. Examples of functionals. =-1 The definitions of Bregman divergences on L^2(μ) and of _2-differentiability enable us to consider alternative Bregman potentials than the L^2(μ)-norm mentioned above. For instance, for V and W convex, differentiable and L-smooth with W even, we can use potential energies ϕ_μ^V():=(_#μ), for which _ϕ_μ^V(,)=∫_V((x),(x))dμ(x) where _V is the Bregman divergence of V on ^d. Notice that ϕ_μ()=1/2^2_L^2(μ) is a specific example of a potential energy where V=1/2·^2. In particular, we have (μ)=∇ V. We will also consider interaction energies ϕ_μ^W() := 𝒲(_#μ), for which _ϕ_μ^W(,) = ∬_W((x)-(x'), (x)-(x'))dμ(x)dμ(x') (see <Ref>). In that case, 𝒲(μ) = ∇ W ⋆μ. We will also use ϕ_μ^()=(_#μ) with the negative entropy. 
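On an empirical measure supported on n particles these Bregman divergences are finite sums; the following NumPy helpers are a minimal sketch, where V, gradV, W, gradW are user-supplied vectorized functions and Tx, Sx denote the arrays of pushed particles T(x_i), S(x_i).

import numpy as np

def bregman_potential(V, gradV, Tx, Sx):
    # D_{phi_mu^V}(T, S) for an empirical measure mu = (1/n) sum_i delta_{x_i}:
    # particle-wise Bregman divergence of V between Tx = T(x_i) and Sx = S(x_i).
    return np.mean(V(Tx) - V(Sx) - np.sum(gradV(Sx) * (Tx - Sx), axis=-1))

def bregman_interaction(W, gradW, Tx, Sx):
    # D_{phi_mu^W}(T, S): Bregman divergence of the kernel W on all pairwise
    # differences, averaged over mu x mu.
    dT = Tx[:, None, :] - Tx[None, :, :]
    dS = Sx[:, None, :] - Sx[None, :, :]
    return np.mean(W(dT) - W(dS) - np.sum(gradW(dS) * (dT - dS), axis=-1))

# With V(x) = 1/2 ||x||^2 the potential divergence reduces to 1/2 ||T - S||^2_{L^2(mu)}:
V = lambda x: 0.5 * np.sum(x ** 2, axis=-1)
gradV = lambda x: x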
Note that Bregman divergences on the Wasserstein space using these functionals were proposed by <cit.>, but only for =𝕀 and OT maps . Convexity and smoothness in (_2(^d ),_2). =-1 In order to study the convergence of gradient flows and their discrete-time counterparts, it is important to have suitable notions of convexity and smoothness. On (_2(^d ),_2), different such notions have been proposed based on specific choices of curves. The most popular one is to require the functional to be -convex along geodesics (see <Ref>), which are of the form μ_t = ((1-t)𝕀 + t _μ_0^μ_1)_#μ_0 if μ_0∈ and μ_1∈_2(^d), with _μ_0^μ_1 the OT map between them. In that setting, /2_2^2(μ_0,μ_1) = /2_μ_0^μ_1-𝕀_L^2(μ_0)^2 ≤(μ_1)-(μ_0)-⟨(μ_0), _μ_0^μ_1-𝕀⟩_L^2(μ_0). =-1 For instance, free energies such as potential or interaction energies with convex V or W, or the negative entropy, are convex along geodesics <cit.>. However, some popular functionals, such as the 2-Wasserstein distance μ↦1/2_2^2(μ,η) itself, for a given η∈_2(^d), are not convex along geodesics. Instead <cit.> showed that it was sufficient for the convergence of the gradient flow to be convex along other curves, e.g. along particular generalized geodesics for the 2-Wasserstein distance <cit.>, which, for μ,ν∈_2(^d), are of the form μ_t = ((1-t)_η^μ + t _η^ν)_#η for _η^μ, T_η^ν OT maps from η to μ and ν. Observing that for ϕ_μ() = 1/2_L^2(μ)^2, we can rewrite (<ref>) as _ϕ_μ_0(_μ_0^μ_1,𝕀) ≤__μ_0(_μ_0^μ_1, 𝕀), we see that being convex along geodesics boils down to being convex in the L^2 sense for =𝕀 and chosen as an OT map. This observation motivates us to consider a more refined notion of convexity along curves. Let μ∈_2(^d), ,∈ L^2(μ) and for all t∈ [0,1], μ_t = (_t)_#μ with _t = (1-t)+t. We say that :_2(^d)→ is -convex (resp. -smooth) relative to :_2(^d)→ along t↦μ_t if for all s,t∈ [0,1], __μ(_s,_t)≥__μ(_s, _t) (resp. __μ(_s,_t) ≤__μ(_s,_t)). =-1 Notice that in contrast with <Ref>, <Ref> is stated for a fixed distribution μ and directions (,), and involves comparisons between Bregman divergences depending on μ and curves (_s)_s∈[0,1] depending on ,. The larger family of and for which <Ref> holds, the more restricted is the notion of convexity of - (resp. of -) on _2(^d). For instance, 2-Wasserstein generalized geodesics with anchor η∈_2(^d) correspond to considering , as all the OT maps originating from η, among which geodesics are particular cases when taking η=μ (hence =𝕀). If we furthermore ask for -convexity to hold for all μ∈_2(^d) and ,∈ L^2(μ) (i.e., not only OT maps), then we recover the convexity along acceleration free-curves as introduced in <cit.>. Our motivation behind <Ref> is that the convergence proofs of MD and preconditioned GD require relative smoothness and convexity properties to hold only along specific curves. Mirror (MD) and preconditioned gradient descent (PGD) on ^d. These schemes read respectively as ∇ϕ (x_k+1)-∇ϕ (x_k)=-τ∇ f(x_k) <cit.> and y_k+1- y_k=-τ∇ h^* (∇ g(y_k)) <cit.>, where the objectives f,g and the regularizers h,ϕ are convex C^1 functions from ^d to . The algorithms are closely related since, using the Fenchel transform and setting g=ϕ^* and h^*=f, we see that, for y=∇ϕ (x), the two schemes are equivalent when permuting the roles of the objective and of the regularizer. For MD, convergence of f is ensured if f is both 1/τ-smooth and -convex relative to ϕ <cit.>. 
Concerning PGD, assuming that h,g are Legendre, g(y_n) converges to the minimum of g if h^* is both 1/τ-smooth and -convex relative to g^* with >0 <cit.>. § MIRROR DESCENT =-1 For every μ∈_2(^d), let ϕ_μ:L^2(μ)→ℝ be strictly convex, proper and differentiable and assume that the (sub)gradient (μ)∈ L^2(μ) exists. In this section, we are interested in analyzing the scheme (<ref>) where the cost is chosen as a Bregman divergence, i.e. _ϕ_μ as defined in <Ref>. This corresponds to a mirror descent scheme in _2(^d): _k+1 = _∈ L^2(μ_k) _ϕ_μ_k(,𝕀) + τ⟨ℱ(μ_k), -𝕀⟩_L^2(μ_k), μ_k+1 = (_k+1)_#μ_k. Iterates of MD. In all that follows, we assume that the iterates (<ref>) exist, which is true e.g. for a superlinear ϕ_μ_k, since the objective is a sum of linear functions and of the continuous ϕ_μ_k. In the previous section, we have seen that the second term in the proximal scheme (<ref>) can be interpreted as a linearization of the functional at μ_k for Wasserstein (sub-)differentiable functionals. Now define for all ∈ L^2(μ_k), () = _ϕ_μ_k(,𝕀) + τ⟨(μ_k), -𝕀⟩_L^2(μ_k). Then, deriving the first order conditions of (<ref>) as ∇(_k+1)=0, we obtain μ_k-a.e., ∇ϕ_μ_k(_k+1) = ∇ϕ_μ_k(𝕀) - τ(μ_k) ⟺_k+1 = ∇ϕ_μ_k^*(∇ϕ_μ_k(𝕀) - τ(μ_k)). Note that for ϕ_μ()=1/2^2_L^2(μ), the update (<ref>) translates as _k+1= 𝕀 - τ(μ_k), and our scheme recovers Wasserstein gradient descent <cit.>. This is analogous to mirror descent recovering gradient descent when the Bregman potential is chosen as the Euclidean squared norm in ^d <cit.>. We discuss in <Ref> the continuous formulation of (<ref>), showing it coincides with the gradient flow of the mirror Langevin <cit.>, the limit of the JKO scheme with Bregman groundcosts <cit.>, Information Newton's flows <cit.>, or Sinkhorn's flow <cit.> for specific choices of ϕ and . Our proof of convergence of the mirror descent algorithm will require the Bregman divergence to satisfy the following property, which is reminiscent of conditions of optimality for couplings in OT. For μ,ρ∈ and ν∈_2(^d), setting _ϕ_μ^μ,ν=__#μ=ν _ϕ_μ(,𝕀), _ϕ_ρ^ρ,ν=__#ρ =ν _ϕ_ρ(,𝕀), the functional ϕ_μ is such that, for any ∈ L^2(μ) satisfying _#μ=ρ, we have _ϕ_μ(_ϕ_μ^μ,ν,) ≥_ϕ_ρ(_ϕ_ρ^ρ,ν, 𝕀). The inequality in <Ref> can be interpreted as follows: the "distance" between ρ and ν is greater when observed from an anchor μ that differs from ρ and ν. We show that a sufficient condition for Bregman divergences to satisfy this assumption are the following conditions on the Bregman potential ϕ. Let μ,ρ∈ and ν∈_2(^d). Let ϕ_μ be a pushforward compatible functional, i.e. there exists ϕ:_2(^d)→ such that for all ∈ L^2(μ), ϕ_μ()=ϕ(_#μ). Assume furthermore ϕ(μ) and ϕ(ρ) invertible (on ^d). Then, ϕ_μ satisfies <Ref>. All the maps ϕ_μ^V, ϕ_μ^W and ϕ_μ^ defined in <Ref> satisfy the assumptions of <Ref> under mild requirements, see <Ref>. The proof of <Ref> is given in <Ref>. It relies on the definition of an appropriate optimal transport problem _ϕ(ν,μ) = inf_γ∈Π(ν,μ) ϕ(ν)-ϕ(μ)-∫⟨ϕ(μ)(y), x-y⟩ γ(x,y), and on the proof of existence of OT maps for absolutely continuous measures (see <Ref>), which implies _ϕ(ν,μ) = _ϕ_μ(_ϕ_μ^μ,ν, 𝕀) with _ϕ_μ^μ,ν defined as in <Ref>. From there, we can conclude that ϕ_μ satisfies <Ref>. We notice that the corresponding transport problem recovers previously considered objects such as OT problems with Bregman divergence costs <cit.>, but is strictly more general (as our results pertain to the existence of OT maps), as detailed in <Ref>. =-1 We now analyze the convergence of the MD scheme. 
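Concretely, for an empirical measure on n particles and a potential Bregman potential ϕ_μ^V (whose mirror map acts particle-wise and is inverted through ∇V^*, as made explicit in the implementation discussion below), the update can be sketched in a few lines of NumPy. The ill-conditioned quadratic objective and the choice of V are illustrative, not one of the paper's experiments.

import numpy as np

rng = np.random.default_rng(0)
n, d, tau, n_steps = 200, 2, 0.5, 100

# Objective: potential energy F(mu) = int V_obj d mu with an ill-conditioned
# quadratic V_obj(x) = 1/2 x^T A x, so [grad F(mu_k)](x_i) = A x_i.
A = np.diag([100.0, 1.0])
A_inv = np.linalg.inv(A)
grad_F = lambda X: X @ A

# Mirror map induced by the potential Bregman phi^V with V(x) = 1/2 x^T A x:
# grad phi_mu(T) = (grad V) o T and its inverse is (grad V*) o (.), particle-wise.
grad_V = lambda X: X @ A
grad_V_star = lambda Y: Y @ A_inv

X = rng.normal(size=(n, d)) + 5.0          # particles representing mu_0
for _ in range(n_steps):
    # T_{k+1} = grad V* ( grad V(Id) - tau * grad F(mu_k) ), evaluated at each particle
    X = grad_V_star(grad_V(X) - tau * grad_F(X))

print("mean squared particle norm:", np.mean(np.sum(X ** 2, axis=1)))

Because the mirror geometry here matches the objective, the admissible step size does not degrade with the condition number of A, in contrast to plain Wasserstein gradient descent (V = ½‖·‖²) on the same objective.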
Under a relative smoothness condition along curves generated by =𝕀 and =_k+1 solutions of (<ref>) for all k≥ 0, we derive the following descent lemma, which ensures that ((μ_k))_k is non-increasing. Its proof can be found in <Ref> and relies on the three-point inequality <cit.>, which we extended to L^2(μ) in <Ref>. =-1 Let >0, τ≤1/. Assume for all k≥ 0, is -smooth relative to ϕ along t↦((1-t)𝕀 + t _k+1)_#μ_k, which implies _ϕ_μ_k(_k+1,𝕀) ≥__μ_k(_k+1, 𝕀). Then, for all k≥ 0, (μ_k+1) ≤(μ_k) - 1/τ_ϕ_μ_k(𝕀, _k+1). Assuming additionally the convexity of along the curves μ_t=((1-t)𝕀 + t _ϕ_μ^μ, ν)_#μ, t∈ [0,1] and that ϕ satisfies <Ref>, we can obtain global convergence. =-1 Let ν∈_2(^d), ≥ 0. Suppose <Ref> and the conditions of <Ref> hold, and that is -convex relative to ϕ along the curves t↦((1-t)𝕀 + t_ϕ_μ_k^μ_k,ν)_#μ_k. Then, for all k≥ 1, (μ_k) - (ν) ≤/(1-τ)^-k - 1_ϕ(ν,μ_0) ≤1 - τ/kτ_ϕ(ν, μ_0). Moreover, if >0, taking ν=μ^* the minimizer of , we obtain a linear rate: for all k≥ 0, _ϕ(μ^*, μ_k) ≤(1-τ)^k _ϕ(μ^*, μ_0). The proof of <Ref> can be found in <Ref>, and requires =-1 <Ref> to hold so that consecutive distances between iterates and the global minimizer telescope. This is not as direct as in the proofs of <cit.> over ^d, because the minimization problem of each iteration (<ref>) happens in a different space L^2(μ_k). We discuss in <Ref> how to verify the relative smoothness and convexity on some examples. In particular, when both and ϕ are potential energies, it is inherited from the relative smoothness and convexity on ^d, and the conditions are similar with those for MD on ^d. We also note that relative smoothness assumptions along descent directions as stated in <Ref> and relative strong convexity along optimal curves between the iterates and a minimizer as stated in <Ref> have been used already in the literature of optimization over measures in very specific cases, e.g. for descent results for the KL along SVGD <cit.> or for Sinkhorn convergence in <cit.>. We further analyze in <Ref> the convergence of Bregman proximal gradient scheme <cit.> for objectives of the form (μ)=(μ)+(μ) with non smooth; which includes the KL divergence decomposed as a potential energy plus the negative entropy. Implementation. =-1 We now discuss the practical implementation of MD on (_2(^d),_2) as written in (<ref>). If ϕ_μ is pushforward compatible, we have ∇ϕ_μ_k(_k+1) = ϕ((_k+1)_#μ_k)∘_k+1; but if ∇ϕ_μ_k^* is unknown, the scheme is implicit in _k+1. A possible solution is to rely on a root finding algorithm such as Newton's method to find the zero of ∇ at each step. However, this procedure may be computationally costly and scale badly w.r.t. the dimension and the number of samples, see <Ref>. Nonetheless, in the special case ϕ_μ^V()=∫ V∘ μ with V differentiable, strongly convex and L-smooth, since (μ) = ∇ V and (∇ V)^-1=∇ V^*, the scheme reads as ∀ k≥ 0, _k+1 = ∇ V^*∘(∇ V - τℱ(μ_k)). This scheme is analogous to MD in ^d <cit.> and has been introduced as the mirror Wasserstein gradient descent <cit.>. Moreover, for V=1/2·_2^2, as observed earlier, we recover the usual Wasserstein gradient descent, i.e. _k+1 = 𝕀 - τ(μ_k) <cit.>. The scheme can also be implemented for Bregman potentials that are not pushforward compatible. For specific ϕ, it recovers notably (mirrored) SVGD <cit.> or the Kalman-Wasserstein gradient descent <cit.>. We refer to <Ref> for more details. § PRECONDITIONED GRADIENT DESCENT =-1 As seen in <Ref>, preconditioned gradient descent on ^d has dual convergence conditions compared to MD. 
Our goal is to extend these to (<ref>) and _2(^d). Let μ∈_2(^d), h:^d→ proper and strictly convex on ^d. We consider in this section ϕ_μ^h()=∫ h∘ dμ and (,𝕀) = ϕ_μ_k^h((𝕀 - ) /τ)τ = ∫ h((x-(x))/τ)τ dμ_k(x). This type of discrepancy is analogous to OT costs with translation-invariant ground cost c(x,y)=h(x-y), which have been popular as they induce an OT map <cit.>. Such costs have been introduced e.g. in <cit.> to promote sparse transport maps. More generally, for ϕ_μ strictly convex, proper, differentiable and superlinear, we have (∇ϕ_μ)^-1 = ∇ϕ_μ^* and the following theory is still valid. For simplicity, we leave studying more general ϕ for future works. Here, the scheme (<ref>) results in: _k+1 = _∈ L^2(μ_k) ∫ h(x-(x)/τ)τ dμ_k(x) + ⟨ℱ(μ_k), -𝕀⟩_L^2(μ_k), μ_k+1 = (_k+1)_#μ_k. Deriving the first order conditions similarly to <Ref>, we obtain the following update: ∀ k≥ 0, _k+1 = 𝕀 - τ (∇ϕ_μ_k^h)^-1( (μ_k)) = 𝕀 - τ∇ h^*∘ℱ(μ_k). =-1 Notice that for h=1/2·_2^2 the squared Euclidean norm, ϕ_μ^h and ϕ_μ^h^* recover the squared L^2(μ) norm, and schemes (<ref>) and (<ref>) coincide. The scheme (<ref>) is analogous to preconditioned gradient descent <cit.>, which provides a dual alternative to mirror descent. For the latter, the goal is to find a suitable preconditioner h^* allowing to have convergence guarantees, or to speed-up the convergence for ill-conditioned problems. It was recently considered on the Wasserstein space by <cit.> and <cit.> with a focus on the KL divergence as objective and for h=·_p^p with p>1 <cit.> or h quadratic <cit.>. Moreover, their theoretical analysis was mostly done using the continuous formulation ∂_tμ_t - div(μ_t∇ h^* ∘(μ_t)) = 0 <cit.>, while we focus on deriving conditions for the convergence of the discrete scheme (<ref>) for more general functionals objectives. Convergence guarantees. =-1 Inspired by <cit.>, we now provide a descent lemma on (ϕ_μ_k^((μ_k)))_k under a technical inequality between the Bregman divergences of ϕ_μ_k^ and _μ_k for all k≥ 0. Additionally, we also suppose that is convex along the curves generated by =𝕀 and _k+1. This last hypothesis ensures that __μ_k(_k+1,𝕀)≥ 0, and thus that (ϕ_μ_k^((μ_k)))_k is non-increasing. Analogously to the Euclidean case, ϕ_μ^ quantifies the magnitude of the gradient, and provides a second quantifier of convergence leading to possibly different efficient methods compared to mirror descent <cit.>. The proof relies mainly on the three-point identity (see e.g. <cit.> or <Ref>) and algebra with the definition of Bregman divergences. Let >0. Assume τ≤1/, and for all k≥ 0, convex along t↦((1-t)_k+1 + t𝕀)_#μ_k and _ϕ_μ_k^((μ_k+1)∘_k+1, (μ_k)) ≤__μ_k(𝕀, _k+1). Then, for all k≥ 0, ϕ_μ_k+1^((μ_k+1)) ≤ϕ_μ_k^((μ_k)) - 1/τ__μ_k(_k+1,𝕀). =-1 Under an additional assumption of a reverse inequality between the Bregman divergences of ϕ_μ_k^ and _μ_k, and assuming that ϕ_μ^ attains its minimum in 0, we can show the convergence of the gradient quantified by ϕ^ (see <Ref>), and the convergence of ((μ_k))_k towards the minimum of . Let ≥ 0 and μ^*∈_2(^d) be the minimizer of . Assume the conditions of <Ref> hold, and for =_, _#μ_k=μ^* __μ_k(𝕀,), __μ_k(𝕀,)≤_ϕ_μ_k^((_#μ_k)∘, (μ_k)). Then, for all k≥ 1, since (μ^*)=0 and ϕ_μ_k^(0)=(0), ϕ_μ_k^((μ_k)) - h^*(0) ≤/(1-τ)^-k - 1((μ_0)-(μ^*)) ≤1-τ/τ k((μ_0)-(μ^*)). Moreover, assuming that attains its minimum at 0 and >0, converges towards its minimum at a linear rate, i.e. for all k≥ 0, (μ_k)-(μ^*) ≤(1-τ)^k ((μ_0)-(μ^*)). The proofs of <Ref> and <Ref> can be found in <Ref> and <Ref>. 
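To fix ideas, here is a schematic particle-level sketch of the update (<ref>) — an illustration under simplifying choices of ours, not the implementation used for the figures. We take the coordinate-wise cost h(z) = ∑_i |z_i|^p / p with p>1 mentioned above, for which ∇ h^*(y)_i = sign(y_i)|y_i|^{1/(p-1)}, and a potential energy objective so that the Wasserstein gradient at a particle is simply the gradient of the potential evaluated there; the function names below are ours.

import numpy as np

def grad_h_star(y, p):
    # conjugate gradient of h(z) = sum_i |z_i|^p / p:  (h*)'(y)_i = sign(y_i) |y_i|^(1/(p-1))
    q = p / (p - 1.0)
    return np.sign(y) * np.abs(y) ** (q - 1.0)

def preconditioned_step(X, wasserstein_grad, tau, p):
    # one step of (<ref>):  x  <-  x - tau * grad h*( nabla_W F(mu_k)(x) ), applied to each particle
    G = wasserstein_grad(X)            # (n, d) array of Wasserstein gradients at the particles
    return X - tau * grad_h_star(G, p)

# toy objective F(mu) = int V dmu with V(x) = ||x||_2^2 / 2, so the Wasserstein gradient is the identity map
X = np.random.randn(200, 2)
for _ in range(300):
    X = preconditioned_step(X, lambda Z: Z, tau=0.1, p=1.5)

For a quadratic h the loop reduces to plain Wasserstein gradient descent, consistently with the remark above that for h = 1/2·_2^2 the schemes (<ref>) and (<ref>) coincide.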
=-1 We now discuss sufficient conditions to obtain the inequalities between the Bregman divergences required in <Ref> and <Ref>. <cit.> showed on ^d for a cost h and an objective function g, that these conditions were equivalent to -smoothness and -convexity of the preconditioner (analogous to ϕ_μ^*) relative to the convex conjugate of the objective g^* (analogous to _μ^*). To write the inequalities we assumed as a relative smoothness/convexity property of ϕ_μ_k^ w.r.t. _μ_k^*, we would need at least to ensure that _μ_k^* is differentiable, as to define its Bregman divergence according to <Ref>, e.g. by assuming _μ_k strictly convex and superlinear (see <Ref>). The latter is true for several examples of functionals we already mentioned, such as potential or interaction energies with strongly convex potentials. In this case, the inequality between the Bregman divergences in <Ref> is equivalent with the smoothness of ϕ^_μ relative to _μ_k^* along t↦((1-t)(μ_k) + t(μ_k+1)∘_k+1)_#μ_k. In particular, for a potential energy, the conditions coincide with those of <cit.> in ^d. We refer to <Ref> for more details. § APPLICATIONS AND EXPERIMENTS In this section, we first discuss how to verify the relative convexity and smoothness between functionals in practice. Then, we provide some examples of mirror descent and preconditioned gradient descent on different objectives. We refer to <Ref> for more details on the experiments. Relative convexity of functionals. To assess relative convexity or smoothness as stated in <Ref>, we need to compare the Bregman divergences along the right curves. When both functionals are of the same type, for example potential (respectively interaction) energies, this property is lifted from the convexity and smoothness on ^d of the underlying potential functions (respectively interaction kernels) to _2(^d), see <Ref> for more details. When both are potential energies, the schemes (<ref>) and (<ref>) are equivalent to parallel MD and preconditioned GD since there are no interactions between the particles, and the conditions of convergences coincide with the ones obtained for MD and preconditioned GD on ^d. In other cases, this provide schemes that are novel to the best of our knowledge. For functionals which are not of the same type, it is less straightforward. Using equivalent notions of convexity (<Ref>), we may instead compare their Hessians along the right curves, see <Ref> for an example between an interaction and a potential energy. For a functional obtained as a sum = + with _μ and _μ convex, since __μ = __μ + __μ, __μ≥max{__μ, __μ}, and thus is 1-convex relative to and . This includes e.g. the KL divergence which is convex relative to the potential and the negative entropy. MD on interaction energies. We first focus on minimizing interaction energies 𝒲 with kernel W(z)=1/4z_Σ^-1^4 - 1/2z_Σ^-1^2 with Σ∈ S_d^++(), whose minimizer is an ellipsoid <cit.>. Since its Hessian norm can be bounded by a polynomial of degree 2, following <cit.>, W is smooth relative to K_4(z)=1/4z_2^4 + 1/2z_2^2 and 𝒲 is smooth relative to ϕ_μ() = ∬ K_4((x)-(y)) dμ(x)dμ(y). Supposing additionally that the distributions are compactly supported, we can show that 𝒲 is smooth relative to the interaction energy with K_2(z)=1/2z_2^2. For ill-conditioned Σ, the convergence can be slow. Thus, we also propose to use K_2^Σ(z) = 1/2z_Σ^-1^2 and K_4^Σ(z) = 1/4z_Σ^-1^4 +1/2z_Σ^-1^2. We illustrate these schemes on <Ref> and observe the convergence we expect for the schemes taking into account Σ. 
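All the schemes compared in <Ref> need the Wasserstein gradient of 𝒲 at an empirical measure, which has a simple pairwise expression since ∇ W(z) = (z^⊤Σ^{-1}z - 1)Σ^{-1}z. The following sketch (our own illustration, with the setup of <Ref>: n=100 particles, τ=0.1, 120 iterations, μ_0 = 𝒩(0, 0.25^2 I_2), symmetry constants absorbed into the step size) computes it and runs the plain _2-gradient step that the mirror variants modify.

import numpy as np

def interaction_wasserstein_grad(X, Sigma_inv):
    # nabla W(z) = (z^T Sigma^{-1} z - 1) Sigma^{-1} z, averaged over the second particle
    diff = X[:, None, :] - X[None, :, :]            # (n, n, d): x_i - x_j
    Sdiff = diff @ Sigma_inv                        # Sigma^{-1} (x_i - x_j), using the symmetry of Sigma^{-1}
    sq = np.einsum('ijk,ijk->ij', diff, Sdiff)      # ||x_i - x_j||_{Sigma^{-1}}^2
    return ((sq - 1.0)[..., None] * Sdiff).mean(axis=1)

Sigma = np.diag([100.0, 0.1])                       # ill-conditioned case, as in the experiments
Sigma_inv = np.linalg.inv(Sigma)
X = 0.25 * np.random.randn(100, 2)                  # samples from mu_0 = N(0, 0.25^2 I_2)
tau = 0.1
for _ in range(120):
    X -= tau * interaction_wasserstein_grad(X, Sigma_inv)   # plain Wasserstein GD baseline

The Σ-aware potential-type variant discussed in the next paragraph amounts to replacing the last update by X -= tau * interaction_wasserstein_grad(X, Sigma_inv) @ Sigma, while the interaction-type Bregman maps require an implicit solve.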
In practice, since ∇ϕ_μ() = (∇ K ⋆_#μ)∘, the scheme needs to be approximated using Newton's algorithm which can be computationally heavy. Using ϕ_μ^V()=∫ V∘ dμ with V=K_2^Σ, we obtain a more computationally friendly scheme with the same convergence, see <Ref>, but for which the smoothness is trickier to show. MD on KL. We now focus on minimizing (μ)=∫ Vdμ + (μ) for V(x)=1/2 x^TΣ^-1 x with Σ possibly ill-conditioned, whose minimizer is the Gaussian ν=(0,Σ), and for which _2-gradient descent is slow to converge. We study the MD scheme in (<ref>) with negative entropy as the Bregman potential (NEM), and compare it on <Ref> with the Forward-Backward (FB) scheme studied in <cit.> and the ideally preconditioned Forward-Backward scheme (PFB) with Bregman potential ϕ_μ^V (see (<ref>) in <Ref>). For computational purpose, we restrain the minimization in (<ref>) over affine maps, which can be seen as taking the gradient over the submanifold of Gaussians <cit.>. Starting from (0,Σ_0), the distributions stay Gaussian over the flow, and their closed-form is reported in (<ref>) (<Ref>). We note that this might not be the case for the scheme (<ref>), and thus that this scheme does not enter into the framework developed in the previous sections. Nonetheless, it demonstrates the benefits of using different Bregman potentials. We generate 20 Gaussian targets ν on ^10 with Σ=UDU^T, D diagonal and scaled in log space between 1 and 100, and U a uniformly sampled orthogonal matrices, and we report the averaged KL over time. Surprisingly, NEM, which does not require an ideal (and not available in general) preconditioner, is almost as fast to converge as the ideal PFB, and much faster than the FB scheme. Preconditioned GD for single-cells. =-1 Predicting the response of cells to a perturbation is a central question in biology. In this context, as the measuring process is destructive, feature descriptions of control and treated cells must be dealt with as (unpaired) source μ and target distributions ν. Following <cit.>, OT theory to recover a mapping T between these two populations has been used in <cit.>. Inspired by the recent success of iterative refinement in generative modeling, through diffusion <cit.> or flow-based models <cit.>, our scheme (<ref>) follows the idea of transporting μ to ν via successive and dynamic displacements instead of, directly, with a static map T̅. We model the transition from unperturbed to perturbed states through the (preconditioned) gradient flow of a functional (μ) = D(μ, ν) initialized at μ_0 = μ, where D is a distributional metric, and predict the perturbed population via μ̂ = min_μℱ(μ). We focus on the datasets used in <cit.>, consisting of cell lines analyzed using (i) 4i <cit.>, and (ii) scRNA sequencing <cit.>. For each profiling technology, the response to respectively (i) 34 and (ii) 9 treatments are provided. As in <cit.>, training is performed in data space for the 4i data and in a latent space learned by the scGen autoencoder <cit.> for the scRNA data. We use three metrics: the Sliced-Wasserstein distance SW_2^2 <cit.>, the Sinkhorn divergence S_ε,2^2 <cit.> and the energy distance ED <cit.>, and we compare the performances when minimizing this functional via preconditioned GD vs. (vanilla) GD. We measure the convergence speed when using a fixed relative tolerance tol=10^-3, as well as the attained optimal value ℱ(μ̂). Note that we follow <cit.> and additionally consider 40% of unseen (test) target cells for evaluation, i.e., for computing ℱ(μ̂) = D(μ̂, ν). 
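As an illustration of this pipeline — a schematic sketch with our own function names and synthetic stand-in data, not the exact code used for the figures — the loop below minimizes the energy distance ED between the source particles and the target sample by preconditioned gradient descent, anticipating the preconditioner h^* specified in the next paragraph; the per-particle gradient of ED is written in closed form, with normalization constants absorbed into τ.

import numpy as np

def unit_diffs(A, B, eps=1e-12):
    D = A[:, None, :] - B[None, :, :]
    return D / (np.linalg.norm(D, axis=-1, keepdims=True) + eps)

def energy_distance_grad(X, Y):
    # gradient w.r.t. the source particles of ED(mu, nu) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||
    return 2.0 * unit_diffs(X, Y).mean(axis=1) - 2.0 * unit_diffs(X, X).mean(axis=1)

def grad_h_star(G, a, eps=1e-12):
    # h*(g) = (||g||_2^a + 1)^{1/a} - 1 (next paragraph), so grad h*(g) = (||g||^a + 1)^{1/a - 1} ||g||^{a-2} g
    norm = np.linalg.norm(G, axis=-1, keepdims=True) + eps
    return (norm ** a + 1.0) ** (1.0 / a - 1.0) * norm ** (a - 2.0) * G

X = np.random.randn(300, 5)          # stand-in for control cells (source particles)
Y = np.random.randn(300, 5) + 2.0    # stand-in for treated cells (target sample)
tau, a = 1.0, 1.5                    # tau = 1 and a selected by grid search, as described below
for _ in range(200):
    X -= tau * grad_h_star(energy_distance_grad(X, Y), a)

Vanilla GD corresponds to dropping grad_h_star; the same loop applies to SW_2^2 or S_{ε,2}^2 provided their particle gradients are available, e.g. through automatic differentiation.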
As preconditioner, we use the one induced by h^*(x) = (x_2^a + 1)^1/a-1 with a > 0, which is well suited to minimize functionals which grow in x-x^*^a/(a-1) near their minimum <cit.>. We set the step size τ = 1 for all the experiments. Then, we tune a very simply: for a given metric D and a profiling technology, we pick a random treatment and select a ∈{1.25, 1.5, 1.75} by grid search, and we generalize the selected a for all the other treatments. Results are described in Figure <ref>: Preconditioned GD significantly outperforms GD over the 43 datasets, in terms of convergence speed and optimal value ℱ(μ̂). For instance, for D=S_2,ε^2, we converge in 10 times less iterations while providing, on average, a better estimate of the treated population. We also compare our iterative (non parametric) approach with the use of a static (non parametric) map in <Ref>. § CONCLUSION In this work, we extended two non-Euclidean optimization methods on ^d to the Wasserstein space, generalizing _2-gradient descent to alternative geometries. We investigated the practical benefits of these schemes, and provided rates of convergences for pairs of objectives and Bregman potentials satisfying assumptions of relative smoothness and convexity along specific curves. While these assumptions can be easily checked is some cases (e.g. potential or interaction energies) by comparing the Bregman divergences or Hessian operators in the Wasserstein geometry, they may be hard to verify in general. Different objectives such as the Sliced-Wasserstein distance or the Sinkhorn divergence, or alternative geometries to the Wasserstein-2 as studied in this work, require to derive specific computations on a case-by-case basis. We leave this investigation for future work. bool preprint or bool final Clément Bonet acknowledges the support of the center Hi! PARIS. Adam David gratefully acknowledges funding by the BMBF 01|S20053B project SALE. Pierre-Cyril Aubin-Frankowski was funded by the FWF project P 36344-N. Anna Korba acknowledges the support of ANR-22-CE23-0030. plainnat PART: *Appendix toc § RELATED WORKS Wasserstein Gradient flows with respect to non-Euclidean geometries. Several existing schemes are based on time-discretizations of gradient flows with respect to optimal transport metrics, but different than the Wasserstein-2 distance. To simplify the computation of the backward scheme, <cit.> added an entropic regularization into the JKO scheme while <cit.> considered using the Sliced-Wasserstein distance instead. More recently, <cit.> suggested using Bregman divergences e.g. when geodesic distances are not known in closed-forms. The most popular objective in Wasserstein gradient flows is the KL. However, this can be intricate to compute as it requires the evaluation of the density at each step, which is not known for particles, and thus requires approximations using kernel density estimators <cit.> or density ratio estimators <cit.>. Restricting the velocity field to a reproducing kernel Hilbert space (RKHS), an update in closed-form can be obtained, which is given by the SVGD algorithm <cit.>. This algorithm can also be seen as using an alternative Wasserstein metric <cit.>. However, the restriction to RKHS can hinder the flexibility of the method. This motivated the introduction of new schemes based on using the Wasserstein distance with a convex translation invariant cost <cit.>. 
Particle systems preconditioned by they empirical covariance matrix have also been recently considered, and can be seen as discretization of the Kalman-Wasserstein or Covariance Modulated gradient flow <cit.>. Mirror descent with flat geometry. The space of probability distributions can be endowed with different metrics. When endowed with the Fisher-Rao metric instead of the Wasserstein distance, the geometry becomes very different. Notably, the shortest path between the two distributions is now a mixture between them. In this situation, the gradient is the first variation. <cit.> studied the mirror descent in this space and notably showed connections with Sinkhorn algorithm when the mirror map and the optimized function are KL divergences. <cit.> extended the mirror descent algorithm for more general time steps, and notably recovered the “Wasserstein Mirror Flow” proposed by <cit.> as a special case. Bregman divergence on _2(^d). Several works introduced Bregman divergences on _2(^d). <cit.> first studied the existence of Monge maps for the OT problem with Bregman costs c(x,y)=_V(x,y) and symmetrized Bregman costs c(x,y)=_V(x,y)+_V(y,x). For Bregman costs, the resulting OT problem was named the Bregman-Wasserstein divergence and its properties were studied in <cit.>. The Bregman-Wasserstein divergence has also been used by <cit.> to show the convergence of the Mirror Langevin algorithm while <cit.> studied its JKO scheme with KL objective. <cit.> introduced the notion of Bregman divergence on Wasserstein space for a geodesically strictly convex :_2(^d)→ as ∀μ,ν∈_2(^d), _(μ,ν) = (μ)-(ν) - ⟨(ν), _ν^μ - 𝕀⟩_L^2(ν), where _ν^μ is the OT map between ν and μ w.r.t _2. The Bregman divergence used in our work and as defined in <Ref> is more general as it allows using more general maps and contains as special case (<ref>). <cit.> studied properties of this Bregman divergence for different functionals and provided closed-forms for one-dimensional distributions or Gaussian, but did not use it to define a mirror scheme. Mirror descent on _2(^d). <cit.> defined a mirror flow by using the continuous formulation. They focused on KL objectives with Bregman potential ϕ(μ)=1/2_2^2(μ,ν) with some reference measure ν∈_2(^d), and defined the flow as the solution of φ(μ_t) = ϕ(μ_t) d/dtφ(μ_t) = - (μ_t). We note that ϕ is pushforward compatible and hence enters our framework. Also related to our work, <cit.> studied a Wasserstein Newton's flow, which, analogously to the relation between Newton's method and mirror descent <cit.>, is another discretization of our scheme for ϕ=. We clarify the link with the Mirror Descent algorithm we define in this work with the previous continuous formulation above in <Ref>. § BACKGROUND ON L^2(Μ) §.§ Differential calculus on L^2(μ) We recall some differentiability definitions on the Hilbert space L^2(μ) for μ∈_2(^d). Let ϕ:L^2(μ)→ℝ. We start by recalling the notions of Gâteaux and Fréchet derivatives. A function ϕ:L^2(μ)→ℝ is said to be Gâteaux differentiable at T if there exists an operator ϕ'():L^2(μ)→ℝ such that for any direction h∈ L^2(μ), ϕ'()(h) = lim_t→ 0 ϕ(+th) - ϕ()/t, and ϕ'() is a linear function. The operator ϕ'() is called the Gâteaux derivative of ϕ at and if it exists, it is unique. The Fréchet derivative of ϕ denoted δϕ is defined implicitly by ϕ(+th) = ϕ() + tδϕ(,h) + t o(h). If ϕ is Fréchet differentiable, then it is also Gâteaux differentiable, and both derivatives agree, i.e. for all ,h∈ L^2(μ), δϕ(,h) = ϕ'()(h) <cit.>. 
Moreover, since L^2(μ) is a Hilbert space, and δϕ(,·) and ϕ'() are linear and continuous, if ϕ is Fréchet (resp. Gâteaux) differentiable, by the Riesz representation theorem, there exists ∇ϕ∈ L^2(μ) such that for all h∈ L^2(μ), δϕ(,h) = ⟨∇ϕ(), h⟩_L^2(μ) (resp. ϕ'()(h)=⟨∇ϕ(),h⟩_L^2(μ)). As a brief comment on these notions in the context of convexity, if the subdifferential of a convex f at x contains a single element then it is the Gâteaux derivative and we have an inequality f(y)≥ f(x)+⟨∇ f(x),y-x ⟩. Instead Fréchet différentiability gives an equality (<ref>) corresponding to a series expansion. §.§ Convexity on L^2(μ) Let ϕ:L^2(μ)→ be Gâteaux differentiable. We recall that ϕ is convex if for all t∈ [0,1], ,∈ L^2(μ), ϕ((1-t)+t) ≤ (1-t) ϕ() + tϕ(), which is equivalent by <cit.> with ∀,∈ L^2(μ), ϕ()≥ϕ() + ⟨∇ϕ(), -)⟩_L^2(μ)_ϕ(,) ≥ 0. We now present equivalent definitions of the relative smoothness and relative convexity, which is the equivalent of <cit.>. Let ψ ,ϕ : L^2(μ)→ℝ be convex and Gâteaux differentiable functions. The following conditions are equivalent: * ψ -smooth relative to ϕ * ϕ- ψ convex * If twice Gâteaux differentiable, ⟨∇^2 ψ () , ⟩_L^2(μ)≤⟨∇^2ϕ() , ⟩_L^2(μ) for all ,∈ L^2(μ) * ⟨∇ψ () - ∇ψ (), -⟩_L^2(μ)≤⟨∇ϕ()- ∇ϕ(), -⟩_L^2(μ) for all ,∈ L^2(μ). The following conditions are equivalent: * ψ -convex relative to ϕ * ψ -ϕ convex * If twice differentiable, ⟨∇^2 ψ () , ⟩_L^2(μ)≥⟨∇^2ϕ() , ⟩_L^2(μ) for all ,∈ L^2(μ) * ⟨∇ψ () - ∇ψ (), -⟩_L^2(μ)≥⟨∇ϕ()-∇ϕ(), -⟩_L^2(μ) for all ,∈ L^2(μ). We do it only for the smoothness. It holds likewise for the convexity. <ref><ref>: ∀,∈ L^2(μ), _ψ(,) ≤_ϕ(,) ∀,∈ L^2(μ), ψ()-ψ() - ⟨∇ψ(), -⟩_L^2(μ) ≤(ϕ()-ϕ()-⟨∇ϕ(), -⟩_L^2(μ)) ∀,∈ L^2(μ), (ϕ - ψ)() - ⟨∇(ϕ-ψ)(), -⟩_L^2(μ)≤ (ϕ-ψ)(). For the rest of the equivalences, we apply <cit.>. Indeed, ϕ-ψ convex is equivalent with ∀,∈ L^2(μ), ⟨∇(ϕ-ψ)()-∇(ϕ-ψ)(), -⟩_L^2(μ)≥ 0 ∀,∈ L^2(μ), ⟨ϕ()-∇ϕ(), -⟩_L^2(μ)≥⟨∇ψ()-∇ψ(),-⟩_L^2(μ), which gives the equivalence between <ref> and <ref>. And if ψ and ϕ are twice differentiables, it is also equivalent with ∀,∈ L^2(μ), ⟨∇^2(ϕ-ψ)() , ⟩_L^2(μ)≥ 0 ∀,∈ L^2(μ), ⟨∇^2ϕ(),⟩_L^2(μ)≥⟨ψ(),⟩_L^2(μ), which gives the equivalence between <ref> and <ref>. § BACKGROUND ON WASSERSTEIN SPACE §.§ Wasserstein differentials We recall the notion of Wasserstein differentiability introduced in <cit.>. First, we introduce sub and super differential. Let :_2(^d)→ (-∞, +∞] lower semi-continuous and denote D()={μ∈_2(^d), (μ)<∞}. Let μ∈ D(). Then, a map ξ∈ L^2(μ) belongs to the subdifferential ∂^-(μ) of at μ if for all ν∈_2(^d), (ν) ≥(μ) + sup_γ∈Π_o(μ,ν)∫⟨ξ(x), y-x⟩ dγ(x,y) + o(_2(μ,ν)). Similarly, ξ∈ L^2(μ) belongs to the superdifferential ∂^+(μ) of at μ if -ξ∈∂^-(-)(μ). Then, we say that a functional is Wasserstein differentiable if it admits sub and super differentials which coincide. A functional :_2(^d)→ is Wasserstein differentiable at μ∈_2(^d) if ∂^-(μ)∩∂^+(μ) ≠∅. In this case, we say that (μ)∈∂^-(μ)∩∂^+(μ) is a Wasserstein gradient of at μ, satisfying for any ν∈_2(^d), γ∈Π_o(μ,ν), (ν) = (μ) + ∫⟨(μ)(x), y-x⟩ dγ(x,y) + o(_2(μ,ν)). Recall that the tangent space of _2(^d) at μ∈_2(^d) is defined as 𝒯_μ_2(^d) = {∇ψ, ψ∈𝒞_c^∞(^d) }⊂ L^2(μ), where the closure is taken in L^2(μ), see <cit.>. <cit.> showed that if is Wasserstein differentiable, then there is always a unique gradient living in the tangent space, and we can restrict ourselves without loss of generality to this gradient. 
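As a concrete illustration of these notions, which is used repeatedly below, consider ϕ_μ(T) = ∫ V∘ T dμ for a differentiable V:^d→ (under standard integrability assumptions, e.g. ∇ V Lipschitz). For any T, h∈ L^2(μ), ϕ_μ(T+th) = ∫ V(T(x)+th(x)) dμ(x) = ϕ_μ(T) + t∫⟨∇ V(T(x)), h(x)⟩ dμ(x) + o(t), so that ∇ϕ_μ(T) = ∇ V∘ T and consequently _ϕ_μ(T,S) = ∫_V(T(x),S(x)) dμ(x), where _V denotes the Euclidean Bregman divergence of V. In particular, ϕ_μ inherits on L^2(μ) the convexity, strong convexity or smoothness of V on ^d.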
<cit.> further showed that Wasserstein gradients provide linear approximations even if the perturbations are not induced by OT plans, i.e. differentials are “strong Fréchet differentials”. Let μ,ν∈_2(^d), γ∈Π(μ,ν) any coupling and let :_2(ℝ^d)→ be Wasserstein differentiable at μ with Wasserstein gradient (μ)∈𝒯_μ_2(^d). Then, (ν) = (μ) + ∫⟨(μ)(x), y-x⟩ dγ(x,y) + o(√(∫x-y_2^2 dγ(x,y))). The Wasserstein gradient of can be computed in practice using the first variation δ/δμ <cit.>, which is defined, if is exists, as the unique function (up to a constant) such that, for χ satisfying ∫dχ = 0, / t(μ+tχ)|_t=0 = lim_t→ 0 (μ+tχ) - (μ)/t = ∫δ/δμ(μ) χ. Then the Wasserstein gradient can be computed as (μ) = ∇δ/δμ(μ). We now show that we can relate the Fréchet derivative of _μ():=(_#μ) with the Wasserstein gradient of belonging to the tangent space of _2(^d) at μ. Let :_2(^d)→∪{+∞} be a Wasserstein differentiable functional on D(). Let μ∈_2(^d) and _μ() = (_#μ) for all ∈ D(_μ). Then, _μ is Fréchet differentiable, and for all ∈ D(_μ), ∇_μ()=(_#μ)∘. Let μ∈_2(^d), ,∈ D(_μ), ϵ>0. Since is Wasserstein differentiable at _#μ, applying <Ref> at _#μ with ν=(+ϵ(-))_#μ and γ = (, +ϵ(-))_#μ∈Π(_#μ, ν), we obtain, _μ( + ϵ(-)) = ((+ϵ (-))_#μ) = (_#μ) + ∫⟨(_#μ)(x), y-x⟩ γ(x,y) + o(√(∫x-y_2^2 γ(x,y))) = _μ() + ϵ∫⟨(_#μ)((x)), (x)-(x)⟩ μ(x) + o(ϵ√(∫(x)-(x)_2^2 μ(x))) = _μ() + ϵ⟨(_#μ)∘, -⟩_L^2(μ) + ϵ o(-_L^2(μ)). Thus, δ_μ(, -) = ⟨(_#μ)∘, -⟩_L^2(μ). Note that in the third equality we used that ∈ L^2(μ). =-1 A similar formula can be found in <cit.>, however the space H used there is not L^2(μ) but a lifting L^2(Ω;^d) of measures on random variables. They should not be confused. §.§ Wasserstein Hessians A natural object of interest is the Hessian of the objective , which we define below. This notion is usually defined along Wasserstein geodesics, i.e. curves of the form μ_t=(𝕀 +t ∇ψ)_#μ for ψ∈𝒞_c^∞(^d) <cit.>. The Wasserstein Hessian of a functional :_2(^d)→ at μ is defined for any ψ∈𝒞_c^∞(^d) as: _μ(ψ,ψ):= [2]t(μ_t)|_t=0 where (μ_t,v_t)_t∈ [0,1] is a Wasserstein geodesic with μ_0=0,v_0=∇ψ. <Ref> can be straightforwardly related to the usual symmetric bilinear form defined on 𝒯_μ_2(^d)×𝒯_μ_2(^d) <cit.>: The Wasserstein Hessian of , denoted _μ is an operator over 𝒯_μ𝒫_2(^d) verifying _μv_0, v_0_L^2(μ)=d^2/dt^2ℱ(ρ_t)|_t=0 if (ρ_t,v_t)_t∈ [0,1] is a Wasserstein geodesic starting at μ. =-1 In this work, we are interested in more general curves, which are acceleration free, i.e. of the form μ_t = ( + t v)_#μ with ,v∈ L^2(μ). Thus, we define analogously the Hessian and the Hessian operator along such curves. We note that if is invertible, μ_t = (𝕀 + t v∘^-1)_#_#μ, and the notions can be linked with Wasserstein Hessian. However, in general, this does not need to be the case. The Hessian of :_2(^d)→ along t↦μ_t = ( + t v)_#μ for ,v∈ L^2(μ) is defined as _μ, t(v, v) = d^2/dt^2(μ_t). Moreover, we define the Hessian operator H_μ,t:L^2(μ)→ L^2(μ) as the operator satisfying for all t∈ [0,1], d^2/dt^2(μ_t) = ⟨H_μ, tv,v⟩_L^ 2(μ). <cit.> derived a general closed form of the Wasserstein Hessian through the first variation of . Here, we extend their formula along any curve μ_t = ( + t v)_#μ with ,v∈ L^2(μ). We first provide a lemma computing the derivative of the Wasserstein gradient. Let :_2(^d)→ be twice continuously differentiable and assume that δ/δμ∇δ/δμ = ∇δ^2/δμ^2. Let μ∈_2(^d) and for all t∈ [0,1], μ_t = (_t)_#μ where _t is differentiable w.r.t. t with d_t/dt∈ L^2(μ). 
Then, d/dt((μ_t)∘_t) (x) = ∫[∇_y∇_xδ^2/δμ^2((_t)_#μ)(_t(x), _t(y)) d_t/dt(y)] dμ(y) + ∇^2δ/δμ((_t)_#μ)(_t(x)) d_t/dt(x). See <Ref>. This allows us to define a closed-form for H_μ, t. Under the same assumptions as in <Ref>, let μ_t = (_t)_#μ with _t = + t v, ,v∈ L^2(μ), then d^2/dt^2(μ_t) = ⟨H_μ,t v, v⟩_L^2(μ), with H_μ,t:L^2(μ)→ L^2(μ) defined as, for all v∈ L^2(μ), x∈ℝ^d, H_μ,t[v](x) = ∫∇_y∇_xδ^2/δμ^2((_t)_#μ)(_t(x), _t(y)) v(y) dμ(y) + ∇^2δ/δμ((_t)_#μ)(_t(x)) v(x). See <Ref>. We note that if _t is invertible for all t, with v_t=v∘_t^-1, we can write d^2/dt^2(μ_t) = ⟨H_μ, t v, v⟩_L^2(μ) = ∫H_μ,t[v](x), v(x)⟩ dμ(x) = ∫⟨H_μ, t[v](_t^-1(x_t)), v_t(x_t)⟩ dμ_t(x_t) =⟨H_μ_t v_t, v_t⟩_L^2(μ_t), because H_μ, t[v](x) = ∫∇_y∇_xδ^2/δμ^2(μ_t)(_t(x), y_t) v_t(y_t) dμ_t(y) + ∇^2 δ/δμ(μ_t)(_t(x)) v(x), and thus H_μ, t[v](_t^-1(x_t)) = ∫∇_y∇_xδ^2/δμ^2(μ_t)(x_t, y_t) v_t(y_t) dμ_t(y) + ∇^2δ/δμ(μ_t)(x_t) v_t(x_t) = H_μ_t[v_t](x_t). Here are two examples of satisfying δ/δμ∇δ/δμ = ∇δ^2/δμ^2 for which <Ref> provides an expression of the Wasserstein Hessian. [Potential energy] Let (μ) = ∫ V dμ with V convex and twice differentiable. Then, it is well known that δ/δμ(μ) = V and δ^2/δμ^2 = 0. Thus, applying <Ref>, we recover for μ_t = (_t)_#μ, ^2/ t^2(μ_t) = ∫⟨∇^2 V(_t(x)) v(x), v(x)⟩ dμ(x). [Interaction energy] Let 𝒲(μ) = ∬ W(x-y) dμ(x)dμ(y) with W convex, symmetric and twice differentiable. Then, we have for all x,y∈^d, δ/δμ(x) = (W ⋆μ)(x) and δ^2/δμ^2(x, y)= W(x-y) (see e.g. <cit.>), and thus applying <Ref>, for μ_t=(_t)_#μ, the operator is H𝒲_μ,t[v](x) = -∫∇^2 W(_t(x)- _t(y)) v(y) dμ(y) + (∇^2 W ⋆ (_t)_#μ)(_t(x)) v(x), and ^2/ t^2𝒲(μ_t) = ∬⟨∇^2 W(_t(x)-_t(y)) (v(x)-v(y)), v(x)⟩ dμ(y)dμ(x). §.§ Convexity in Wasserstein space We first recall the definition of -convex functionals <cit.>. is -convex along geodesics if for all μ_0,μ_1 ∈_2(^d), ∀ t∈ [0,1], (μ_t) ≤ (1-t)(μ_0) +t(μ_1) - t(1-t)/2_2^2(μ_0,μ_1), where (μ_t)_t∈ [0,1] is a Wasserstein geodesic between μ_0 and μ_1. If we want to derive the minimal set of assumptions for the convergence of the gradient descent algorithms on Wasserstein space, we can actually restrict the smoothness and convexity to specific curves. In the next proposition, we characterize the convexity along one curve. The relative smoothness or convexity follows by considering the convexity of respectively 𝒢 - ℱ or ℱ-𝒢. Let :_2(^d)→ be twice continuously differentiable. Let μ∈_2(^d), ,∈ L^2(μ), μ_t = (_t)_#μ for all t∈ [0,1] where _t = (1-t)+t. Furthermore, denote for t_1,t_2∈ [0,1], μ_t^t_1→ t_2 = ((1-t)_t_1 + t _t_2)_#μ. Then, the following statement are equivalent, * For all t_1,t_2,t∈ [0,1], ℱ(μ_t^t_1→ t_2) ≤ (1-t)ℱ((_t_1)_#μ) + t ℱ((_t_2)_#μ), i.e. ℱ is convex along t↦μ_t. * For all t_1,t_2∈ [0,1], we have _ℱ_μ(_t_2, _t_1)≥ 0, i.e. ((_t_2)_#μ) - ((_t_1)_#μ) - ⟨((_t_1)_#μ)∘_t_1, _t_2-_t_1⟩_L^2(μ)≥ 0. * For all t_1,t_2∈ [0,1], ⟨((_t_2)_#μ)∘_t_2 - ((_t_1)_#μ)∘_t_1, _t_2-_t_1⟩_L^2(μ)≥ 0. * For all s∈ [0,1], d^2/dt^2ℱ(μ_t)|_t=s≥ 0. See <Ref>. =-1 As stated in <Ref>, if we require the convexity to hold along all curves with =𝕀 and the gradient of some convex function, i.e. an OT map, then is convex along geodesics. Likewise, if the convexity holds for all , that are gradients of convex functions, then we obtain the convexity along generalized geodesics. 
If we require the convexity and the smoothness to hold along any curve of the form μ_t = ((1-t) + t)_#μ for μ∈_2(^d) and ,∈ L^2(μ), then it coincides with the transport convexity and smoothness recently introduced by <cit.> as by <Ref>, δ_μ(,-)=⟨(_#μ)∘, -⟩_L^2(μ), and thus the convexity reads as follows __μ(,) = (_#μ) - (_#μ) - ⟨(_#μ) ∘, -⟩_L^2(μ)≥ 0. And for _μ()=1/2_L^2(μ), the -smoothness of relative to expresses as __μ(,) = (_#μ) - (_#μ) - ⟨(_#μ) ∘, -⟩_L^2(μ)≤/2-_L^2(μ) = __μ(,). =-1 This type of convexity is actually a particular case of the notion of convexity along acceleration-free curves introduced by <cit.> (also introduced by <cit.> under the name of total convexity). The latter requires convexity to hold along any curve of the form μ_t=((1-t)π^1 + tπ^2)_#γ with γ∈Π(μ,ν), μ,ν∈_2(^d) and π^1(x,y)=x, π^2(x,y)=y. The transport convexity of <cit.> is thus a particular case for couplings obtained through maps, i.e. γ=(,)_#μ. <cit.> notably showed that this notion of convexity is equivalent with the geodesic convexity for Wasserstein differentiable functionals. We can also define the strict convexity using strict inequalities in <Ref>-<ref>-<ref>-<ref> (but not in <ref>). Finally, as we defined the relative -convexity and -smoothness of relative to using Bregman divergences in <Ref>, we can show that it is equivalent with - and - being convex. Let ,:_2(^d)→ be two differentiable functionals. Let μ∈_2(^d), ,∈ L^2(μ) and for all t∈ [0,1], μ_t = (_t)_#μ with _t = (1-t)+t. Then, is -convex (resp. -smooth) relative to along t↦μ_t if and only if - (resp. -) is convex along t↦μ_t. By <Ref>, is -convex relative to along t↦μ_t if for all s,t∈ [0,1], __μ(_s,_t) ≥__μ(_s,_t). This is equivalent with __μ - _μ(_s,_t) ≥ 0, which is equivalent by <Ref> <ref> with - convex along t↦μ_t. The result for the -smoothness follows likewise. § ADDITIONAL RESULTS ON MIRROR DESCENT §.§ Optimal transport maps for mirror descent Let ϕ:_2(^d)→ be a strictly convex functional along all acceleration-free curves and denote for μ∈ L^2(μ), ϕ_μ()=ϕ(_#μ). Since ϕ is strictly convex along all acceleration-free curves, by <Ref>, for all ≠∈ L^2(μ), _ϕ_μ(,)>0 and thus ϕ_μ is strictly convex. Recall that ∀,∈ L^2(μ), _ϕ_μ(,) = ϕ_μ() - ϕ_μ() - ⟨∇ϕ_μ(),-⟩_L^2(μ) = ϕ(_#μ) - ϕ(_#μ) - ⟨ϕ(_#μ)∘, -⟩_L^2(μ), where we used <Ref> for the computation of the gradient. Let us now define for all μ,ν∈_2(^d), _ϕ(ν,μ) = inf_γ∈Π(ν,μ) ϕ(ν)-ϕ(μ)-∫⟨ϕ(μ)(y), x-y⟩ γ(x,y). This problem encompasses several previously considered objects, as discussed in more detail in <Ref>. Our motivation for introducing <Ref> is to prove that for ϕ_μ verifying the assumptions of <Ref>, its associated Bregman divergence _ϕ_μ satisfies the property given in <Ref>. First, we can observe that as γ=(,)_#μ∈Π(_#μ, _#μ), we have _ϕ_μ(,)≥_ϕ(_#μ, _#μ). Then, for μ∈, assuming that ϕ(μ)=∇ϕ_μ(𝕀) is invertible , we can leverage Brenier's theorem <cit.>, and show in <Ref> that the optimal coupling of <Ref> is of the form (_ϕ_μ^μ,ν,𝕀)_#μ with _ϕ_μ^μ,ν=__#μ=ν _ϕ_μ(,𝕀). Moreover, if ν∈, we also have that _ϕ_μ^μ,ν is invertible with inverse _ϕ_ν^ν,μ = __#ν=μ_ϕ_ν(𝕀,). Let μ∈, ν∈_2(^d) and assume ϕ(μ) invertible. Then, * There exists a unique minimizer γ of (<ref>). Besides, there exists a uniquely determined μ-almost everywhere (a.e.) map _ϕ_μ^μ,ν:^d→^d such that γ = (_ϕ_μ^μ,ν,𝕀)_#μ. Finally, there exists a convex function u:^d → such that _ϕ_μ^μ,ν = ∇ u ∘ϕ(μ) μ-a.e. * Assume further that ν∈. Then there exists a uniquely determined ν-a.e. map _ϕ_ν^ν,μ:^d→^d such that γ=(𝕀, _ϕ_ν^ν,μ)_#ν. 
Moreover, there exists a convex function v:^d→ such that _ϕ_ν^ν,μ = ϕ(μ)^-1∘∇ v ν-a.e., and _ϕ_μ^μ,ν∘_ϕ_ν^ν,μ = 𝕀 ν-a.e. and _ϕ_ν^ν,μ∘_ϕ_μ^μ,ν = 𝕀 μ-a.e. * As a corollary, _ϕ(ν,μ) = min__#μ=ν _ϕ_μ(,𝕀) = min__#ν=μ _ϕ_ν(𝕀, ). <ref>. Observe that problem (<ref>) is equivalent with inf_γ∈Π(ν,μ) ∫x-∇__2ϕ(μ)(y)_2^2 γ(x,y). Then, since for any γ∈Π(ν,μ), (𝕀, ∇__2ϕ(μ))_#γ∈Π(ν,∇__2ϕ(μ)_#μ), we have inf_γ∈Π(ν,μ) ∫x-∇__2ϕ(μ)(y)_2^2 γ(x,y) ≥inf_γ∈Π(ν,∇__2ϕ(μ)_#μ) ∫x-z_2^2 γ(x,z). Let μ∈. Since ϕ(μ) is invertible, ϕ(μ)_#μ∈. By Brenier's theorem, there exists a convex function u such that (∇ u)_#(∇__2ϕ(μ))_#μ = ν and the optimal coupling is of the form γ^* = (∇ u,𝕀)_#∇__2ϕ(μ)_#μ. Let γ=(∇ u∘∇__2ϕ(μ), 𝕀)_#μ∈Π(ν,μ), then ∫z-y_2^2 γ^*(z, y) = ∫∇ u(∇__2ϕ(μ)(y)) - ∇__2ϕ(μ)(y)_2^2 μ(y) = ∫x-∇__2ϕ(μ)(y)_2^2 γ(x,y). Thus, since γ∈Π(ν,μ), γ is an optimal coupling for (<ref>). <ref>. We symmetrize the arguments. Assuming ν∈ and ∇ϕ_μ(𝕀)=ϕ(μ) invertible, by Brenier's theorem, there exists a convex function v such that (∇ v)_#ν = ∇__2ϕ(μ)_#μ (and such that ∇ u∘∇ v=𝕀 ν-a.e. and ∇ v∘∇ u =𝕀 ϕ(μ)_#μ-a.e.) and the optimal coupling is of the form γ^* = (𝕀, ∇ v)_#ν. Let γ=(𝕀, ∇__2ϕ(μ)^-1∘∇ v)_#ν∈Π(ν,μ), then ∫x-z_2^2 γ^*(x,z) = ∫x-∇ v(x)_2^2 ν(x) = ∫ x - ∇__2ϕ(μ)((∇__2ϕ(μ))^-1(∇ v(x))) _2^2 ν(x) = ∫ x - ∇__2ϕ(μ)(y)_2^2 γ(x,y). Thus, since γ∈Π(ν,μ), γ is an optimal coupling for (<ref>). Moreover, noting _ϕ_μ^μ,ν=∇ u ∘ϕ(μ) and _ϕ_ν^ν,μ = ϕ(μ)^-1∘∇ v, we have μ-a.e., _ϕ_ν^ν,μ∘_ϕ_μ^μ,ν = ϕ(μ)^-1∘∇ v ∘∇ u ∘ϕ(μ) = 𝕀 and ν-a.e., _ϕ_μ^μ,ν∘_ϕ_ν^ν,μ = ∇ u ∘ϕ(μ) ∘ϕ(μ)^-1∘∇ v = 𝕀 from the aforementioned consequences of Brenier's theorem. We continue this section with additional results relative to the invertibility of mirror maps, which are required in <Ref>. Let μ∈_2(^d) and let W:^d→ be even, ϵ-strongly convex for ϵ>0 and differentiable, then 𝒲(μ) is invertible. On one hand, 𝒲(μ) = ∇ W ⋆μ. Moreover, W ϵ-strongly convex is equivalent with ∀ x,y∈^d, x≠ y, ⟨∇ W(x) - ∇ W(y), x-y⟩≥ϵx-y_2^2, which implies for all x,y,z∈^d, ⟨∇ W(x-z) - ∇ W(y-z), x-y⟩≥ϵx-y_2^2. By integrating with respect to μ, it implies ⟨ (∇ W ⋆μ)(x) - (∇ W ⋆μ)(y), x-y⟩=∫⟨∇ W(x-z) - ∇ W(y-z), x-y⟩ dμ(z)≥ϵx-y_2^2. Thus, ∇ W ⋆μ is ϵ-strongly monotone, and in particular invertible <cit.>. Let μ∈ such that its density is of the form ρ∝ e^-V with V:^d→ ϵ-strongly convex for ϵ>0, then (μ) is invertible. Let μ such distribution. Then, (μ) = ∇logρ = -∇ V. Since V is ϵ-strongly convex, then ∇ V is ϵ-strongly monotone and in particular invertible <cit.>. We conclude this section with a discussion of (<ref>) with respect to related work. The OT problem (<ref>) recovers other OT costs for specific choices of ϕ. For instance, for ϕ_μ()=1/2_L^2(μ)^2, it coincides with the squared 2-Wasserstein distance. And more generally, for ϕ_μ^V()= ∫ V∘ dμ, since by <Ref>, for all ,∈ L^2(μ), _ϕ_μ^V(,) = ∫_V((x),(x)) μ(x), where _V is the Euclidean Bregman divergence, i.e. for all x,y∈^d, _V(x,y) = V(x)-V(y)-⟨∇ V(y), x-y⟩, _ϕ coincides with the Bregman-Wasserstein divergence <cit.> ℬ_V(μ,ν) = inf_γ∈Π(μ,ν)∫_V(x,y) dγ(x,y). §.§ Continuous formulation Let ϕ:L^2(μ)→ be pushforward compatible and superlinear. Introducing the (mirror) map φ(μ)=ϕ(μ), we can write informally the discrete scheme (<ref>) and its limit when τ→ 0 as φ(μ_k) = ϕ(μ_k) φ(μ_k+1)∘_k+1 = φ(μ_k) - τ(μ_k) φ(μ_t) = ϕ(μ_t) d/dtφ(μ_t) = - (μ_t). 
=-1 However, d/dtφ(μ_t) = d/dtϕ(μ_t) = ϕ_μ_t(v_t) where ϕ_μ_t:L^2(μ_t)→ L^2(μ_t) is the Hessian operator defined such that d^2/dt^2ϕ(μ_t) = ⟨ϕ_μ_t(v_t), v_t⟩_L^2(μ_t) and v_t∈ L^2(μ_t) is a velocity field satisfying ∂_tμ_t + div(μ_t v_t) = 0. Thus, the continuity equation followed by the Mirror Flow is given by ∂_t μ_t - div(μ_t (Hϕ_μ_t)^-1(μ_t)) = 0. For ϕ_μ^V as Bregman potential, since ϕ_μ^V(v) = (∇^2 V)v (see <Ref>), the flow is a solution of ∂_tμ_t - div(μ_t (∇^2V)^-1(μ_t)) = 0. For (μ) = (μ||ν) with ν∝ e^-U, this coincides with the gradient flow of the mirror Langevin <cit.> and with the continuity equation obtained in <cit.> as the limit of the JKO scheme with Bregman groundcosts. For ϕ=, this coincides with Information Newton's flows <cit.>. Note also that <cit.> defined mirror flows through the scheme τ→ 0 of (<ref>), but focused on (μ)=(μ||ν) and ϕ(μ)=1/2_2^2(μ,η). §.§ Derivation in specific settings In this Section, we analyze several new mirror schemes obtained through different Bregman potential maps. We start by discussing the scheme with an interaction energy as Bregman potential. Then, we study mirror descent with negative entropy or KL divergence as Bregman potential. For the last two, we derive closed-forms for the case where every distribution is Gaussian, which is equivalent with working on the Bures-Wasserstein space, and to use the gradient on the Bures-Wasserstein space <cit.>. In particular, this space is a submanifold of and the tangent space is the space of affine maps with symmetric linear term, i.e. of the form T(x)=b + S(x-m) with S∈ S_d(). Interaction mirror scheme. Let us take as Bregman potential ϕ_μ()=∬ W((T(x)-T(x')) dμ(x)dμ(x'). The general scheme is given by ∀ k≥ 0, (∇ W⋆μ_k+1)∘_k+1 = ∇ W ⋆μ_k - τ(μ_k). For the particular case W(x)=1/2x_2^2, the scheme can be made more explicit as ∇ W ⋆μ(x) = ∫∇ W(x-y) dμ(y) = ∫ (x-y) dμ(y) = x - m(μ) with m(μ)=∫ ydμ(y) the expectation, and thus it translates as ∀ k≥ 0, x_k+1 - m(μ_k+1) = x_k - m(μ_k) - τ(μ_k). On one hand, recall from <Ref> that the Hessian of ϕ is given, for μ∈_2(^d), v∈ L^2(μ), by ∀ x∈^d, ϕ_μ[v](x) = - ∫ v(y) dμ(y) + v(x), since ∇^2 W = I_d. On the other hand, the scheme can be written as, for all k≥ 0, ϕ(μ_k+1)(x_k+1) = ϕ(μ_k)(x_k) - τ(μ_k)(x_k) x_k+1 - m(μ_k+1) = x_k - m(μ_k) - τ(μ_k)(x_k) y_k = x_k - m(μ_k) y_k+1 = y_k - τ(μ_k)(x_k). Passing to the limit τ→ 0, we get y_t = x_t- m(μ_t) dy_t/dt = -(μ_t)(x_t). But dy_t/dt = dx_t/dt - dm(μ_t)/dt. Now, by setting v_t(x) = dx_t/dt and noting that, by integration by part, d/dtm(μ_t) = ∫ x ∂_tμ_t = - ∫ x·div(μ_t v_t) = ∫ v_t(y) dμ_t(y), we obtain indeed dy_t/dt = ϕ_μ_t[v_t](x). Negative entropy mirror scheme. Let us consider ϕ(μ) = ∫log(ρ(x)) dμ(x) where dμ(x) = ρ(x)dx. For such Bregman potential, the mirror scheme can be obtained for all k≥ 0 as ∇logρ_k+1∘_k+1 = ∇logρ_k - τ(μ_k). In general, this scheme is not tractable. Nonetheless, supposing that μ_k=𝒩(m_k,Σ_k) for all k, the scheme translates as -Σ_k+1^-1(_k+1(x) - m_k+1) = -Σ_k^-1(x-m_k) - τ(μ_k). For (μ)=(μ)+(μ) with V(x)=1/2 x^T Σ^-1 x, the scheme is -Σ_k+1^-1(x_k+1 - m_k+1) = -Σ_k^-1(x_k-m_k) -τ(-Σ_k^-1 (x_k - m_k) + Σ^-1 x_k) = - (1-τ)Σ_k^-1(x_k-m_k) - τΣ^-1 x_k = -((1-τ)Σ_k^-1 + τΣ^-1) x_k + (1-τ)Σ_k^-1 m_k. Assuming m_k=0 for all k and taking the covariance, then we obtain the following update rule for the covariance matrices: Σ_k+1^-1 = ((1-τ)Σ_k^-1 + τΣ^-1)^T Σ_k ((1-τ)Σ_k^-1 + τΣ^-1). We illustrate this scheme in <Ref>. 
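For reference, here is a minimal sketch (ours) of this covariance recursion in the setting of the Gaussian experiment of <Ref> (d=10, τ=0.01, 1500 iterations, target eigenvalues spread in log scale between 1 and 100, μ_0=(0,I_d)); the reported criterion is the closed-form KL between centered Gaussians.

import numpy as np

def nem_step(Sigma_k, Sigma_target, tau):
    # centered Gaussians: Sigma_{k+1}^{-1} = A^T Sigma_k A,  with A = (1 - tau) Sigma_k^{-1} + tau Sigma_target^{-1}
    A = (1.0 - tau) * np.linalg.inv(Sigma_k) + tau * np.linalg.inv(Sigma_target)
    return np.linalg.inv(A.T @ Sigma_k @ A)

def kl_centered_gaussians(Sigma, Sigma_target):
    # KL( N(0, Sigma) || N(0, Sigma_target) ), the criterion reported in the experiments
    d = Sigma.shape[0]
    M = np.linalg.solve(Sigma_target, Sigma)
    _, logdet = np.linalg.slogdet(M)
    return 0.5 * (np.trace(M) - d - logdet)

d, tau = 10, 0.01
Sigma_target = np.diag(np.logspace(0.0, 2.0, d))   # eigenvalues spread in log scale between 1 and 100
Sigma = np.eye(d)                                  # mu_0 = N(0, I_d)
for _ in range(1500):
    Sigma = nem_step(Sigma, Sigma_target, tau)
print(kl_centered_gaussians(Sigma, Sigma_target))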
For (μ) = (μ), we obtain -Σ_k+1^-1(_k+1(x) - m_k+1) = -(1-τ) Σ_k^-1(x-m_k). Assuming m_k=0 for all k, for τ<1, taking the covariance, we get Σ_k+1^-1Σ_k+1Σ_k+1^-1 = (1-τ)^2 Σ_k^-1Σ_kΣ_k^-1 Σ_k+1 = 1/(1-τ)^2Σ_k = 1/(1-τ)^2kΣ_0 τ→ 0∼ e^2τ kΣ_0. The underlying flow is thus t↦(0,e^2tΣ_0) and the negative entropy decreases along this curve as (ρ_t) = -d/2log(2π e) - dt - ∑_i=1^d log(λ_i), where (λ_i)_i denote the eigenvalues of Σ_0. This is much faster than the heat flow for which the negative entropy decreases as <cit.> (ρ_t) = -d/2log(2π e) - 1/2∑_i=1^d log(λ_i + 2t), with the scheme given by <cit.> ∀ k≥ 0, m_k+1 = m_0 Σ_k+1 = Σ_k (I_d +τΣ_k^-1)^2. KL mirror scheme. When we want to optimize the KL divergence, i.e. a functional of the form (μ) = (μ) + ∫ Vdμ, then a natural choice of Bregman potential is also a functional of the form ϕ(μ) = (μ) + ∫ψdμ with V -convex and -smooth relative to ψ. Indeed, usually, for mirror maps only composed from a potential, the non smooth term is . Let us note (μ) = (μ)+(μ) where (μ) = ∫ Vdμ and ϕ(μ) = Ψ(μ) + (μ) with Ψ(μ) = ∫ψdμ. In that case, we have, since _ℋ_μ + _μ = _ℋ_μ + __μ, for all ,∈ L^2(μ), __μ(, ) = __μ(,) + __μ(,) ≤__μ(, ) + _Ψ_μ(,) ≤max(1,) _ϕ_μ(,). Similarly, __μ(,) ≥min(1,) _ϕ_μ(,). We now focus on the case where all measures are Gaussian in order to be able to compute a closed-form, i.e. V(x) = 1/2 (x-m)^T Σ^-1 (x-m), ψ(x) = 1/2 x^T Λ^-1 x and for all k, μ_k = (m_k, Σ_k). In this case, remember that ∇logμ_k(x) = -Σ_k^-1 (x-m_k). Then, at each step, the scheme is ∇ V(x_k+1) + ∇log(μ_k+1(x_k+1)) = ∇ V(x_k) + ∇log(μ_k(x_k)) - τ(∇ U(x_k) + ∇log(μ_k(x_k)) Λ^-1 x_k+1 - Σ_k+1^-1(x_k+1 - m_k+1) = Λ^-1 x_k - Σ_k^-1(x_k-m_k) - τ(Σ^-1(x_k-m) - Σ_k^-1(x_k-m_k)) (Λ^-1 - Σ_k+1^-1) x_k+1 + Σ_k+1^-1 m_k+1 = (Λ^-1 - (1-τ)Σ_k^-1 - τΣ^-1) x_k + (1-τ)Σ_k^-1m_k + τΣ^-1 m. Thus, we get for the expectation that (Λ^-1 - Σ_k+1^-1) m_k+1 + Σ_k+1^-1 m_k+1 = (Λ^-1 - (1-τ)Σ_k^-1 - τΣ^-1) m_k + (1-τ)Σ_k^-1m_k + τΣ^-1 m Λ^-1 m_k+1 = (Λ^-1 - τΣ^-1) m_k + τΣ^-1m m_k+1 = (I_d - τΛΣ^-1)m_k + τΛΣ^-1m. We note that it is the same scheme as for the forward Euler method in the forward-backward scheme. The entropy does not affect the convergence towards the mean, which can be done simply by (preconditioned) gradient descent. For the variance part, we get (Λ^-1-Σ_k+1^-1)^TΣ_k+1(Λ^-1 - Σ_k+1^-1) = (Λ^-1- τΣ^-1 - (1-τ)Σ_k^-1)^T Σ_k (Λ^-1- τΣ^-1 - (1-τ)Σ_k^-1). Now, we suppose that all matrices commute,artifice de calcul ? Juste pour se simplifier la vie ?Ce calcul fait penser à Woodbury <https://en.wikipedia.org/wiki/Woodbury_matrix_identity> then Λ^-2Σ_k+1 - 2 Λ^-1 + Σ_k+1^-1 = (Λ^-1 - τΣ^-1)^2 Σ_k - 2 (1-τ) Λ^-1 + 2 τ(1-τ)Σ^-1 + (1-τ)^2Σ_k^-1 Λ^-2Σ_k+1 + Σ_k+1^-1 = (Λ^-1 - τΣ^-1)^2 Σ_k + 2τΛ^-1 + 2 τ(1-τ)Σ^-1 + (1-τ)^2Σ_k^-1 Σ_k+1 + Λ^2 Σ_k+1^-1 = (I_d - τΛΣ^-1)^2 Σ_k + 2τΛ + 2τ(1-τ)Λ^2 Σ^-1 + (1-τ)^2 Λ^2 Σ_k^-1. Noting C = (I_d - τΛΣ^-1)^2 Σ_k + 2τΛ + 2τ(1-τ)Λ^2 Σ^-1 + (1-τ)^2 Λ^2 Σ_k^-1, then the equation is equivalent with Σ_k+1^2 - CΣ_k+1 + Λ^2 = 0. Thus, Σ_k+1 = 1/2(C ± (C^2 - 4Λ^2)^1/2). §.§ Mirror scheme with non-pushforward compatible Bregman potentials We study in this Section schemes for which the Bregman potential ϕ is not pushforward compatible, and thus for which we cannot apply <Ref> and thus <Ref> does not hold a priori. An example of such potential is ϕ_μ()=⟨, P_μ⟩_L^2(μ) where P_μ:L^2(μ)→ L^2(μ) is a linear autoadjoint and invertible operator. Since ∇ϕ_μ() = P_μ, taking the first order conditions, we obtain the following scheme: ∀ k≥ 0, _k+1 = 𝕀 - P_μ_k^-1(μ_k). 
In particular, this includes SVGD <cit.> (if we pose P_μ^-1 = ι S_μ with S_μ:L^2(μ)→ defined as S_μ = ∫ k(x,·)(x)μ(x) which maps functions from L^2(μ) to the reproducing kernel Hilbert space with kernel k, and with ι:→ L^2(μ) the inclusion operator that is the adjoint of S_μ <cit.>) or the Kalman-Wasserstein gradient descent <cit.> (for which P_μ^-1 = ∫(x-m(μ))⊗(x-m(μ)) μ(x) is the covariance matrix, where m(μ)=∫ x μ(x)). More generally, for ϕ_μ()=∫ P_μ (V∘)dμ, we can recover their mirrored version, including mirrored SVGD <cit.>, i.e. _k+1 = ∇ V^* ∘(∇ V - τ P_μ_k^-1(μ_k)). Kalman-Wasserstein. We focus now on a particular choice of linear operator P_μ. Namely, we take P_μ = C(μ) with C(μ) = ( ∫(x-m(μ))⊗(x-m(μ)) dμ(x))^-1 the inverse of the covariance matrix. In this case, (<ref>) corresponds to the discretization of the Kalman-Wasserstein gradient flow <cit.>. We now show that it satisfies <Ref>. First, let us compute the Bregman divergence associated to ϕ: ∀,∈ L^2(u), _ϕ_μ(, ) = 1/2⟨, C(μ) ⟩_L^2(μ) + 1/2⟨, C(μ)⟩_L^2(μ) - ⟨ C(μ) , ⟩_L^2(μ) = 1/2(⟨, C(μ)(-)⟩_L^2(μ) + ⟨-, C(μ) ⟩_L^2(μ)) = 1/2C(μ)^1/2 (-)_L^2(μ)^2. For γ = (, )_#μ, we can write _ϕ_μ(, ) = 1/2∫ C(μ)^1/2 (x-y)_2^2 dγ(x,y). Moreover, the problem inf_γ∈Π(α,β) ∫ C(μ)^1/2(x-y)_2^2 dγ(x,y) is equivalent with inf_γ∈Π(α,β) -∫ x^T C(μ) y dγ(x,y), which is a squared OT problem. Thus, it admits an OT map if C(μ) is invertible and μ or ν is absolutely continuous. Second point of view. Another point of view would be to use the linearization with the gradient corresponding to the associated generalized Wasserstein distance, which is of the form ∇_(μ) = P_μ^-1(μ) <cit.>, i.e. considering _k+1 = _∈ L^2(μ) _ϕ_μ(,𝕀) + ⟨∇_(μ), -𝕀⟩_L^2(μ), where we assume that (μ)∈ L^2(μ). In that case, using the first order conditions, ∇(_k+1)=0ϕ((_k+1)_#μ_k)∘_k+1 = ϕ(μ_k) - τ P_μ_k^-1(μ_k). Then, for ϕ_μ satisfying <Ref>, the convergence will hold under relative smoothness and convexity assumptions similarly as for the analysis derived in <Ref>. § RELATIVE CONVEXITY AND SMOOTHNESS §.§ Relative convexity and smoothness between Fenchel transforms In this Section, we show sufficient conditions to satisfy the inequalities assumed in <Ref> and <Ref> under the additional assumption that, for all k≥ 0, _μ_k is superlinear, lower semi continuous and strictly convex. In this case, we can show that _μ_k^* is Gâteaux differentiable, and thus we can use the Bregman divergence of _μ_k^*. Let ϕ:L^2(μ)→ be a superlinear, lower semi continuous and strictly convex function. Then, ϕ^* is Gâteaux differentiable. Fix g∈ L^2(μ). Notice that f̅∈∂ϕ^*(g) ϕ^*(f̅)=⟨f̅,g ⟩ - ϕ(f̅)=sup_f ∈ L^2(μ)⟨ f,g ⟩ - ϕ(f) So to prove there is a unique element in ∂ϕ^*(g), we just need to show that, setting ϕ_g(f):=- ⟨ f,g ⟩ + ϕ(f), the problem inf_f ∈ L^2(μ)ϕ_g(f) has a unique solution. Because of our assumptions, ϕ_g is lower semi continuous and strictly convex. Since ϕ is superlinear, ϕ_g is coercive, i.e. lim_f→∞ϕ_g(f)=+∞. There thus exists a solution <cit.>, which is unique by strict convexity. Hence ∂ϕ^*(g) is reduced to a point, which is necessarily the Gâteaux derivative of ϕ^* at g. This allows us to relate the Bregman divergence of ϕ^* with the Bregman divergence of ϕ. Let ϕ:L^2(μ)→ be a proper superlinear and strictly convex differentiable function, then for all ,∈ L^2(μ), _ϕ^*(∇ϕ(),∇ϕ()) = _ϕ(,). By <cit.>, we have ϕ^*(∇ϕ()) = ⟨, ∇ϕ()⟩_L^2(μ) - ϕ() for all ∈ L^2(μ) since ϕ is convex and differentiable. 
By <Ref>, ϕ^* is invertible and by <cit.> , since ϕ is proper, lower semi-continuous and convex, then (∇ϕ)^-1 = ∇ϕ^*. Thus, for all ,∈ L^2(μ), _ϕ^*(∇ϕ(), ∇ϕ()) = ϕ^*(∇ϕ()) - ϕ^*(∇ϕ()) - ⟨∇ϕ^*(∇ϕ()), ∇ϕ()-∇ϕ()⟩_L^2(μ) = ϕ^*(∇ϕ()) - ϕ^*(∇ϕ()) - ⟨, ∇ϕ() - ∇ϕ()⟩_L^2(μ) = ⟨∇ϕ(), ⟩_L^2(μ) - ϕ() - ⟨∇ϕ(), ⟩_L^2(μ) + ϕ() - ⟨, ∇ϕ() - ∇ϕ()⟩_L^2(μ) = ϕ() - ϕ() - ⟨∇ϕ(), -⟩_L^2(μ) = _ϕ(,). Finally, we can relate the relative convexity of ϕ relative to ψ^* by using an inequality between the Bregman divergences of ϕ and ψ. In particular, we recover the assumptions of <Ref> for ϕ_μ_k^ -smooth and -convex relative to _μ_k^*. Let ϕ,ψ:L^2(μ)→ proper, superlinear, strictly convex and differentiables. ϕ is -smooth (resp. -convex) relative to ψ^* if and only if ∀,∈ L^2(μ), _ϕ(∇ψ(), ∇ψ()) ≤_ψ(, ) (resp. _ϕ(∇ψ(), ∇ψ()) ≥_ψ(, )). First, suppose that ϕ is -smooth relative to ψ^*. Then, by definition, ∀,∈ L^2(μ), _ϕ(,) ≤_ψ^*(,). In particular, _ϕ(∇ψ(), ∇ψ()) ≤_ψ^*(∇ψ(), ∇ψ()) = _ψ(, ), using <Ref> in the last line. On the other hand, by <Ref>, for all ,∈ L^2(μ), _ψ^*(∇ψ(), ∇ψ()) = _ψ(,) ≥_ϕ(∇ψ(),∇ψ()). Likewise, we can show that ϕ is -convex relative to ψ if and only if _ϕ(∇ψ(),∇ψ())≥_ψ(,) for all ,∈ L^2(μ). Links with the conditions of <Ref> and <Ref>. <Ref> allows to translate the inequality hypothesis of <Ref> and <Ref>. Assume that for all k, _μ_k is strictly convex, differentiable and superlinear. We note first that it implies that _μ_k is convex along t↦((1-t)_k+1+t𝕀)_#μ_k. Moreover, by <Ref>, ∇_μ_k^* is differentiable. Note that this assumption is satisfied, e.g. by ϕ_μ()=∫ V∘ dμ for V η-strongly convex and differentiable. Indeed, in this case, ϕ_μ is also η-strongly convex, and satisfies for all ,∈ L^2(μ), _ϕ_μ(,) = ϕ_μ()-ϕ_μ() - ⟨∇ϕ_μ(),-⟩_L^2(μ)≥η/2-_L^2(μ)^2 ϕ_μ() ≥ϕ_μ() + ⟨∇ϕ_μ(),-⟩_L^2(μ) + η/2-_L^2(μ)^2. For =0, and dividing by _L^2(μ) the right term diverges to +∞ when _L^2(μ)→ +∞, and thus lim__L^2(μ)→∞ϕ_μ()/_L^2(μ) = +∞, and ϕ_μ is superlinear. This assumption is also satisfied for interaction energies ϕ_μ^W() = ∬ W((x)-(y)) dμ(x)dμ(y) with W η-strongly convex, even and differentiable. Indeed, by strong convexity of W in 0, we have for all x,y∈^d, W((x)-(y)) - W(0) - ⟨∇ W(0), (x)-(y)⟩ ≥η/2(x)-(y)_2^2 ≥η/2inf_z∈^d (x)-z_2^2. Integrating w.r.t. μ⊗μ, we get ϕ_μ^W() - W(0) ≥η/2inf_z∈^d ∫(x) -z_2^2 dμ(x), and dividing by _L^2(μ), we get that ϕ_μ^W is superlinear. For a curve t↦μ_t, we define _μ^* on μ_t as _μ^*(μ_t) := _μ^*(_t) with _μ^* the convex conjugate of _μ in the L^2(μ) sense. Then, we can apply <Ref>, and we obtain that the inequality hypothesis of <Ref> is equivalent to the -smoothness of ϕ^ relative to _μ_k^* along t↦((1-t) (μ_k) + t(μ_k+1)∘_k+1)_#μ_k since _ϕ_μ_k^((μ_k+1)∘_k+1, (μ_k)) ≤__μ_k(𝕀, _k+1) =_^*_μ_k((μ_k+1)∘_k+1, (μ_k)). Similarly, the condition of <Ref> _ϕ_μ_k^((_#μ_k)∘, (μ_k)) ≥__μ_k(𝕀, ) =_^*_μ_k((_#μ_k)∘, (μ_k)) is equivalent to the -convexity of ϕ^ relative to _μ_k^* along t↦((1-t)(μ_k) + t(_#μ_k)∘)_#μ_k. Convergence towards the minimizer in <Ref>. We add an additional result justifying the convergence towards the minimizer in <Ref>. Let (X,τ) be a metrizable topological space, and f:X→∪{+ ∞} be strictly convex, τ-lower semicontinuous and with one τ-compact sublevel set. Let x_0∈ X be the minimizer of f and take a sequence (x_n)_n∈ such that f(x_n)→ f(x_0) then (x_n)_n∈ τ-converges to x_0. The existence of the minimum is given by <cit.>. For N large enough, (x_n)_n≥ N lives in the τ-compact sublevel set, since x_0 belongs to it and f(x_0) is minimal. 
We can then consider a subsequence τ-converging to some x^*. By τ-lower semicontinuity, we have f(x_0)≤ f(x^*)≤lim inf f(x_σ(n))=f(x_0), so f(x_0)= f(x^*) and by strict convexity x_0=x^*. Since all subsequences of (x_n)_n≥ N converge to x^* and the space is metrizable, (x_n)_n∈ τ-converges to x_0. The typical case is when X is a Hilbert space and τ is the weak topology. One could wish to have strong convergence under a coercivity assumption, however “In infinite dimensional spaces, the topologies which are directly related to coercivity are the weak topologies” <cit.>. Nevertheless Gâteaux differentiability implies continuity, which paired with convexity gives weak lower semicontinuity <cit.>. We cannot hope for convergence of the norm of x_n to come for free, as the weak convergence would then imply the strong convergence. §.§ Relative convexity and smoothness between functionals Let U,V:^d→ be differentiable and convex functions. We recall that V is -convex relative to U if <cit.> ∀ x,y∈^d, _V(x,y) ≥_U(x,y). Likewise, V is -smooth relative to U if _V(x,y)≤_U(x,y). Relative convexity and smoothness between potential energies. By <Ref>, for Bregman potentials of the form ϕ_μ()=∫ V∘ dμ, the Bregman divergence can be written as ∀,∈ L^2(μ), _ϕ_μ(,) = ∫_V((x), (x)) dμ(x). Thus, leveraging this result, we can show that relative convexity and smoothness of ϕ_μ^V relative to ϕ_μ^U is inherited by the relative convexity and smoothness of V relative to U. Let μ∈_2(^d), ϕ_μ()=∫ V∘ dμ and ψ_μ() = ∫ U∘ dμ where V:ℝ^d→ℝ is C^1. If V is -convex (resp. -smooth) relative to U:^d→, then ϕ_μ is -convex (resp -smooth) relative to ψ_μ. First, observe (<Ref>) that ∀μ∈_2(^d), ,∈ L^2(μ), _ϕ_μ(, ) = ∫_V((x), (x)) dμ(x). Let μ∈𝒫_2(ℝ^d), ,∈ L^2(μ). If V is -convex relatively to U, we have for all x,y∈ℝ^d, _V((x), (y)) ≥_U((x), (y)), and hence by integrating on both sides with respect to μ, _ϕ_μ(, )≥_ψ_μ(, ). Likewise, we have the result for the -smoothness. Relative convexity and smoothness between interaction energies. Similarly, by <Ref>, for Bregman potentials obtained through interaction energies, i.e. ϕ_μ() = 1/2∬ W((x)-(x')) dμ(x)dμ(x'), then ∀,∈ L^2(μ), _ϕ_μ(,) = 1/2∬_W((x)-(x'), (x)-(x')) dμ(x)dμ(x'). It also allows to inherit the relative convexity and smoothness results from ^d. Let μ∈_2(^d), W, K:ℝ^d→ℝ be symmetric, C^1 and convex. Let ϕ_μ() = ∬ W((x)-(x')) dμ(x)dμ(x') and ψ_μ() = ∬ K((x)-(x')) dμ(x)dμ(x'). If W is -convex relative to K, then ϕ_μ is -convex relatively to ψ_μ. Likewise, if W is -smooth relatively to K, then ϕ_μ is -smooth relatively to ψ_μ. We use first <Ref> and then that W is -convex relatively to K: _ϕ_μ(,) = ∬_W((x)-(x'), (x)-(x')) dμ(x)dμ(x') ≥∬_K((x)-(x'), (x)-(x')) dμ(x)dμ(x') = _ψ_μ(, ). Likewise, we have the result for the -smoothness. Thus, in situations where the objective functional and the Bregman potential are of the same type and either potential energies or interaction energies, we only need to show the convexity and smoothness of the underlying potentials or interaction kernels. For instance, let V:^d→ be a twice-differentiable convex function, such that ∇^2 V_op≤ p_r(x_2) with p_r a polynomial function of degree r and ·_op the operator norm. Then, by <cit.>, V is -smooth relative to h where for all x∈^d, h(x) = 1/r+2x_2^r+2 + 1/2x_2^2. Relative convexity and smoothness between functionals of different types. When the functionals do not belong to the same type, comparing directly the Bregman divergences is less straightforward in general. 
In that case, one might instead leverage the equivalence relations given by <Ref> and <Ref>, and show that - or - is convex in order to show respectively the -smoothness and -convexity of relative to . For instance, we can use the characterization through Hessians, and thus we would aim at showing d^2/dt^2(μ_t) ≤d^2/dt^2(μ_t), d^2/dt^2(μ_t) ≥d^2/dt^2(μ_t), along the right μ_t. For instance, suppose (μ) = 1/2∬ W(x-y) dμ(x)dμ(x') and (μ) = ∫ Vdμ. Then, by <Ref> and <Ref>, we have, for μ_t=(_t)_#μ and _t= + t v, d^2/dt^2(μ_t) = ∫⟨∇^2 V(_t(x)) v(x), v(x) ⟩ dμ(x), and d^2/dt^2(μ_t) = ∬⟨∇^2 W(_t(x)-_t(y)) (v(x)-v(y)), v(x)⟩ dμ(x)dμ(y). To show the conditions of <Ref>, we need to take =𝕀 and v=_k+1 - 𝕀, and to verify for t=0 the inequality, i.e. d^2/dt^2(μ_t)|_t=0≤d^2/dt^2(μ_t)|_t=0 ∬⟨∇^2 W(x-y) (v(x)-v(y)), v(x) ⟩ dμ_k(x)dμ_k(y) ≤∫⟨∇^2 V(x) v(x), v(x) ⟩ dμ_k(x) ∫⟨ v(x), ∫((∇^2 W(x-y)-∇^2V(x))v(x) - ∇^2 W(x-y)v(y) )dμ_k(y) ⟩dμ_k(x) ≤ 0. For example, choosing W(x)=1/2x_2^2, then ∇^2W=I_d and is -smooth relative to as long as ∇^2 V ≽1/I_d. § BREGMAN PROXIMAL GRADIENT SCHEME In this section, we are interested into minimizing a functional of the form (μ) = (μ) + (μ) where is smooth relative to some function ϕ and is convex on L^2(μ). Different strategies can be used to tackle this problem. For instance, <cit.> restrict the space to particular directions along which is smooth while <cit.> use Proximal Gradient algorithms. We focus here on the latter and generalize the Bregman Proximal Gradient algorithm <cit.>, also known as the Forward-Backward scheme. It consists of alternating a forward step on and then a backward step on , i.e. for k≥ 0, {[ _k+1 = _∈ L^2(μ_k) _ϕ_μ_k(,𝕀) + τ⟨(μ_k), -𝕀⟩_L^2(μ_k), ν_k+1 = (_k+1)_#μ_k; _k+1 = _∈ L^2(ν_k+1) _ϕ_ν_k+1(,𝕀) + τ(_#ν_k+1), μ_k+1 = (_k+1)_#ν_k+1. ]. The first step of our analysis is to show that this scheme is equivalent with _k+1 = _∈ L^2(μ_k) _ϕ_μ_k(,𝕀) + τ(⟨(μ_k),-𝕀⟩_L^2(μ_k) + (_#μ_k)) μ_k+1 = (_k+1)_#μ_k. This is true under the condition that μ_k∈ implies that ν_k+1∈. Let ϕ_μ be pushforward compatible, μ_0∈ and assume that if μ_k∈ then ν_k+1∈. Then the schemes (<ref>) and (<ref>) are equivalent. See <Ref>. We are now ready to state the convergence results for the proximal gradient scheme. Let μ_0∈, τ≤1/ and (μ)=(μ)+(μ) with _μ_k convex on L^2(μ) and -smooth relative to ϕ along t↦((1-t)𝕀 + t _k+1)_#μ_k. Then, for all ∈ L^2(μ_k), (μ_k+1) ≤(_#μ_k) + (μ_k) + ⟨(μ_k), -𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(,𝕀) - 1/τ_ϕ_μ_k(,_k+1). Moreover, for =𝕀, (μ_k+1) ≤(μ_k) - 1/τ_ϕ_μ_k(𝕀, _k+1). Additionally, let ≥ 0, ν∈_2(^d) and suppose that ϕ_μ satisfies <Ref>. If is -convex relative to ϕ along t↦((1-t)𝕀 + t _ϕ_μ_k^μ_k,ν)_#μ_k, then for all k≥ 1, (μ_k)-(ν) ≤/(1-τ)^-k - 1_ϕ(ν,μ_0) ≤1-τ/kτ_ϕ(ν, μ_0). See <Ref>. We verify now that <Ref> can be applied for mirror schemes of interest. <cit.> showed that it holds for the Wasserstein proximal gradient of potentials, i.e. with ϕ(μ) = ∫1/2·_2^2 dμ and (μ) = ∫ U dμ with U (strictly) convex. We extend their result for (μ) = ∫ U dμ and ϕ(μ)=∫ V dμ for V strictly convex and U -smooth relative to V. Let μ∈, (μ)=∫ U dμ, ϕ_μ() = ∫ V∘ dμ with V strongly convex and U -smooth relative to V, and =∇ V^*∘ (∇ V - τ∇ U). Assume τ<1/, then _#μ∈. The proof of the lemma is inspired from <cit.>. The goal is to apply <cit.>, which requires to show that is injective almost everywhere and that |∇|>0 almost everywhere. See <Ref> for the proof. To apply <Ref>, we also need to be convex. Let μ∈, and denote ρ its density w.r.t the Lebesgue measure. 
For (μ) = ∫ f(ρ(x)) dx where f:→ is C^1 and satisfies f(0)=0, lim_x→ 0 x f'(x) = 0 and x↦ f(x^-d)x^d is convex and non-increasing on _+, then by <cit.>, is convex along curves μ_t=((1-t) + t)_#μ obtained with and with positive definite Jacobians. This is the case e.g. for f(x)=xlog x, for which corresponds to the negative entropy. In what follows, we focus on (μ)=∫ Udμ with U(x)=1/2 (x-m)^TΣ^-1(x-m) for Σ∈ S_d^++(), the negative entropy and with a Bregman potential of the form ϕ(μ)=∫ Vdμ with V(x)=1/2 x^TΛ^-1 x. Moreover, we suppose μ_0=(m_0,Σ_0). In this situation, each distribution μ_k is also Gaussian, as the forward and backward steps are affine operations. By <Ref>, to be able to apply the three-point inequality to have the descent lemma, we need to be convex along ((1-t)_k+1 + t𝕀)_#μ_k and along ((1-t)_k+1 + t_ϕ_μ_k^μ_k, ν)_#μ_k for the convergence. Assuming the covariances matrices are full rank, _k+1 is affine and its gradient is invertible. Moreover, by <Ref>, _ϕ_μ_k^μ_k,ν = ∇ u ∘ϕ(μ_k) for ∇ u an OT map between ϕ(μ_k)_#μ_k and ν. Since everyone is Gaussian, and ϕ(μ_k)(x) = Λ^-1 x is affine, is has a positive definite Jacobian. Thus, using <cit.>, we can conclude that we can apply <Ref>. Closed-form for Gaussian. Let (μ) = ∫ Udμ with U(x)=1/2 (x-m)^TΣ^-1(x-m), Σ∈ S_d^++(), m∈^d, and (μ) = ∫log(ρ(x)) dμ(x) for dμ = ρ(x)dx. For the Bregman potential, we will choose ϕ(μ)=∫ Vdμ for V(x)=1/2⟨ x, Λ^-1x⟩. Recall that the forward step reads as _k+1 = ∇ V^*∘(∇ V - τ(μ_k)), ν_k+1 = (_k+1)_#μ_k. Since ∇ V(x) = Λ^-1x, and μ_k = 𝒩(m_k,Σ_k), we obtain for all x∈^d, _k+1(x) = Λ(Λ^-1x - τΣ^-1(x-m)) = x - τΛΣ^-1(x-m). Thus, the output of the forward step is still a Gaussian of the form ν_k+1=𝒩(m_k+1/2, Σ_k+1/2) with m_k+1/2 = (I_d - τΛΣ^-1)m_k + τΛΣ^-1m Σ_k+1/2 = (I_d - τΛΣ^-1)^TΣ_k (I_d-τΛΣ^-1). Since ∇ V is linear, the output of the backward step stays Gaussian. Moreover, the first order conditions give ∇ V ∘_k+1 + τ∇log(ρ_k+1∘_k+1) = ∇ V ∀ x, Λ^-1x = Λ^-1_k+1(x) - τΣ_k+1^-1(_k+1(x)-m_k+1) ∀ x, x = _k+1(x) - τΛΣ_k+1^-1(_k+1(x) - m_k+1). Thus, the output is a Gaussian 𝒩(m_k+1, Σ_k+1) with (m_k+1, Σ_k+1) satisfying m_k+1 = m_k+1/2 Σ_k+1/2 = (I_d - τΛΣ_k+1^-1)^TΣ_k+1(I_d-τΛΣ_k+1^-1). Moreover, if Λ and Σ_k+1 commute, this is equivalent with Σ_k+1^2 -(2τΛ + Σ_k+1/2)Σ_k+1 + τ^2Λ^2 = 0, which solution is given by Σ_k+1 = 1/2( Σ_k+1/2 + 2τΛ + (Σ_k+1/2(4τΛ + Σ_k+1/2))^1/2). To sum up, the update is ν_k+1 = 𝒩((I_d-τΛΣ^-1)m_k + τΛΣ^-1m, (I_d - τΛΣ^-1)^TΣ_k(I_d-τΛΣ^-1)) μ_k+1 = 𝒩(m_k+1/2, 1/2 (Σ_k+1/2 + 2τΛ + (Σ_k+1/2(4τΛ+Σ_k+1/2))^1/2)). For Λ=Σ, we call it the ideally preconditioned Forward-Backward scheme (PFB). § ADDITIONAL DETAILS ON EXPERIMENTS §.§ Solving the general scheme In general, for ϕ pushforward compatible, one needs to solve at each iteration k≥ 0, ϕ(μ_k+1)∘_k+1 = ϕ(μ_k) - τ(μ_k). Except for the case ϕ(μ) = ∫ V dμ where ϕ(μ) = ∇ V does not depend on μ, one cannot in general invert ϕ(μ_k+1) directly. It might be the case even for ϕ_μ()=∫ V∘ dμ if ∇ V does not have an analytical inverse. A practical workaround is to solve an implicit problem, see e.g. <cit.>. Here, we use the Newton-Raphson algorithm. Basically, suppose we have μ=1/n∑_i=1^n δ_x_i. Then, the scheme is equivalent with ∀ j∈{1,…,n}, G_j(x_1,…,x_n) = 0, for G_j(x_1,…,x_n) =ϕ(1/n∑_i=1^n δ_x_i)(x_j) - ϕ(μ_k)(x_j) + τ(μ_k)(x_j). Write 𝒢(x_1,…,x_n) = (G_1(x_1,…,x_n),…,G_n(x_1,…,x_n)). Then, we perform the following Newton iterations at each step: (x_1^k+1,…,x_n^k+1) = (x_1^k,…,x_n^k) - γ(J_𝒢(x_1^k,…,x_n^k))^-1𝒢(x_1^k,…,x_n^k). 
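A minimal sketch of this root-finding procedure is given below — written with a finite-difference Jacobian for readability only; in practice one rather relies on Hessian-vector products, as noted right after. The callables mirror_grad and wasserstein_grad (our names) return the (n,d) arrays of ϕ(·) and (μ_k) evaluated at the particles of their argument.

import numpy as np

def newton_mirror_step(X_k, mirror_grad, wasserstein_grad, tau, gamma=1.0, n_newton=20, fd_eps=1e-6):
    # solve G(X) = 0 with G_j(X) = mirror_grad(X)[j] - mirror_grad(X_k)[j] + tau * wasserstein_grad(X_k)[j]
    n, d = X_k.shape
    rhs = (mirror_grad(X_k) - tau * wasserstein_grad(X_k)).ravel()   # fixed during the inner solve
    def G(flat_x):
        return mirror_grad(flat_x.reshape(n, d)).ravel() - rhs
    x = X_k.ravel().copy()
    for _ in range(n_newton):
        g = G(x)
        J = np.empty((x.size, x.size))          # nd x nd Jacobian (see the scaling remark below)
        for i in range(x.size):
            e = np.zeros_like(x); e[i] = fd_eps
            J[:, i] = (G(x + e) - g) / fd_eps
        x -= gamma * np.linalg.solve(J, g)
    return x.reshape(n, d)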
The Jacobian is of size nd× nd, which does not scale well with the dimension and the number of samples. We can reduce the complexity of the algorithm by relying on inverse Hessian vector products, see e.g. <cit.>. §.§ Mirror descent of interaction energies Details of <Ref>. We detail in this Section the first experiment of <Ref>. We aim at minimizing the interaction energy 𝒲(μ) = ∬ W(x-y) dμ(x)dμ(y) for W(z)=1/4z_2^4 - 1/2z_2^2. It is well-known that the stationary solution of its gradient flow is a Dirac ring <cit.>. Since the stationary solution is translation invariant, we project the measures to be centered. We study here two Bregman potentials which are also interaction energies. First, observing that ∇^2W(z) = 2zz^T + (z_2^2 - 1) I_d, we have for all z, ∇^2 W_op≤ 2z_2^2 + z_2^2 + 1 = 3 z_2^2 + 1 = p_2(z_2), with p_2(t) = 3 t^2 + 1. Thus, by <cit.>, W is -smooth relative to K_4(z) = 1/4z_2^4 + 1/2z_2^2 with = 4. Thus, using <Ref>, 𝒲_μ is -smooth relative to ϕ_μ() = ∬ K((x)-(x')) dμ(x)dμ(x') for all μ, and we can apply <Ref>. Under the additional hypothesis that the measures are compactly supported, and thus there exists M>0 such that x_2^2 ≤ M for μ-almost every x, we can also show that W is -smooth relative to K_2(z)=1/2z_2^2. Indeed, on one hand, ∇^2 K = I_d and ∇^2 W(z) = 2zz^T + (z_2^2 - 1) I_d. Thus, for all v,z∈^d, v^T ∇^2 W(z) v = 2 ⟨ z,v⟩^2 + (z_2^2 - 1) v_2^2 ≤ 3 z_2^2 v_2^2 ≤ 3 M v_2^2 = 3 M v^T ∇^2 K(z) v. In <Ref>, we plot the evolution of 𝒲 along the flows obtained with these two Bregman potential, starting from μ_0 = (0, 0.25^2 I_2) for n=100 particles, with a step size of τ=0.1 for 120 epochs. Ill-conditioned interaction energy. We also study the minimization of an interaction energy with an ill-conditioned kernel W(z) = 1/4 (z^T Σ^-1 z)^2 - 1/2 z^TΣ^-1z where Σ∈ S_d^++() but is possibly badly conditioned, i.e. the ratio between the largest and smallest eigenvalues is big. In this case, the stationary solution becomes an ellipsoid instead of a ring. In our experiments, we take Σ=diag(100, 0.1). For each scheme, we use μ_0 = (0, 0.25^2 I_2), n=100 particles and a step size of τ=0.1. On <Ref>, we use Bregman potentials which take into account this conditioning, namely we use K_2^Σ(z) = 1/2 z^TΣ^-1 z and K_4^Σ(z) = 1/4 (z^TΣ^-1 z)^2 - 1/2 (z^TΣ^-1 z), and we observe that the convergence is much faster compared to the same kernels without preconditioning. For K_2^Σ(z) = 1/2 z^TΣ^-1 z, the scheme becomes (∇ K ⋆μ_k+1)∘_k+1 = ∇ K ⋆μ_k - γ(μ_k) Σ^-1(_k+1 - m(μ_k+1)) = Σ^-1(𝕀 - m(μ_k)) - γΣ^-1(𝕀^T Σ^-1𝕀 - 1) 𝕀 _k+1 - m(μ_k+1) = 𝕀 - m(μ_k) - γ (𝕀^T Σ^-1𝕀 - 1)𝕀. Thus, we see that Σ^-1 has less influence which might explain the faster convergence. Similarly as in the not preconditioned case, using that ∇^2 W(z) = 2Σ^-1 z z^T Σ^-1 + (z^TΣ^-1 z - 1) Σ^-1, we can show that v^T ∇^2 W(z) v = 2 ⟨ z,v⟩_Σ^-1^2 + (z_Σ^-1^2 - 1) v_Σ^-1^2 ≤ 3 M v_Σ^-1^2 = 3 M v^T ∇^2 K(z) v. For the sake of comparison, we also report on <Ref> the trajectories of particles for the use of K_4 and K_4^Σ, as well as of the usual Wasserstein gradient descent and the preconditioned Wasserstein gradient descent obtained with h^*(x)=1/2 x^TΣ x (which is equivalent with the Mirror Descent with ϕ_μ^V as Bregman potential and V(x)=1/2 x^T Σ^-1 x). We observe almost the same trajectories as K_2, which would indicate that the target is also smooth compared to ϕ_μ^V. Runtime. These experiments were run on a personal Laptop with a CPU Intel Core i5-9300H. 
For the interaction energy as Bregman potential, running the algorithm with Newton's method for n=100 particles in dimension d=2 for 120 epochs took about 5 minutes for K_2 and K_2^Σ, and about 1 hour for K_4 and K_4^Σ. §.§ Mirror descent on Gaussians As the Mirror Descent scheme cannot be computed in closed-form for Bregman potentials which are not potential energies, and is thus computationally costly, we restrict ourselves here to the Gaussian setting. We choose as target distribution ν=(0,Σ) for Σ a symmetric positive definite matrix in ^10× 10, and thus the functional to be optimized is (μ)=∫ Vdμ + (μ) with V(x)=1/2 x^T Σ^-1 x. The initial distribution is always chosen as μ_0=(0,I_d). In all cases, the step size is chosen as τ=0.01, and we run the scheme for 1500 iterations. For the target distributions, we sample 20 random covariances of the form Σ=UDU^T with D evenly spaced in log scale between 1 and 100, and U∈^10 × 10 chosen as a uniformly random orthogonal matrix, as in <cit.>, and we report the averaged KL divergence over iterations in <Ref>. We add on <Ref> the same experiments with targets of the form (0,D) where D is a diagonal matrix on ^10× 10 sampled uniformly over [0,50]^10. We compare here the Forward-Backward (FB) scheme of <cit.>, the ideally preconditioned Forward-Backward scheme (PFB), which uses the closed-form (<ref>) derived in <Ref> with Λ=Σ, the Mirror Descent with negative entropy Bregman potential (NEM), whose closed-form was derived in <Ref>: ∀ k≥ 0, Σ_k+1^-1 = ((1-τ)Σ_k^-1 + τΣ^-1)^T Σ_k ((1-τ)Σ_k^-1 + τΣ^-1). We also experiment with the KL divergence as Bregman potential (KLM) and the ideally preconditioned KL divergence (PKLM). We observe that, even though the objective is convex relative to the Bregman potential, this scheme does not always converge. It might be due to its gradient, which might not always be invertible, or to an issue with the subgradient-type scheme itself. We note that using as Bregman potential ϕ_μ()=∫ψ∘dμ for ψ(x)=1/2 x^TΛ^-1 x is equivalent with using a preconditioner with (x) = 1/2 x^T Λ x. Analysis of the convergence. It is well-known that along the Wasserstein gradient flow of the KL divergence starting from a Gaussian and with a Gaussian target (Ornstein-Uhlenbeck process), the measures stay Gaussian <cit.>. Thus, the Forward-Backward scheme has Gaussian iterates at each step <cit.>. We also use a linearly preconditioned Forward-Backward scheme, whose closed-form is derived in (<ref>) (<Ref>). For the Bregman potential, we choose ϕ_μ() = ∫ψ∘ dμ for ψ(x) = 1/2 x^T Σ x. In this situation, (μ) = ∫ Vdμ is 1-smooth and 1-convex relative to ϕ_μ. Thus, we can apply <Ref>. We refer to <Ref> for more details on the convexity of . For Bregman potentials whose gradient is not affine, the distributions do not necessarily stay Gaussian along the flows. Thus, we work on the Bures-Wasserstein space and use the Bures-Wasserstein gradient, i.e. we project the gradient on the space of affine maps with symmetric linear term, i.e. of the form (x)=b + S(x-m) with S∈ S_d() <cit.>. We refer to <cit.> for more details on this submanifold. This can be seen as performing Variational Inference. We derive the closed-form of the different schemes in <Ref>. Even though these procedures do not fit exactly the theory developed in this work, we show the relative smoothness of relative to along the curve μ_t = ((1-t)𝕀+t_k+1)_#μ_k under the hypothesis that the covariance matrices have bounded eigenvalues.
Moreover, since __μ=_ϕ_μ^V + __μ≥__μ, is also 1-convex relative to . Let λ>0, (μ) = ∫ Vdμ + (μ) with V(x)=1/2 x^T Σ^-1 x where Σ∈ S_d^++() and Σ≼λ I_d. Suppose that for all k≥ 0, (1-τ) Σ_k+1Σ_k^-1 + τΣ_k+1Σ^-1≽ 0. Then, is smooth relative to along μ_t = ((1-t)𝕀 + t _k+1)_#μ_k where μ_k=𝒩(0,Σ_k) with Σ_k ∈ S_d^++(), Σ_k≼λ I_d. See <Ref>. §.§ Single-cell experiments First, we provide more details on the experiment on single cells of <Ref>. Then, we detail a second experiment comparing the method with using a static map. Details on the metrics. We show the benefits of using the polynomial preconditioner over the single-cell datasets for different metrics. The first one considered is the Sliced-Wasserstein distance <cit.>, defined as ∀μ,ν∈_2(^d), SW_2^2(μ,ν) = ∫_S^d-1_2^2(P^θ_#μ, P^θ_#ν) dλ(θ), where S^d-1={θ∈^d, θ_2=1}, λ denotes the uniform distribution on S^d-1 and for all θ∈ S^d-1, x∈^d, P^θ(x)=⟨ x,θ⟩. For (μ)=1/2SW_2^2(μ,ν), the Wasserstein gradient can be computed as <cit.> (μ) = ∫_S^d-1ψ_θ'(P^θ(x))θ dλ(θ), where, for t∈, ψ_θ'(t) = t - F_P^θ_#ν^-1(F_P^θ_#μ(t)) with F_P^θ_#μ the cumulative distribution function of P^θ_#μ. In practice, we compute SW and its gradient using a Monte-Carlo approximation by first drawing L uniform random directions θ_1,…,θ_L. The second one considered is the Sinkhorn divergence <cit.> defined as ∀μ,ν∈_2(^d), _ε,2^2(μ,ν) = OT_ε(μ,ν) - 1/2OT_ε(μ,μ) - 1/2OT_ε(ν,ν), with OT_ε(μ,ν) = inf_γ∈Π(μ,ν) ∫x-y_2^2 dγ(x,y) + εKL(γ||μ⊗ν). The Wasserstein gradient of _ε,2^2 is simply obtained as the potential <cit.>. Finally, we also consider the energy distance, defined as ∀μ,ν∈_2(^d), ED(μ,ν) = - ∬x-y_2 d(μ-ν)(x)d(μ-ν)(y). To compute its Wasserstein gradient, we use the sliced procedure of <cit.>. Parameters chosen. For all the metrics, we fixed the step size at τ=1. To choose the parameter a of the preconditioner h^*(x) = (x_2^a + 1)^1/a-1, we ran a grid search over a∈{1.25,1.5,1.75} for a random treatment, and used it for all the others. In particular, we used for the dataset 4i a=1.5 for the Sinkhorn divergence and for SW, and a=1.75 for the energy distance. For the scRNAseq dataset, we used a=1.25 for the Sinkhorn divergence and SW, and a=1.5 for the energy distance. We note that for the dataset 4i, the data lie in dimension d=48 and d=50 for scRNAseq. For all the metrics, we first sampled 4096 particles from the source (untreated) dataset, and used in average between 2000 and 3000 samples from the target dataset. For the test value, we also added 40% of unseen cells following <cit.>. Note that we reported the results in <Ref> for 3 different initializations for each treatment, and reported these results with their mean. We report the results using a fixed relative tolerance tol=10^-3, i.e. at the first iteration where |(μ_k)-(μ_k-1)|/(μ_k-1) ≤tol, with a maximum value of iterations of 10^4. For the Sinkhorn divergence, we chose ε as 10^-1 time the variance of the target. Finally, for SW and the computation of the gradient of the energy distance, we used a Monte-Carlo approximation with L=1024 projections. Comparison to an OT static map. We now compare the prediction of the response of cells to a perturbation using Wasserstein gradient descent, with and without preconditioning, to the one provided by a static estimator, the entropic map T_ε <cit.>. This experiment motivates the use of a dynamic procedure, iterating multiple steps to map the unperturbed population μ to the perturbed population ν, instead of a unique static step. 
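For reference, a minimal NumPy sketch of the Monte-Carlo approximation of the Wasserstein gradient of μ↦1/2SW_2^2(μ,ν) described above, for empirical source and target clouds, is given below. This is an illustrative reimplementation rather than the exact code used for the experiments; in particular, the empirical CDF and quantile conventions, and the helper name sw2_gradient, are choices made for the example.

```python
import numpy as np

# Illustrative sketch: Monte-Carlo estimator of the Wasserstein gradient of
# F(mu) = 1/2 SW_2^2(mu, nu) at an empirical measure supported on X (source),
# with nu the empirical measure supported on Y (target).  L random directions
# are drawn uniformly on the sphere S^{d-1}.
def sw2_gradient(X, Y, L=1024, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    theta = rng.standard_normal((L, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform directions on S^{d-1}
    grad = np.zeros_like(X)
    for t in theta:
        px, py = X @ t, Y @ t                    # one-dimensional projections
        ranks = np.argsort(np.argsort(px))       # rank of each projected source point
        q = np.quantile(py, (ranks + 1.0) / n)   # empirical F_{P#nu}^{-1}(F_{P#mu}(px))
        grad += np.outer(px - q, t)              # psi_theta'(P^theta(x)) * theta
    return grad / L                              # Monte-Carlo average over directions

# One (unpreconditioned) Wasserstein gradient descent step then reads
#   X_next = X - tau * sw2_gradient(X, Y).
```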
We use the proteomic dataset <cit.> as the one considered in <ref>. We use the default OTT-JAX <cit.> of T_ε. The results are shown in Figure <ref>. Runtime. For this experiment, we used a GPU Tesla P100-PCIE-16GB. Depending on the convergence and on the metric considered, each run took in between 30s and 10mn. So in total, it took a few hundred of hours of computation time. §.§ Mirror descent on the simplex We can also leverage the mirror map to perform sampling in constrained spaces. This has received a lot of attention recently either through mirror Langevin methods <cit.>, diffusion methods <cit.>, mirror SVGD <cit.> or other MCMC algorithms <cit.>. The goal here is to sample from a Dirichlet distribution, i.e. from a distribution ν∝ e^-V where V(x) = - ∑_i=1^d a_i log(x_i) - a_d+1log(1-∑_i=1^d x_i). To sample from such a distribution, we minimize the Kullback-Leibler divergence, i.e. (μ) = (μ || ν)= ∫ V dμ + (μ). To stay on the (open) simplex Δ_d = {x∈ℝ^d+1, x_i>0, ∑_i=1^d+1 x_i < 1}, we use the mirror map ϕ(μ) = ∫ψdμ with ψ(x) = ∑_i=1^d x_i log(x_i) + (1-∑_i x_i) log (1-∑_i x_i) for which ∇ψ(x) = (log x_i - log(1 - ∑_j x_j) )_i, ∇ψ^*(y) = (e^y_i/1+∑_j e^y_j)_i. The scheme here is given by _k+1 = ∇ψ^*∘ (∇ψ - γ(μ_k)), where (μ_k) = ∇ V + ∇logμ_k, with the density of μ_k estimated through a Kernel Density Estimator (KDE). We plot on <Ref> the results obtained for d=2, a_1=a_2=a_3=6 and 100 samples. We also report the results for the Mirror Langevin Dynamic (MLD) algorithm, which provide iid samples, which are thus less ordered. We plot the evolution of the KL over iterations on <Ref> (where the entropy is estimated using the Kozachenko-Leonenko estimator <cit.>). The KDE used here will not scale well with the dimension, however, different methods have been recently propose to overcome this issue, such as using projection on lower dimensional subspaces <cit.>, or using neural networks to learn ratio density estimators <cit.>. § PROOFS §.§ Proof of <Ref> Let μ,ρ∈ and ν∈_2(^d). Define _ϕ_μ^μ,ν=__#μ=ν _ϕ_μ(,𝕀), _ϕ_ρ^ρ,ν=__#ρ =ν _ϕ_ρ(,𝕀) and let ∈ L^2(μ) such that _#μ=ρ. Then, noticing that γ=(_ϕ_μ^μ,ν, )_#μ∈Π(ν,ρ), we have _ϕ_μ(_ϕ_μ^μ,ν, ) = ϕ((_ϕ_μ^μ,ν)_#μ) - ϕ(_#ν) - ∫⟨ϕ(_#μ)(y), x-y⟩ d(_ϕ_μ^μ,ν,)_#μ(x,y) = ϕ(ν) - ϕ(ρ) - ∫⟨ϕ(ρ)(y), x-y⟩ dγ(x,y) ≥_ϕ(ν,ρ) = _ϕ_ρ(_ϕ_ρ^ρ, ν, 𝕀). In the last line, we used <Ref>, i.e. that the optimal coupling is of the form (_ϕ_ρ^ρ,ν, 𝕀)_#ρ. §.§ Proof of <Ref> Let _k+1=_∈ L^2(μ_k) τ⟨∇__2(μ_k), -𝕀⟩_L^2(μ_k) +_ϕ_μ_k(, 𝕀). Applying the 3-point inequality (<Ref>) with ψ() = τ⟨∇__2(μ_k), -𝕀⟩_L^2(μ_k) which is convex, _0=𝕀 and ^*=_k+1, we get for all ∈ L^2(μ_k), τ⟨∇__2(μ_k), - 𝕀⟩_L^2(μ_k) + _ϕ_μ_k(,𝕀) ≥τ⟨∇__2(μ_k), _k+1-𝕀⟩_L^2(μ_k) + _ϕ_μ_k(_k+1,𝕀) + _ϕ_μ_k(,_k+1), which is equivalent to ⟨∇__2(μ_k), _k+1-𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(_k+1, 𝕀) ≤⟨∇__2(μ_k), -𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(,𝕀) - 1/τ_ϕ_μ_k(,_k+1). By the -smoothness of _μ_k relative to ϕ_μ_k, we also have __μ_k(_k+1, 𝕀) = _μ_k(_k+1) - _μ_k(𝕀) - ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k)≤_ϕ_μ_k(_k+1,𝕀) _μ_k(_k+1) ≤_μ_k(𝕀) + ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k) + _ϕ_μ_k(_k+1,𝕀). Moreover, since ≤1/τ, this inequality implies (by non-negativity of _ϕ_μ_k), _μ_k(_k+1) ≤_μ_k(𝕀) + ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(_k+1,𝕀). Then, using the inequality (<ref>), we obtain for all ∈ L^2(μ_k), _μ_k(_k+1) ≤_μ_k(𝕀) + ⟨∇__2(μ_k), - 𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(,𝕀) - 1/τ_ϕ_μ_k(, _k+1) Observing that _μ_k(_k+1) = (μ_k+1) and _μ_k(𝕀) = (μ_k), we get (μ_k+1) ≤(μ_k) + ⟨(μ_k), -𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(,𝕀) - 1/τ_ϕ_μ_k(, _k+1). 
Finally, setting =𝕀, we obtain the result: (μ_k+1) ≤(μ_k) - 1/τ_ϕ_μ_k(𝕀, _k+1). §.§ Proof of <Ref> Let ν∈_2(^d), and =_, _#μ_k=ν _ϕ_μ_k(,𝕀). From the relative convexity hypothesis, we have __μ_k(,𝕀) ≥ _ϕ_μ_k(,𝕀) _μ_k()-_μ_k(𝕀)-⟨∇__2(μ_k), -𝕀⟩_L^2(μ_k)≥_ϕ_μ_k(,𝕀) _μ_k() - _ϕ_μ_k(,𝕀) ≥_μ_k(𝕀) + ⟨∇__2(μ_k), -𝕀⟩_L^2(μ_k) (ν) - _ϕ_μ_k(,𝕀) ≥(μ_k) + ⟨∇__2(μ_k), -𝕀⟩_L^2(μ_k). Plugging this into (<ref>), we get (μ_k+1) ≤(ν) + 1/τ(_ϕ_μ_k(,𝕀)-_ϕ_μ_k(, _k+1)) - _ϕ_μ_k(, 𝕀). Then, by definition of , note that _ϕ_μ_k(,𝕀) = _ϕ(ν,μ_k), and by <Ref>, we have _ϕ_μ_k(,_k+1) ≥_ϕ(ν, μ_k+1), since _#μ_k=ν and (_k+1)_#μ_k = μ_k+1. Thus, (μ_k+1) - (ν) ≤(1/τ-) _ϕ(ν,μ_k) - 1/τ_ϕ(ν,μ_k+1). Observing that (μ_k)≤(μ_ℓ) for all ℓ≤ k (by <Ref> and non-negativity of _ϕ for ϕ convex) and that _ϕ(ν,μ) ≥ 0, we can apply <Ref> with f=, c=(ν) and g=_ϕ(ν,·), and we obtain ∀ k≥ 1, (μ_k) - (ν) ≤/(1/τ/1/τ-)^k - 1_ϕ(ν,μ_0) ≤1/τ - /k_ϕ(ν, μ_0). For the second result, from (<ref>), we get for ν=μ^* the minimizer of , since (μ_k+1)-(μ^*)≥ 0, _ϕ(μ^*, μ_k+1) ≤(1-τ) _ϕ(μ^*, μ_k) ≤(1-τ)^k+1_ϕ(μ^*, μ_0). §.§ Proof of <Ref> Let k≥ 0, by the definition of _ϕ_μ_k^ and the hypothesis _ϕ_μ_k^((μ_k+1)∘_k+1, (μ_k)) ≤__μ_k(𝕀, _k+1) , we have ϕ_μ_k+1^((μ_k+1)) = ϕ_μ_k^((μ_k)) + ⟨∇∘(μ_k), (μ_k+1)∘_k+1-(μ_k)⟩_L^2(μ_k) + _ϕ_μ_k^(((_k+1)_#μ_k)∘_k+1, (μ_k)) ≤ϕ_μ_k^((μ_k)) + ⟨∇∘(μ_k), (μ_k+1)∘_k+1-(μ_k)⟩_L^2(μ_k) + __μ_k(𝕀, _k+1) ≤ϕ_μ_k^((μ_k)) + ⟨∇∘(μ_k), (μ_k+1)∘_k+1-(μ_k)⟩_L^2(μ_k) + 1/τ__μ_k(𝕀, _k+1), where we used in the last line that τ≤1/ and the non-negativity of the Bregman divergence since is convex along t↦((1-t)_k+1 + t 𝕀)_#μ_k and thus by <Ref>, __μ_k(𝕀, _k+1) ≥ 0. Let ∈ L^2(μ_k). Then, using the three-point identity (<Ref>) (with =𝕀, = and =_k+1), and remembering that _k+1 = 𝕀 - τ∇∘(μ_k), we get __μ_k(𝕀, _k+1) = __μ_k(𝕀, ) - __μ_k(_k+1,) - ⟨((_k+1)_#μ_k)∘_k+1, 𝕀 - _k+1⟩_L^2(μ_k) + ⟨(_#μ_k)∘,𝕀-_k+1⟩_L^2(μ_k) = __μ_k(𝕀, ) - __μ_k(_k+1,) + ⟨(_#μ_k)∘ - (μ_k+1)∘_k+1, 𝕀-_k+1⟩_L^2(μ_k) = __μ_k(𝕀, ) - __μ_k(_k+1,) + τ⟨(_#μ_k)∘ - (μ_k+1)∘_k+1, ∇∘(μ_k)⟩_L^2(μ_k). This is equivalent with ⟨∇∘ (μ_k), (μ_k+1)∘_k+1-(μ_k)⟩_L^2(μ_k) + 1/τ__μ_k(𝕀, _k+1) = 1/τ__μ_k(𝕀, ) - 1/τ__μ_k(_k+1,) + ⟨(_#μ_k)∘ - (μ_k), ∇∘(μ_k)⟩_L^2(μ_k). Then, using the definition of _ϕ_μ_k^((_#μ_k)∘, (μ_k)), we obtain ⟨∇∘(μ_k), (μ_k+1)∘_k+1-(μ_k)⟩_L^2(μ_k) + 1/τ__μ_k(𝕀, _k+1) = 1/τ__μ_k(𝕀, ) - 1/τ__μ_k(_k+1,) - _ϕ_μ_k^((_#μ_k)∘, (μ_k)) + ϕ_μ_k^((_#μ_k)∘) - ϕ_μ_k^((μ_k)). Plugging this into (<ref>), we get ϕ_μ_k+1^((μ_k+1)) ≤ϕ_μ_k^((_#μ_k)∘) + 1/τ__μ_k(𝕀, ) - 1/τ__μ_k(_k+1,) - _ϕ_μ_k^((_#μ_k)∘, (μ_k)). For =𝕀, we get ϕ_μ_k+1^((μ_k+1)) ≤ϕ_μ_k^((μ_k)) - 1/τ__μ_k(_k+1,𝕀). §.§ Proof of <Ref> Let μ^*∈_2(^d) be the minimizer of , k≥ 0 and =_∈ L^2(μ_k), _#μ_k=μ^* __μ_k(𝕀,). First, observe that since μ^* is the minimizer of , then (μ^*) = 0 (see e.g. <cit.>), and thus ϕ_μ_k^(0)=(0). Moreover, it induces that __μ_k(𝕀,) = (μ_k)-(μ^*) and __μ_k(_k+1,) = (μ_k+1) - (μ^*). Therefore, using (<ref>) and the hypothesis __μ_k(𝕀,)≤_ϕ_μ_k^(0, (μ_k)), we get ϕ_μ_k+1^((μ_k+1)) - (0) ≤1/τ__μ_k(𝕀,)- 1/τ__μ_k(_k+1, ) - _ϕ_μ_k^(0, (μ_k)) ≤1/τ__μ_k(𝕀,)- 1/τ__μ_k(_k+1, ) -__μ_k(𝕀, ) = (1/τ-)__μ_k(𝕀,) - 1/τ__μ_k(_k+1,) = (1/τ-) ((μ_k)-(μ^*)) - 1/τ((μ_k+1)-(μ^*)). Then, applying <Ref> with f=ϕ_·^∘ (which satisfies ϕ_μ_k+1^((μ_k+1)) ≤ϕ_μ_k^((μ_k)) by <Ref>), c=(0) and g = (·) - (μ^*)≥ 0, we get ϕ_μ_k^((μ_k)) - (0) ≤/(1/τ/1/τ-)^k - 1((μ_0)-(μ^*)) ≤1/τ-/k((μ_0)-(μ^*)). Concerning the convergence of (μ_k), if >0 and attains its minimum in 0, then necessarily ϕ_μ^()≥(0) for all μ∈_2(^d) and ∈ L^2(μ). 
Thus, using (<ref>), we get 0 ≤ϕ_μ_k+1^((μ_k+1)) - (0) ≤1/τ__μ_k(𝕀,)- 1/τ__μ_k(_k+1, ) - _ϕ_μ_k^(0, (μ_k)) ≤1/τ((μ_k) - (μ^*)) - 1/τ((μ_k+1)-(μ^*)) - __μ_k(𝕀,) = (1/τ - ) ((μ_k) - (μ^*)) - 1/τ((μ_k+1)-(μ^*)). Thus, for all k≥ 0, (μ_k+1) - (μ^*) = (1-τ) ((μ_k)-(μ^*)) ≤(1-τ)^k+1((μ_0)-(μ^*)). §.§ Proof of <Ref> Let us define _μ:L^2(μ)×^d →^d as for all ∈ L^2(μ), x∈^d, _μ(, x) = (_#μ)(x) = [ ∂/∂ x_1δ/δμ(_#μ)(x); ⋮; ∂/∂ x_dδ/δμ(_#μ)(x) ] = [ G_μ^1(, x); ⋮; G_μ^d(, x) ], with for all i, G_μ^i:L^2(μ)×^d→, G_μ^i(,x)=∂/∂ x_iδ/δμ(_#μ)(x). Using the chain rule, for all x∈^d, dG_μ^i/ds(_s, _s(x)) = ⟨∇_1G_μ^i(_s, _s(x)), d_s/ds⟩_L^2(μ) + ⟨∇_2G_μ^i(_s, _s(x)), d_s/ds(x)⟩. On one hand, we have ∇_2G_μ^i(_s, _s(x)) = ∇∂/∂ x_iδ/δμ((_s)_#μ)(_s(x)). On the other hand, let us compute ∇_1G_μ^i(,x). First, we define the shorthands g_μ^x,i() = G_μ^i(,x) = ∂/∂ x_iδ/δμ(_#μ)(x) and g^x,i(ν) = ∂/∂ x_iδ/δμ(ν)(x). Since g_μ^x,i() = g^x,i(_#μ), applying <Ref>, we know that ∇_1G_μ(,x) = ∇g_μ^x,i() = g^x,i(_#μ)∘ T. Now, let us compute g^x,i(ν) = ∇δ g^x,i/δμ(ν). Let χ be such that ∫dχ = 0, then using the hypothesis that δ/δμ∇δ/δμ = ∇δ^2/δμ^2 and the definition of g^x,i, ∫δ g^x,i/δμ(ν) dχ = ∫∂/∂ x_iδ^2/δμ^2(ν)(x,y) dχ(y). Thus, g^x,i(ν) = ∇_y∂/∂ x_iδ^2/δμ^2(ν)(x,y). Putting everything together, we obtain dG_μ^i/ds(_s,_s(x)) = ⟨∇_y ∂/∂ x_iδ^2/δμ^2((_s)_#μ)(_s(x), _s(·)), d_s/ds⟩_L^2(μ) + ⟨∇∂/∂ x_iδ/δμ((_s)_#μ)(_s(x)), d_s/ds(x) ⟩ = ∫⟨∇_y ∂/∂ x_iδ^2/δμ^2((_s)_#μ)(_s(x), _s(y)), d_s/ds(y) ⟩ dμ(y) + ⟨∇∂/∂ x_iδ/δμ((_s)_#μ)(_s(x)), d_s/ds(x) ⟩, and thus d/ds_μ(_s, _s(x)) = ∫∇_y∇_xδ^2/δμ^2((_s)_#μ)(_s(x), _s(y)) d_s/ds(y) dμ(y) + ∇^2δ/δμ((_s)_#μ)(_s(x)) d_s/ds(x). §.§ Proof of <Ref> First, recall that by using the chain rule and <Ref>, d/dt(μ_t) = ⟨(μ_t)∘_t, d_t/dt⟩_L^2(μ). Thus, since d^2_t/dt^2=0, d^2/dt^2(μ_t) = d/dt⟨(μ_t)∘_t, d_t/dt⟩_L^2(μ) = ⟨d/dt((μ_t)∘_t), d_t/dt⟩_L^2(μ). By <Ref>, d^2/dt^2(μ_t) = ∬⟨∇_y∇_xδ^2/δμ^2((_t)_#μ)(_t(x), _t(y)) d_t/dt(y), d_t/dt(x)⟩ dμ(y)dμ(x) + ∫⟨∇^2δ/δμ((_t)_#μ)(_t(x)) d_t/dt(x), d_t/dt(x)⟩ dμ(x) = ∬⟨∇_y∇_xδ^2/δμ^2((_t)_#μ)(_t(x), _t(y)) v(y), v(x)⟩ dμ(y)dμ(x) + ∫⟨∇^2δ/δμ((_t)_#μ)(_t(x)) v(x), v(x)⟩ dμ(x) = ∫⟨∫∇_y∇_xδ^2/δμ^2((_t)_#μ)(_t(x), _t(y)) v(y) dμ(y) + ∇^2δ/δμ((_t)_#μ)(_t(x)) v(x), v(x)⟩ dμ(x). §.§ Proof of <Ref> * <ref> <ref>. Let t>0, t_1,t_2∈ [0,1], ℱ(μ_t^t_1→ t_2) ≤ (1-t)ℱ((_t_1)_#μ) + tℱ((_t_2)_#μ) ℱ(μ_t^t_1→ t_2)-ℱ((_t_1)_#μ)/t≤ℱ((_t_2)_#μ)-ℱ((_t_1)_#μ). Passing to the limit t→ 0 and using <Ref>, we get ⟨∇_W_2ℱ((_t_1)_#μ)∘_t_1, _t_2-_t_1⟩_L^2(μ)≤ℱ((_t_2)_#μ) - ℱ((_t_1)_#μ). * <ref> <ref>. Let t_1, t_2 ∈ [0,1], then by hypothesis, ⟨∇_W_2ℱ((_t_1)_#μ)∘_t_1, _t_2-_t_1⟩_L^2(μ)≤ℱ((_t_2)_#μ) - ℱ((_t_1)_#μ) ⟨∇_W_2ℱ((_t_2)_#μ)∘_t_2, _t_1-_t_2⟩_L^2(μ)≤ℱ((_t_1)_#μ) - ℱ((_t_2)_#μ) Summing the two inequalities, we get ⟨∇_W_2ℱ((_t_2)_#μ)∘_t_2 - ∇_W_2ℱ((_t_1)_#μ)∘_t_1, _t_2-_t_1⟩_L^2(μ)≥ 0 * <ref> <ref>. Let t_1, t_2∈ [0,1]. First, we have, ∫_0^1 d^2/dt^2ℱ(μ_t^t_1→ t_2) dt = d/dtℱ(μ_t^t_1→ t_2)|_t=1 - d/dtℱ(μ_t^t_1→ t_2)|_t=0 = ⟨∇_W_2ℱ((_t_2)_#μ)∘_t_2 - ∇_W_2ℱ((_t_1)_#μ)∘_t_1, _t_2-_t_1⟩_L^2(μ) ≥ 0. Let ϵ∈ (0,1) and define t↦ν_t^ϵ = μ_ϵ t^t_1→ 1 the interpolation curve between (_t_1)_#μ and (_t_1 + ϵ (-_t_1))_#μ. Then, noting that _t_1 + ϵ (-_t_1) = _t_1 + ϵ (1-t_1), so ν_t^ϵ=μ_ϵ t^t_1→ 1=μ_t^t_1→ t_1 + ϵ (1-t_1) and we have that ∫_0^1 d^2/dt^2ℱ(ν_t^ϵ) dt ≥ 0. Moreover, by continuity, d^2/dt^2ℱ(ν_t^ϵ) d^2/dt^2ℱ((_t_1)_#μ) = d^2/dt^2ℱ(μ_t_1). Then, since t↦^2/ t^2(ν_t^ϵ) is continuous on [0,1], it is bounded, and we can apply the dominated convergence theorem. 
This implies that for all t_1∈ [0,1], Hess_μ_t_1ℱ = d^2/dt^2ℱ(μ_t)|_t=t_1 = lim_ϵ→ 0∫_0^1 ^2/ t^2(ν_t^ϵ) dt ≥ 0. * <ref> <ref>. Let t_1,t_2∈ [0,1] and φ(t) = ℱ(μ_t^t_1→ t_2) for all t∈ [0,1]. From <cit.>, ∀ t∈ [0,1], φ(t) = (1-t)φ(0) + t φ(1) - ∫_0^1 d^2/dt^2φ(s) G(s,t) ds, where G is the Green function defined as G(s,t) = s(1-t)1_{s≤ t} + t(1-s)1_{t≤ s}≥ 0 <cit.>. Then, d^2/dt^2ℱ(μ_t) ≥ 0 implies that ∫_0^1 d^2/dt^2φ(s) G(s,t) ds ≥ 0, and thus φ(t) = ℱ(μ_t^t_1→ t_2) ≤ (1-t)φ(0) + tφ(1) = (1-t)ℱ((_t_1)_#μ) + tℱ((_t_2)_#μ). §.§ Proof of <Ref> Let () = _ϕ_μ_k(,𝕀) + τ((μ_k), -𝕀⟩_L^2(μ_k) + (_#μ_k)). Taking the first variation, we get ∇(_k+1) = ∇ϕ_μ_k(_k+1) - ∇ϕ_μ_k(𝕀) + τ((μ_k) + ((_k+1)_#μ_k)∘_k+1) = ∇ϕ_μ_k(_k+1) + τ((_k+1)_#μ_k)∘_k+1 - (∇ϕ_μ_k(𝕀) - τ(μ_k)) = ∇ϕ_μ_k(_k+1) + τ((_k+1)_#μ_k)∘_k+1 - ∇ϕ_μ_k(_k+1). Thus, ∇(_k+1)=0 _k+1∈_∈ L^2(μ_k) _ϕ_μ_k(,_k+1) + τ(_#μ_k). Now, we aim at showing that _k+1 = _k+1∘_k+1 or min_∈ L^2(μ_k) _ϕ_μ_k(,_k+1) + τ(_#μ_k) = min_∈ L^2(ν_k+1) _ϕ_ν_k+1(,𝕀) + τ(_#ν_k+1). First, by the change of variable formula, since ϕ_μ is pushforward compatible, observe that for ∈ L^2(ν_k+1), _ϕ_ν_k+1(,𝕀) + τ(_#ν_k+1) = _ϕ_μ_k(∘_k+1, _k+1) + τ((∘_k+1)_#μ_k). Since {∘_k+1 | ∈ L^2(ν_k+1)}⊂ L^2(μ_k), we have min_∈ L^2(ν_k+1) _ϕ_ν_k+1(,𝕀) + τ(_#ν_k+1) ≥min_∈ L^2(μ_k) _ϕ_μ_k(,_k+1) + τ(_#μ_k). By assumption, ν_k+1∈. Thus, applying <Ref>, there exists _ϕ_ν_k+1^ν_k+1, μ_k+1 such that (_ϕ_ν_k+1^ν_k+1, μ_k+1)_#ν_k+1 = μ_k+1 and _ϕ_ν_k+1^ν_k+1, μ_k+1 = _,_#ν_k+1 = μ_k+1 _ϕ_ν_k+1(,𝕀), and thus _ϕ_ν_k+1(_ϕ_ν_k+1^ν_k+1, μ_k+1, 𝕀) = _ϕ(μ_k+1, ν_k+1). By contradiction, we suppose that min_∈ L^2(ν_k+1) _ϕ_ν_k+1(,𝕀) + τ(_#ν_k+1) > _ϕ_μ_k(_k+1, _k+1) + τ((_k+1)_#μ_k). On one hand, we have (_ϕ_ν_k+1^ν_k+1, μ_k+1∘_k+1)_#μ_k = (_ϕ_ν_k+1^ν_k+1, μ_k+1)_#ν_k+1 = μ_k+1, and therefore ((_ϕ_ν_k+1^ν_k+1, μ_k+1∘_k+1)_#μ_k) = (μ_k+1) = ((_k+1)_#μ_k). On the other hand, (_k+1, _k+1)_#μ_k ∈Π(μ_k+1, ν_k+1), and thus _ϕ_μ_k(_k+1, _k+1) ≥_ϕ(μ_k+1, ν_k+1) = _ϕ_ν_k+1(_ϕ_ν_k+1^ν_k+1, μ_k+1, 𝕀). Thus, min_∈ L^2(ν_k+1) _ϕ_ν_k+1(,𝕀) + τ(_#ν_k+1) > _ϕ_μ_k(_k+1, _k+1) + τ((_k+1)_#μ_k) ≥_ϕ_ν_k+1(_ϕ_ν_k+1^ν_k+1, μ_k+1,𝕀) + τ((_ϕ_ν_k+1^ν_k+1, μ_k+1)_#ν_k+1). But _ϕ_ν_k+1^ν_k+1, μ_k+1∈ L^2(ν_k+1), so this is a contradiction. So, we can conclude that the two schemes are equivalent, and moreover, _k+1 = _ϕ_ν_k+1^ν_k+1, μ_k+1∘_k+1. §.§ Proof of <Ref> Let ψ() = τ(⟨(μ_k),-𝕀⟩_L^2(μ_k) + (_#μ_k)). Since _μ_k is convex, ψ is convex, and we can apply the three-point inequality (<Ref>) and for all ∈ L^2(μ_k), τ((_#μ_k) + ⟨(μ_k), -𝕀⟩_L^2(μ_k)) + _ϕ_μ_k(,𝕀) ≥τ((μ_k+1) + ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k)) + _ϕ_μ_k(_k+1, 𝕀) + _ϕ_μ_k(,_k+1), which is equivalent to (μ_k+1) + ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ(_k+1, 𝕀) ≤(_#μ_k) + ⟨(μ_k), -𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(,𝕀) - 1/τ_ϕ_μ_k(,_k+1). Since _μ_k is -smooth relatively to ϕ_μ_k along t↦((1-t)𝕀 + t _k+1)_#μ_k, and τ≤1/, we also have (μ_k+1) ≤(μ_k) + ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k) + _ϕ_μ_k(_k+1,𝕀) ≤(μ_k) + ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(_k+1,𝕀). Thus, applying first the smoothness of and then the three-point inequality, we get for all ∈ L^2(μ_k), (μ_k+1) + (μ_k+1) ≤(μ_k+1) + (μ_k) + ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(_k+1,𝕀) ≤(_#μ_k) + (μ_k) + ⟨(μ_k), -𝕀⟩_L^2(μ_k) + 1/τ_ϕ_μ_k(,𝕀) - 1/τ_ϕ_μ_k(,_k+1). Now, let ν∈_2(^d) and _ϕ_μ_k^μ_k,ν = _,_#μ_k=ν _ϕ_μ_k(,𝕀), and suppose that _μ_k is -convex relative to ϕ_μ_k along t↦((1-t)𝕀 + t T_ϕ_μ_k^μ_k,ν)_#μ_k. Thus, __μ_k(_ϕ_μ_k^μ_k,ν,𝕀)≥_ϕ_μ_k(_ϕ_μ_k^μ_k,ν,𝕀) (ν) - _ϕ_μ_k(_ϕ_μ_k^μ_k,ν, 𝕀) ≥(μ_k) + ⟨(μ_k), _ϕ_μ_k^μ_k,ν - 𝕀⟩_L^2(μ_k). 
Plugging this into (<ref>), we get (μ_k+1) ≤(ν) + (ν) - _ϕ_μ_k(_ϕ_μ_k^μ_k,ν, 𝕀) + 1/τ_ϕ_μ_k(_ϕ_μ_k^μ_k,ν, 𝕀) - 1/τ_ϕ_μ_k(_ϕ_μ_k^μ_k,ν, _k+1). Now, note that _ϕ_μ_k(_ϕ_μ_k^μ_k,ν, 𝕀) = _ϕ(ν, μ_k) and by <Ref>, _ϕ_μ_k(_ϕ_μ_k^μ_k,ν, _k+1) ≥_ϕ(ν, μ_k+1). Thus, (μ_k+1) - (ν) ≤(1/τ - ) _ϕ(ν, μ_k) - 1/τ_ϕ(ν,μ_k+1). Using =𝕀 in (<ref>), we observe that (μ_k) ≤(μ_ℓ) for all ℓ≤ k. Moreover, _ϕ(ν,μ_k)≥ 0. Thus, applying <Ref> with f=, c=(ν) and g=_ϕ(ν,·), we obtain ∀ k≥ 1, (μ_k) - (ν) ≤/(1/τ/1/τ-)^k - 1_ϕ(ν,μ_0) ≤1/τ-/k_ϕ(ν,μ_0). §.§ Proof of <Ref> First, ∇ V^* is bijective. Thus, we only need to show that h = ∇ V - τ∇ U is injective. Take u = V - τ U. Since U is -smooth relative to V, we have for all x,y, U(x) ≤ U(y) + ⟨∇ U(y), x-y⟩ + _V(x,y), which is equivalent with -U(y) ≤ -U(x) + ⟨∇ U(y), x-y⟩ + _V(x,y). Moreover, by definition of _V, V(y) = V(x)-⟨∇ V(y),x-y⟩ - _V(x,y). Summing the two inequalities, we get V(y) - τ U(y) ≤ V(x) - ⟨∇ V(y), x-y⟩ - _V(x,y) - τ U(x) + τ⟨∇ U(y),x-y⟩ + τ_V(x,y) = V(x)-τ U(x) - ⟨∇ V(y) - τ∇ U(y), x-y⟩ - (1-τ) _V(x,y). This is equivalent with u(y) ≤ u(x) - ⟨∇ u(y), x-y⟩ - (1-τ) _V(x,y), and thus with u being (1-τ)-convex relative to V (for τ≤ 1). For τ<1, it is equivalent with u-(1-τ)V convex, i.e. ⟨∇ u(x)-∇ u(y), x-y⟩≥ (1-τ) ⟨∇ V(x)-∇ V(y), x-y⟩≥ 0. Since V is strictly convex, ∇ u is injective. Moreover, |∇| = |(∇^2 V^* ∘ (∇ V-τ∇ U)) ∇^2 u| > 0 because on one hand u is (1-τ)-convex relative to V which is strictly convex, and on the other hand, V^* is also strictly convex. To conclude, applying <cit.>, _#μ is absolutely continuous with respect to the Lebesgue measure. §.§ Proof of <Ref> On one hand, is 1-smooth relative to , thus we only need to show that μ↦∫ Vdμ is smooth relative to . Using <Ref>, we need to show that d^2/dt^2(μ_t) = 1/2∫ (_k+1 - 𝕀)^T ∇^2 V (_k+1 - 𝕀) dμ_k ≤d^2/dt^2(μ_t). Recall from (<ref>) that _k+1(x) =((1-τ) Σ_k+1Σ_k^-1 + τΣ_k+1Σ^-1) x + cst, thus ∇_k+1 is a constant. Using the computations of <cit.>, d^2/dt^2(μ_t) = ⟨ [∇_t]^-2, ∇_k+1 - I_d⟩. Assuming (1-τ) Σ_k+1Σ_k^-1 + τΣ_k+1Σ^-1≽ 0, _k+1 is the gradient of a convex function and μ_t is a Wasserstein geodesic. Thus, by <cit.>, d^2/dt^2(μ_t) ≥1/Σ_μ_t_op_k+1-𝕀^2_L^2(μ_k). Moreover, by <cit.>, μ↦Σ_μ_op is convex along generalized geodesics, and thus Σ_μ_t≼λ I_d and Σ_μ_t_op≤λ <cit.>. Hence, noting σ_max(M) the largest eigenvalue of some matrix M, d^2/dt^2(μ_t) ≥1/λ_k+1 - 𝕀_L^2(μ_k)^2 ≥1/λσ_max(∇^2V)∫ (_k+1-𝕀)^T∇^2V(T_k+1-𝕀)dμ_k = 2/λσ_max(∇^2 V)d^2/dt^2(μ_t). From this inequality, we deduce that λσ_max(∇^2 V)/2__μ_k(_k+1, 𝕀) = λσ_max(∇^2 V)/2( (μ_k+1) - (μ_k) - ⟨(μ_k), _k+1-𝕀⟩_L^2(μ_k)) = λσ_max(∇^2 V)/2∫ (1-t) d^2/dt^2(μ_t) dt ≥∫d^2/dt^2(μ_t) (1-t) dt = __μ_k(_k+1, 𝕀). So, __μ_k(_k+1,𝕀) = __μ_k(_k+1,𝕀) + _ℋ_μ_k(_k+1, 𝕀) ≤(1 + λσ_max(∇^2 V)/2) _ℋ_μ_k(_k+1, 𝕀). § ADDITIONAL RESULTS §.§ Three-point identity and inequality In this Section, we derive results which are useful to show the convergence of mirror descent or preconditioned schemes. Namely, we first derive the three-point identity which we use to show the convergence of the preconditioned scheme in <Ref> as well as the three-point inequality, which we use for the convergence of the mirror descent scheme in <Ref>. Let ϕ:L^2(μ)→ℝ be Gâteaux differentiable. For all ,,∈ L^2(μ), we have _ϕ(,) = _ϕ(, ) + _ϕ(, ) + ⟨∇ϕ(), -⟩_L^2(μ) - ⟨∇ϕ(),-⟩_L^2(μ). 
Let ,,∈ L^2(μ), then using the linearity of the Gâteaux differential, _ϕ(,) - _ϕ(,) - _ϕ(,) = ϕ() - ϕ() - ⟨∇ϕ(), -⟩_L^2(μ) -ϕ() + ϕ() + ⟨∇ϕ(), -⟩_L^2(μ) -ϕ() + ϕ() + ⟨∇ϕ(), -⟩_L^2(μ) = -⟨∇ϕ(), -⟩_L^2(μ) + ⟨∇ϕ(), -⟩_L^2(μ) + ⟨∇ϕ(), -⟩_L^2(μ) = ⟨∇ϕ(), -⟩_L^2(μ) - ⟨∇ϕ(), -⟩_L^2(μ) Let μ∈_2(^d), _0∈ L^2(μ) and ϕ_μ:L^2(μ)→ convex, and Gâteaux differentiable. Let ψ:L^2(μ)→ be convex, proper and lower-semicontinuous. Assume there exists ^*=_∈ L^2(μ) _ϕ_μ(,_0) + ψ(). Then, for all ∈ L^2(μ), ψ() + _ϕ_μ(,_0) ≥ψ(^*) + _ϕ_μ(^*, _0) + _ϕ_μ(,^*). Denote () = _ϕ_μ(,_0) + ψ(). Let ^* = _∈ L^2(μ) (), hence 0∈∂(^*). Since ϕ and ψ are proper, convex and lower-semicontinuous, and ↦_ϕ_μ(,_0) is continuous (since ϕ_μ is continuous), thus by <cit.>, ∂(^*) = ∂ψ(^*) + ∂_ϕ_μ(·, _0)(^*). Moreover, since ϕ_μ is differentiable, ∂_ϕ_μ(·,_0)(^*) = {∇__ϕ_μ(^*,_0)} = {∇ϕ_μ(^*)-∇ϕ_μ(_0)}, and thus ∇ϕ_μ(_0)-∇ϕ_μ(^*)∈∂ψ(^*) Finally, by definition of subgradients and by applying <Ref>, we get for all ∈ L^2(μ), ψ() ≥ψ(^*) - (⟨∇ϕ_μ(^*), -^*⟩_L^2(μ) - ⟨∇ϕ_μ(_0), -^*⟩_L^2(μ)) = ψ(^*) - _ϕ_μ(,_0) + _ϕ_μ(,^*) + _ϕ_μ(^*,_0). Actually we can restrict ψ to be convex along ((1-t)^* + t)_#μ. In that case, _ψ(,^*) = ψ() - ψ(^*) - ⟨φ, -^*⟩_L^2(μ)≥ 0 for φ∈∂ψ(^*) (by <Ref>) and we still have ∂ψ(^*) + ∂_ϕ_μ(·, _0)(^*) ⊂∂(^*) (see <cit.>) so that we can conclude. §.§ Convergence lemma We first provide a Lemma which follows from <cit.>, and which is useful for the proofs of <Ref>. Let f:X→ℝ, g:X→ℝ_+ and (x_k)_k∈ℕ a sequence in X such that for all k≥ 1, f(x_k) ≤ f(x_k-1). Assume that there exists > ≥ 0, c∈ℝ such that for all k≥ 0, f(x_k+1)-c≤ (-)g(x_k)- g(x_k+1), then ∀ k≥ 1, f(x_k) - c ≤/(/ - )^k - 1 g(x_0) ≤-/k g(x_0). First, observe the f(x_k)≤ f(x_ℓ) for all ℓ≤ k. Thus, for all k≥ 1, ∑_ℓ=1^k (/-)^ℓ·(f(x_k)-c) ≤∑_ℓ=1^k (/-)^ℓ(f(x_ℓ)-c)) ≤∑_ℓ=1^k (/-)^ℓ((-) g(x_ℓ-1) - β g(x_ℓ)) = ∑_ℓ=0^k-1(/-)^ℓ g(x_ℓ) - ∑_ℓ=1^k (/-)^ℓ g(x_ℓ) = g(x_0) - (/-)^k g(x_k) ≤ g(x_0) since g≥ 0. Now, note that /∑_ℓ=1^k (/-)^ℓ = /(/-)^k - 1 = /( 1 + /-)^k -1≤-/k since (1+/-)^k ≥ 1 + k/- (by convexity on _+ of x↦ (1+x)^k). Thus, f(x_k) - c ≤/∑_ℓ=1^k (/-)^ℓ g(x_0) = /(/ - )^k - 1 g(x_0) ≤ - /k g(x_0). §.§ Some properties of Bregman divergences We provide in this Section additional results on the Bregman divergences introduced in <Ref>. First, we focus on ϕ_μ()=∫ V∘ dμ. The following Lemma is akin to <cit.> which shows it only for OT maps. Let V:^d→ convex and ϕ_μ()=∫ V∘ dμ. Then, ∀,∈ L^2(μ), _ϕ_μ(, ) = ∫_V((x), (x)) dμ(x). Let ,∈ L^2(μ), then remembering that ∇__2(μ) = ∇ V, we have _ϕ_μ(, ) = ϕ_μ() - ϕ_μ() - ⟨∇ V∘, -⟩_L^2(μ) = ∫ V∘ - V∘ - ⟨∇ V∘, -⟩ dμ = ∫_V((x), (x)) dμ(x). Next, we focus on ϕ_μ() = 1/2∬ W((x)-(x')) μ(x)μ(x'), and we generalize the result from <cit.>. Let W:^d→ even (W(x)=W(-x)), convex and differentiable. Let ϕ_μ() = 1/2∬ W((x)-(x')) dμ(x)dμ(x'). Then, ∀,∈ L^2(μ), _ϕ_μ(,) = 1/2∬_W((x)-(x'), (x)-(x')) dμ(x)dμ(x'). Let ,∈ L^2(μ), remember that ∇__2𝒲(μ) = ∇ W ⋆μ, and thus ∇__2𝒲(_#μ)∘ = (∇ W ⋆_#μ)∘. Thus, _ϕ_μ(, ) = ϕ_μ() - ϕ_μ() - ⟨ (∇ W ⋆_#μ)∘, -⟩_L^2(μ) = 1/2∬ W((x)-(x')) dμ(x)dμ(x') - 1/2∬ W((x)-(x')) dμ(x)dμ(x') - ∫⟨ (∇ W ⋆_#μ)((x)), (x) - (x)⟩ dμ(x). Then, note that ∇ W(-x) = - ∇ W(x) and thus the last term can be written as: ∫⟨ (∇ W ⋆_#μ)((x)), (x) - (x)⟩ dμ(x) = ∬⟨∇ W( (x)-(x')), (x)-(x)⟩ dμ(x)dμ(x') = 1/2∬⟨∇ W( (x)-(x')), (x)-(x)⟩ dμ(x)dμ(x') + 1/2⟨∇ W((x')-(x)), (y)-(y)⟩ dμ(x)dμ(x') = 1/2∬⟨∇ W( (x)-(x')), (x)-(x)⟩ dμ(x)dμ(x') - 1/2⟨∇ W((x)-(x')), (x')-(x')⟩ dμ(x)dμ(x') = 1/2∬⟨∇ W((x)-(x')), (x)-(x')-((x)-(x'))⟩ dμ(x)dμ(x'). 
Finally, we get _ϕ_μ(,) = 1/2∬(W((x)-(x')) - W((x)-(x')) -⟨∇ W((x)-(x')), (x)-(x')-((x)-(x'))⟩) dμ(x)dμ(x') = 1/2∬_W((x)-(x'), (x)-(x')) dμ(x)dμ(x'). Now, we make the connection with the mirror map used by <cit.> and derive the related Bregman divergence. Let ϕ_μ() = 1/2_2^2(_#μ, ρ) for μ,ρ∈. Then, for all ,∈ L^2(μ), such that _#μ, _#μ∈, _ϕ_μ(, ) = 1/2__#μ^ρ∘ - __#μ^ρ∘ - (-)_L^2(μ)^2 + ⟨__#μ^ρ∘ - , __#μ^ρ∘ - __#μ^ρ∘⟩_L^2(μ), where __#μ^ρ denotes the OT map between _#μ and ρ. Let ,∈ L^2(μ) such that _#μ, _#μ∈. Remember that ∇__2_2^2(·, ρ)=𝕀 - _·^ρ, then _ϕ_μ(,) = ϕ_μ() - ϕ_μ() - ⟨∇__2ϕ(_#μ)∘, -⟩_L^2(μ) = 1/2_2^2(_#μ, ρ) - 1/2_2^2(_#μ, ρ) - ⟨ (𝕀 - __#μ^ρ)∘, -⟩_L^2(μ) = 1/2__#μ^ρ∘ - _L^2(μ)^2 - 1/2__#μ^ρ∘ - _L^2(μ)^2 + ⟨__#μ^ρ∘ - , -⟩_L^2(μ) = 1/2__#μ^ρ∘ - _L^2(μ)^2 - 1/2__#μ^ρ∘ - _L^2(μ)^2 + ⟨__#μ^ρ∘ - , -__#μ^ρ∘⟩_L^2(μ) + ⟨__#μ^ρ∘ - , __#μ^ρ∘ - ⟩_L^2(μ) = 1/2__#μ^ρ∘ - _L^2(μ)^2 + 1/2__#μ^ρ∘ - _L^2(μ)^2 - ⟨__#μ^ρ∘ - , __#μ^ρ∘ - ⟩_L^2(μ) = 1/2__#μ^ρ∘ - _L^2(μ)^2 + 1/2__#μ^ρ∘ - _L^2(μ)^2 - ⟨__#μ^ρ∘ - , __#μ^ρ∘ - __#μ^ρ∘⟩_L^2(μ) - ⟨__#μ^ρ∘ - , __#μ^ρ∘ - ⟩_L^2(μ) = 1/2__#μ^ρ∘ - __#μ^ρ∘ - (-)_L^2(μ)^2 + ⟨__#μ^ρ∘ - , __#μ^ρ∘ - __#μ^ρ∘⟩_L^2(μ). preprint § NEURIPS PAPER CHECKLIST * Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: Justification: In the abstract and introduction, we claim that we adapt mirror descent and the preconditioned gradient descent to the Wasserstein space, and that we provide guarantees of convergence. These results are presented in <Ref> and <Ref> for respectively mirror descent and preconditioned gradient descent. We also claim that we illustrate advantages of such schemes on ill-conditioned optimization tasks and to minimize discrepancies in order to align single-cells, which we do in <Ref>. Guidelines: * The answer NA means that the abstract and introduction do not include the claims made in the paper. * The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. * The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. * It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. * Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: Justification: On the two theoretical sections (<Ref> and <Ref>), we state all the hypotheses to obtain our convergence results. Then, in <Ref>, we discuss some couples of target functional and Bregman potential/preconditioner ϕ for which we can verify these assumptions. However, in general, verifying these assumptions is a hard problem, as stated in the Conclusion, and we leave for future works these investigations. Guidelines: * The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. * The authors are encouraged to create a separate "Limitations" section in their paper. * The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). 
The authors should reflect on how these assumptions might be violated in practice and what the implications would be. * The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. * The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. * The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. * If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. * While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. * Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: Justification: The full set of assumptions are provided for each theoretical result, along with a complete proof in Appendix. Guidelines: * The answer NA means that the paper does not include theoretical results. * All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. * All assumptions should be clearly stated or referenced in the statement of any theorems. * The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. * Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. * Theorems and Lemmas that the proof relies upon should be properly referenced. * Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: Justification: The information to reproduce the main experimental results are described in <Ref> and <Ref>. We also plan to release the code. Guidelines: * The answer NA means that the paper does not include experiments. * If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. * If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. * Depending on the contribution, reproducibility can be accomplished in various ways. 
For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. * While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example * If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. * If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. * If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). * We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. * Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: Justification: We provide a part of the code to reproduce the experiment on Gaussians and on interaction functionals in supplementary materials. Guidelines: * The answer NA means that paper does not include experiments requiring code. * Please see the NeurIPS code and data submission guidelines (<https://nips.cc/public/guides/CodeSubmissionPolicy>) for more details. * While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). * The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (<https://nips.cc/public/guides/CodeSubmissionPolicy>) for more details. * The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. * The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. * At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). * Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. 
* Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: Justification: All the training and test details for the experiments are provided in <Ref>. Guidelines: * The answer NA means that the paper does not include experiments. * The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. * The full details can be provided either with the code, in appendix, or as supplemental material. * Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: Justification: The experiment on Gaussians in <Ref> is run over 20 randomly sampled objective covariances, and the results are plotted with the standard deviation. For the single-cell experiment of <Ref>, we reported the results with 3 different initialization for each treatment, along their mean. For the mirror interaction experiment, the results are deterministic. Guidelines: * The answer NA means that the paper does not include experiments. * The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. * The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). * The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) * The assumptions made should be given (e.g., Normally distributed errors). * It should be clear whether the error bar is the standard deviation or the standard error of the mean. * It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. * For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). * If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. * Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: Justification: We report in <Ref> the computer resources and approximate runtime for each experiment. Namely, the Gaussian and mirror interaction experiments were done on a personal laptop on CPU, and took only few hours to run. The single-cell experiment was performed on GPU as the data are of higher dimension with about 4000 samples, and took a few hundred of hours of computational time. Guidelines: * The answer NA means that the paper does not include experiments. * The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. 
* The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. * The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). * Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics <https://neurips.cc/public/EthicsGuidelines>? Answer: Justification: The research conducted in this paper is conform with the NeurIPS Code of Ethics. Guidelines: * The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. * If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. * The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). * Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: Justification: This work is mostly theoretical and not tied to particular applications. Guidelines: * The answer NA means that there is no societal impact of the work performed. * If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. * Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. * The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. * The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. * If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). * Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: Justification: Guidelines: * The answer NA means that the paper poses no such risks. 
* Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. * Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. * We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. * Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: Justification: The datasets used are properly cited. Guidelines: * The answer NA means that the paper does not use existing assets. * The authors should cite the original paper that produced the code package or dataset. * The authors should state which version of the asset is used and, if possible, include a URL. * The name of the license (e.g., CC-BY 4.0) should be included for each asset. * For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. * If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, <paperswithcode.com/datasets> has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. * For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. * If this information is not available online, the authors are encouraged to reach out to the asset's creators. * New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: Justification: Guidelines: * The answer NA means that the paper does not release new assets. * Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. * The paper should discuss whether and how consent was obtained from people whose asset is used. * At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. * Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: Justification: Guidelines: * The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. * Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. * According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. 
* Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: Justification: Guidelines: * The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. * Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. * We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. * For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
http://arxiv.org/abs/2406.08038v1
20240612093932
Interference Analysis for Coexistence of UAVs and Civil Aircrafts Based on Automatic Dependent Surveillance-Broadcast
[ "Yiyang Liao", "Ziye Jia", "Chao Dong", "Lei Zhang", "Qihui Wu", "Huiling Hu", "Zhu Han" ]
eess.SP
[ "eess.SP" ]
UTF8gbsn Interference Analysis for Coexistence of UAVs and Civil Aircrafts Based on Automatic Dependent Surveillance-Broadcast Yiyang Liao, Ziye Jia, Member, IEEE, Chao Dong, Member, IEEE, Lei Zhang, Qihui Wu, Fellow, IEEE, Huiling Hu, and Zhu Han, Fellow, IEEE Yiyang Liao, Chao Dong, Lei Zhang and Qihui Wu are with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China (e-mail: liaoyiyang@nuaa.edu.cn; dch@nuaa.edu.cn; Zhang_lei@nuaa.edu.cn; wuqihui@nuaa.edu.cn). Ziye Jia is with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China, and also with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, Jiangsu, 211111, China (e-mail: jiaziye@nuaa.edu.cn). Huiling Hu is with the Middle-south Regional Air Traffic Management Bureau of CAAC (e-mail: hhl@atmb.org). Zhu Han is with the Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77004 USA, and also with the Department of Computer Science and Engineering, Kyung Hee University, Seoul 446-701, South Korea (e-mail: hanzhu22@gmail.com). Corresponding author: Ziye Jia and Chao Dong. June 17, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== empty empty § ABSTRACT Due to the advantages of high mobility and easy deployment, unmanned aerial vehicles (UAVs) are widely applied in both military and civilian fields. In order to strengthen the flight surveillance of UAVs and guarantee the airspace safety, UAVs can be equipped with the automatic dependent surveillance-broadcast (ADS-B) system, which periodically sends flight information to other aircrafts and ground stations (GSs). However, due to the limited resource of channel capacity, UAVs equipped with ADS-B results in the interference between UAVs and civil aircrafts (CAs), which further impacts the accuracy of received information at GSs. In detail, the channel capacity is mainly affected by the density of aircrafts and the transmitting power of ADS-B. Hence, based on the three-dimensional poisson point process, this work leverages the stochastic geometry theory to build a model of the coexistence of UAVs and CAs and analyze the interference performance of ADS-B monitoring system. 
From simulation results, we reveal the effects of transmitting power, density, threshold and pathloss on the performance of the ADS-B monitoring system. Besides, we provide the suggested transmitting power and density for the safe coexistence of UAVs and CAs. UAV, ADS-B, civil aviation, interference analysis, poisson point process, stochastic geometry. § INTRODUCTION As low-altitude aerial technology advances, unmanned aerial vehicles (UAVs) play increasingly significant roles across various domains such as collaborative reconnaissance, precision agriculture, disaster rescue, and environmental monitoring <cit.>. Besides, the growing multitude applications necessitate abundant UAVs, raising concerns about the flight safety. Furthermore, the absence of on-board pilots poses potential risks to the safe operation of civil aircrafts (CAs) <cit.>. Hence, enhancing the airspace management for UAVs and guaranteeing the flight safety emerge as crucial imperatives <cit.>. In order to obtain exact aerial location information, UAVs can be equipped with automatic dependent surveillance-broadcast (ADS-B) systems<cit.>. Working at 1090MHz, ADS-B is beneficial to both the flight safety and air traffic management <cit.>. In detail, an aircraft equipped with ADS-B can automatically broadcast its flight information to nearby aircrafts and ground stations (GSs)<cit.>. However, due to the limited channel capacity, if multiple UAVs utilize the same channel, the interference between UAVs and CAs cannot be neglected. Consequently, the accuracy of received information at GSs is deteriorated, which further impairs the performance of the monitoring system. In particular, the density of UAVs and the transmitting power of ADS-B are key factors on the performance of the monitoring system. As a result, this work aims to analyze the interference for the coexistence of UAVs and CAs based on ADS-B with respect to the transmitting power and the density of UAVs. There exist a couple of related works about UAVs equipped with ADS-B. For instance, <cit.> points out that the performance of ADS-B system is affected by the density of UAVs, ADS-B transmitting power and the number of GSs. <cit.> demonstrates that the performance of the ADS-B system is affected by channel characteristics and minimum updating interval required by aircrafts. Besides, there are a few works utilizing stochastic geometry (SG) and poisson point process (PPP) to analyze UAVs networks, for example, <cit.> states that UAV wireless networks have natural spatial random characteristics and the channel has fading and shadowing characteristics. Therefore, SG can be utilized to analyze the performance of UAV wireless networks. <cit.> develops a tractable framework for signal-to-interference-plus-noise ratio (SINR) analysis in downlink heterogeneous cellular networks with flexible cell association policies. <cit.> compares the model of Rician channel, Rayleigh channel and Nakagami-m channel in wireless network, and analyzes the coverage rate of UAV assisted cellular network. The authors in <cit.> utilize SG to analyze the response delay and the successful transmission probability for a single link and a group of links based on the three-dimensional (3D) distribution of UAV swarms. However, the above works equip UAVs with ADS-B to enhance the flight safety, but they seldom utilize SG to examine the system performance considering the interference based on ADS-B. 
Besides, to construct the model, we distinguish the interference taking into account the coexistence of UAVs and CAs, which is a challenging problem. In short, the contributions of this paper are summarized as follows. * We propose the model of the coexistence of UAVs and CAs based on ADS-B techniques. * Based on SG theory, we employ 3D-PPP to reveal the interference performance and deduce the analytic form of the received probability. * Extensive simulations are conducted to verify the effects of transmitting power, density, threshold and pathloss on the performance of the ADS-B monitoring system. § SYSTEM MODEL §.§ Network Model Fig. <ref> depicts the system model of the coexistence of UAVs and CAs. The fixed-wing UAVs, rotary-wing UAVs and CAs are randomly distributed in space V. All aircrafts transmit flight information to the GS via ADS-B. The fixed-wing UAVs and the rotary-wing UAVs follow a 3D-PPP <cit.> with density λ_1 in the finite space V, and the number of UAVs is N_U=λ_1V. The CAs follow a 3D-PPP with density λ_2 in the finite space V, and the number of CAs is N_C=λ_2V. Denote the UAV set as 𝒰={U_1,...U_i,...,U_N_U}, and the CA set as 𝒞={C_1,...C_j,...,C_N_C}. It is assumed that there is only one GS in space V, and all ADS-B packets from UAVs and CAs are received by the GS. The GS is located in the center of the ground with the coordinate of O(0, 0, 0). In detail, the coordinate of the i-th UAV in set 𝒰 is (x_U_i, y_U_i, z_U_i) and the coordinate of the j-th CA in set 𝒞 is (x_C_j, y_C_j, z_C_j). The X-axis coordinates for all aircrafts range within [-L_x, L_x], the Y-axis coordinates range within [-L_y, L_y], and the Z-axis coordinates are [0, L_z]. The Euclidean distance between UAV U_i and the GS is d_U_i=√(x^2_U_i+y^2_U_i+z^2_U_i), and the Euclidean distance between CA C_j and the GS is d_C_j=√(x^2_C_j+y^2_C_j+z^2_C_j). The transmitting powers of ADS-B from UAV U_i and CA C_j are set as P_U and P_C, respectively. §.§ Channel Model G_U_t and G_C_t respectively represent the transmitter gain of UAVs and CAs. G_r denotes the receiver gain at the GS. Therefore, the total air-ground (AG) channel gain at the GS from UAVs is G_U=G_U_tG_r, and the AG channel gain between CAs and the GS is G_C=G_C_tG_r. The pathlosses from the GS to UAVs and CAs are respectively proportional to d_U_i^-α and d_C_j^-α, where d_U_i and d_C_j represent the distance between the aircrafts and GS. α indicates the pathloss index. h_U_i represents the gain of the small scale fading channel between UAV U_i and the GS. h_C_j represents the gain of the small scale fading channel between CA C_j and the GS. h_U_i and h_C_j are two random variables following an exponential distribution with mean value of 1. Gaussian white noise N is added to the model, i.e., N=n_0× B, where n_0 is the noise power density and B is the system bandwidth. We leverage γ to represent the SINR. Then, the SINR γ_U^m of the desired signal sent by the m-th UAV U_m in set 𝒰 is γ_U^m=P_UG_Uh_U_md_U_m^-α/(N+P_UI_𝒰\{U_m}+P_CI_𝒞), in which I_𝒰\{U_m}=∑_U_i∈𝒰\{U_m}G_Uh_U_id_U_i^-α, and I_𝒞=∑_C_j∈𝒞G_Ch_C_jd_C_j^-α. § PERFORMANCE ANALYSIS It is supposed that all the aircrafts send the flight information via ADS-B to the GS within space V. In particular, UAVs follow the nearest neighbor association strategy <cit.>, i.e., no other GS outside space V is closer to the target UAV, so that P{d_U_i > R}= exp(-λ_1V)= exp(-4/3πλ_1d^3_U_i), where d_U_i≥0, and R is the radius of the 3D Euclidean space ℝ^3.
Therefore, the cumulative distribution function (CDF) of the distance d_U_i from UAV U_i to the GS is F_U(d_U_i)= P{d_U_i≤ R}=1- exp(-4/3πλ_1d^3_U_i), and the probability density function (PDF) of d_U_i is f_U(d_U_i)=dF_U(d_U_i)/d(d_U_i)=4πλ_1d^2_U_i exp(-4/3πλ_1d^3_U_i). The successful received probability P_suc at the GS is introduced to measure the transmission quality. If the distance between the UAV and GS is d, and γ is greater than the received threshold θ, the successful received probability of the GS is denoted as P_suc=𝔼[P(γ≥θ|d)]. Since γ is also a function of d, the P_suc of the m-th UAV in set 𝒰 is further expressed as P_suc =∫_0^∞P(γ_U^i≥θ|d_U_m)f_U(d_U_m) d(d_U_m) =∫_0^∞4πλ_1d^2_U_mP(γ_U^i≥θ|d_U_m) exp(-4/3πλ_1d^3_U_m) d(d_U_m). It is assumed that the average gain of small scale fading channel is a random variable following the Gamma distribution with mean value of 1<cit.>, which is depicted as f(h)=β^β/Γ(β ) h^β-1e^-β h. When β equals 1, the channel is considered as Rayleigh fading. h follows an exponential distribution with mean value of 1. The PDF of h is f(x)=e^-x, i.e., h_U_i∼exp(1) and h_C_j∼exp(1). Hence, P(γ_U^i≥θ|d_U_m) in (<ref>) is further represented as P(γ_U^i≥θ|d_U_m) =P(h_U_m≥θd^α_U_m(N+P_UI_𝒰\{U_m}+P_CI_𝒞) /P_UG_U) = exp(-θd^α_U_m(N+P_UI_𝒰\{U_m}+P_CI_𝒞)/P_UG_U) = exp(-θd^α_U_mN/P_UG_U)𝕃_I_𝒰\{U_m}(θd^α_U_m/G_U)𝕃_I_𝒞(θd^α_U_m/G_U×P_C/P_U). Let θd^α_U_m/G_U=s_1, and we have 𝕃_I_𝒰\{U_m}(θd^α_U_m/G_U)=𝕃_I_𝒰\{U_m}(s_1)=𝔼[e^-s_1(I_𝒰\{U_m})], which is the Laplace transform of I_𝒰\{U_m}, and is further derived as 𝕃_I_𝒰\{U_m}(s_1) =𝔼[ exp(-s_1∑_U_i∈𝒰\{U_m}G_Uh_U_id_U_i^-α)] (a)=𝔼[∏_U_i∈𝒰\{U_m}1/1+s_1G_Ud_U_i^-α] (b)= exp[-λ_1∫_V(1-1/1+s_1G_Ud_U_i^-α)d(d_U_i)] (c)= exp[-λ_1∫_-L_x^L_x∫_-L_y^L_y∫_0^L_z1- 1/1+s_1G_Ud_U_i^-αdxdydz] (d)= exp(-λ_1H_1), where (a) is obtained by the moment generating function, (b) follows the probability generating function (PGFL) of the PPP <cit.>, d_U_i in (c) can be further expressed as √(x_U_i^2+y_U_i^2+z_U_i^2), and H_1 in (d) represents the triple integral in (c). Moreover, let θd^α_U_m/G_U×P_C/P_U=s_2, and we have 𝕃_I_𝒞(θd^α_U_m/G_U×P_C/P_U)=𝕃_I_𝒞(s_2)=𝔼[e^-s_2I_𝒞], following the Laplace transformation of I_𝒞. Therefore, (<ref>) is further simplified as 𝕃_I_𝒞(s_2) =𝔼[ exp(-s_2∑_C_j∈𝒞G_Ch_C_jd_C_j^-α)] =𝔼[∏_C_j∈𝒞1/1+s_2G_Cd_C_j^-α] (e)= exp(-λ_2H_2), where H_2 in (e) symbolizes the triple integral. By substituting (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), P_suc is calculated as P_suc =∫_0^∞4πλ_1d^2_U_m exp(-θd^α_U_mN/P_UG_U-λ_1H_1 -λ_2H_2-4/3πλ_1d^3_U_m)d(d_U_m). The influences of H_1 and H_2 on P_suc have different weights, which are proportional to λ_1 and λ_2, respectively. According to (<ref>), H_2 is mainly affected by the ratio of P_C to P_U. Hence, following (<ref>), P_suc is mainly related with λ_1, P_U, P_C, θ and α. Analyzing the performance on UAVs or CAs is a similar reasoning process, i.e., the SINR and successful received probability have similar analytic forms. The detailed difference between UAVs and CAs lies in the altitude, density and transmitting power of ADS-B. Hovover, in this paper, our research mainly focuses on the analysis of the performance of GS on UAVs, and CAs are regarded as a interfering factor. Hence, we only provide the deductions and simulations concentrated on UAVs. Further, we analyze the detailed influences in Section <ref>. 
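As a sanity check on (<ref>), the successful received probability can also be estimated by direct Monte Carlo simulation of the system model above. The sketch below is an illustration rather than the authors' MATLAB code: it assumes that λ_1 and λ_2 may be read as the expected numbers of UAVs and CAs in V, takes the noise density n_0 to be thermal noise at -174 dBm/Hz (a value not stated in the text), and uses illustrative defaults for the remaining parameters; reproducing the curves in the next section would require the exact conventions of TABLE <ref>.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_p_suc(lam_uav=30, lam_ca=15, P_U=16.0, P_C=30.0, alpha=2.0,
                   theta_dB=7.0, trials=20_000):
    """Monte Carlo estimate of the probability that the SINR of a tagged UAV
    at the GS exceeds the threshold, under Rayleigh fading (h ~ Exp(1))."""
    Lx = Ly = 10e3                        # half-widths of the 20 km x 20 km region (m)
    G_U, G_C = 10**2.3, 10**2.0           # 23 dBi and 20 dBi total AG gains (linear)
    noise = 10**(-174 / 10) * 1e-3 * 1e6  # assumed -174 dBm/Hz over B = 1 MHz, in W
    theta = 10**(theta_dB / 10)

    def distances(n, z_lo, z_hi):         # distances of n aircraft to the GS at the origin
        xy = rng.uniform([-Lx, -Ly], [Lx, Ly], size=(n, 2))
        z = rng.uniform(z_lo, z_hi, size=(n, 1))
        return np.linalg.norm(np.hstack([xy, z]), axis=1)

    successes = 0
    for _ in range(trials):
        d_u = distances(rng.poisson(lam_uav), 1e3, 6e3)   # interfering UAVs, 1-6 km altitude
        d_c = distances(rng.poisson(lam_ca), 6e3, 10e3)   # interfering CAs, 6-10 km altitude
        d_m = distances(1, 1e3, 6e3)[0]                   # the tagged (desired) UAV
        h = rng.exponential                               # small-scale fading gains
        signal = P_U * G_U * h() * d_m ** (-alpha)
        interference = (np.sum(P_U * G_U * h(size=d_u.size) * d_u ** (-alpha))
                        + np.sum(P_C * G_C * h(size=d_c.size) * d_c ** (-alpha)))
        successes += signal / (noise + interference) >= theta
    return successes / trials

print(estimate_p_suc())
```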
§ SIMULATION RESULTS AND ANALYSES To further evaluate the detailed performance, MATLAB is employed to simulate the 3D-PPP distribution scenario of the UAVs and CAs, as shown in Fig. <ref>. Space V is set as 20km× 20km×10km. The CAs are randomly distributed at altitudes H_C of [6 km, 10 km], and the UAVs are randomly distributed at altitudes H_U of [1 km, 6 km] due to the limited flight ability. λλ_1 is set within [1, 100], and λλ_2 is fixed as 15. In the civil aviation system, the channel interval within [962 MHz, 1213 MHz] is 1MHz. Since we focus on ADS-B operating at 1090MHz, the channel bandwidth B between the GS and all aircrafts are set as 1MHz<cit.>. Besides, the pathloss index α is set within [2, 5]. Moreover, the total gain G_U on AG channel of UAVs is 23dBi, and the total gain G_C on AG channel of CAs is 20dBi. The detailed parameters in simulations are summarized in TABLE <ref>. Fig. <ref> shows the distance between the GS and UAVs or CAs. In detail, X-axis represents the generated aircrafts, including UAVs and CAs, and Y-axis denotes the distance between the aircrafts and the GS, corresponding to the simulation scenario in Fig. <ref>. The distance of each point is randomly generated according to PPP. It is classified that if d_U_i15km, U_i is considered as a short-range UAV. Otherwise, if d_U_i≥15km, U_i is deemed as a long-range UAV. Fig. <ref> demonstrates the impact of P_U on the received probability under different threshold θ, considering UAVs as targets. P_C, α and λλ_1 are respectively fixed at 30W, 2 and 30. The solid lines represent the performance of short-range UAVs, while the dashed lines demonstrate the performance related to long-range UAVs. Besides, the simulation results coincide with the theoretical results, which validates the accuracy of theoretical analysis. As for the short-range UAVs, with the increment of P_U, the received probability from the desired UAVs at the GS increases. Moreover, as θ grows, the corresponding successful received probability decreases. When θ is 7dB and P_U is 16W, the received probability is 84.77%, where the received probability curve starts to smooth. If θ increases to 11dB, the received probability descends to 68.63%. On the other hand, the dashed lines manifest the performance of the long-range flying UAVs. In particular, P_U has little influence on the received probability during the initial increment. Compared with short-range UAVs, with the increment of the distance, the received probability corresponding to the same θ and P_U declines. When θ is 7dB and P_U is 16W, the received probability is only 49.40%, which decreases by 41.72% on the basis of short-range UAVs. If the threshold escalates, more transmitting power is needed to compensate for the received probability. In conclusion, for short-range UAVs, setting P_U to be greater than 30W is energy-consuming, since the received probability is already approaching flat. Furthermore, in terms of long-range UAVs, the received probability improves significantly when P_U falls between 30W and 70W<cit.>. Additionally, the curve of θ=7dB and θ=10dB show smooth trends when P_U exceeds 65W. To evaluate the density of UAVs, λλ_1 in Fig. <ref> is set within [0, 100]. We explore the impact of λλ_1 on received probability under different threshold θ. P_U, P_C and α are respectively fixed at 11dB, 30W and 2. The simulation results coincide with the theoretical results, which validates the accuracy of theoretical analysis. As the density of UAVs ascends, the received probability descends. 
Supposing λλ_1 is 30, as we set in Fig. <ref>, θ=7dB, θ=10dB, θ=11dB, θ=13dB, θ=14dB correspond to the received probability of 83.68%, 75.05%, 67.69%, 60.61%, 54.86%, respectively. Besides, when θ is higher 11dB and λλ_1 exceeds 60, the GS can no longer support the monitoring of the airspace. Fig. <ref> discusses the impact of pathloss α on the received probability under different threshold θ. P_U, P_C and λλ_1 are respectively fixed at 25W, 30W and 30. The simulation results coincide with the theoretical results, which validates the accuracy of theoretical analysis. Primarily, α has little influence on the received probability during the initial increment. When α exceeds 3, the received probability drops sharply. Considering θ is 7dB and α is 3, the corresponding received probability is 85.1%. When α rises to 4.5, the corresponding received probability declines to 24.1%. If θ increases at this point, the GS can no longer support the monitoring of the airspace. In short, when α is greater 3, the increment cause the received probability to plummet, making the channel performance deteriorated. By increasing P_U or decreasing θ, we can compensate for the effect of the increment of α on the received probability. Fig. <ref> illustrates the impact of P_U and P_C on received probability. We set P_C and P_U as variables, aiming to simultaneously examines the influence of P_U and P_C on the received probability at short-range UAVs. θ, α and λλ_1 are respectively fixed at 7dB, 2 and 30. With the increment of P_U, the received probability increases. On the contrary, as P_C enlarges, the received probability diminishes. Assuming P_U is 24W and P_C is 40W, the corresponding received probability is 75.26%. If P_C stays constant, the corresponding received probability is 70.98% when P_U decreases to 15W. If P_U stays constant, the corresponding received probability is 70.49% when P_C increases to 73W. Minishing P_C improves the received probability of the GS towards UAVs. However, the monitoring performance of the GS towards CAs is undermined. In addition, P_U is unadvisable to be magnified indefinitely, which intensifies the signal interference, impairs the ability of the GS to monitor CAs and wastes energy. Hence, the transmitting power of both types of aircrafts should be balanced according to actual demands. § CONCLUSIONS This work analyzes the interference for the coexistence of UAVs and CAs based on ADS-B. We build a 3D-PPP model and deduces the explicit analytic form of the received probability targeting UAVs by SG theory. When analyzing the signal of a UAV, the interference signals are distinguished between UAVs and CAs. Moreover, based on the AG channel, we reveal the effects of transmitting power, density, threshold and pathloss on the performance of the ADS-B monitoring system via simulations. Additionally, the transmitting power of UAVs and CAs are both taken as variables to analyze the received probability of UAVs. In terms of raising the received probability, the transmitting power of ADS-B and the density of UAVs are contrarious, which can be set appropriately according to the requirements of the surveillance performance. In short, this work contributes to the appropriate deployment of ADS-B equipment on the UAVs, which helps improve the airspace safety and enhances the air traffic flow management. 1 IEEEtran ref1H. Xu, L. Wang, W. Han, Y. Yang, J. Li, Y. Lu and J. Li, "A Survey on UAV Applications in Smart City Management: Challenges, Advances, and Opportunities," IEEE J-STARS, vol. 
16, pp. 8982-9010, 2023. ref2L. Gupta, R. Jain and G. Vaszkun, "Survey of Important Issues in UAV Communication Networks," IEEE Commun. Surveys Tuts., vol. 18, no. 2, pp. 1123-1152, Second quarter 2016. ref3Y. Zhu, Z. Jia, Q. Wu, C. Dong, Z. Zhang, H. Hu and Q.Cai, "UAV Trajectory Tracking via RNN-enhanced IMM-KF with ADS-B Data," in IEEE Wireless Commun. Networking Conf., Dubai, United Arab Emirates, Apr. 2024. ref4M. Strohmeier, V. Lenders and I. Martinovic, "On the Security of the Automatic Dependent Surveillance-Broadcast Protocol," IEEE Commun. Surveys Tuts., vol. 17, no. 2, pp. 1066-1087, Second quarter 2015. ref5M. Strohmeier, M. Schafer, V. Lenders and I. Martinovic, "Realities and challenges of nextgen air traffic management: the case of ADS-B," IEEE Commun. Mag., vol. 52, pp. 111-118, May 2014. ref6X. Deng, F. Wang and G. Yang, "A Survey on Airborne ADS-B Technology and Its Development Trend," Advances in Aeronautical Science and Engineering, vol. 12, no. 1, pp. 121-128, Feb. 2021. ref7P. Jonas, M. Jancik, S. Holoda and J. Bodart, "Impact of SUAS equipped with ADS-B on 1090MHz environment," in New Trends in Civil Aviation (NTCA), Prague, Czech Republic, Nov. 2020. ref8H. Liu, S. Wu, D. Qin and D. Li, "Performance analysis of surveillance capacity of satellite-based ADS-B receiver," in Acta Aeronautica Et Astronatica Sinica, vol. 39, no. 5, pp. 8-15, May 2018. ref9J. G. Andrews, F. Baccelli and R. K. Ganti, "A Tractable Approach to Coverage and Rate in Cellular Networks," IEEE Trans. Commun., vol. 59, no. 11, pp. 3122-3134, Nov. 2011. ref10H. -S. Jo, Y. J. Sang, P. Xia and J. G. Andrews, "Heterogeneous Cellular Networks with Flexible Cell Association: A Comprehensive Downlink SINR Analysis," IEEE Trans. Wireless Commun., vol. 11, no. 10, pp. 3484-3495, Oct. 2012. ref11L. Zhou, Z. Yang, S. Zhou and W. Zhang, "Coverage Probability Analysis of UAV Cellular Networks in Urban Environments," in IEEE ICC Workshops, Kansas City, MO, May 2018. ref12Q. Zhang, J. Chen, L. Ji, Z. Feng, and Z. Chen, "Response delay optimization in mobile edge computing enabled UAV swarm," IEEE Trans. Veh. Technol., vol. 69, no. 99, pp. 3280-3295, Mar. 2020. ref13X. Liu, "Closed-Form Coverage Probability in Cellular Networks With Poisson Point Process," IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 8206-8209, Aug. 2019. ref14Y. Guo, X. Jia, S. Cao and Z. Hao, "Analysis of Downlink Coverage and Capacity for 3D Mobile UAV Networks," in 7th ISMII, Zhuhai, China, Jan. 2021. ref15H. E. Sawy, A. Sultan-Salem, M. Alouini and M. Z. Win, "Modeling and Analysis of Cellular Networks Using Stochastic Geometry: A Tutorial," IEEE Commun. Surveys Tuts., vol. 19, no. 1, pp. 167-203, First quarter 2017. ref16Y. Bai, X. Zhang and L. Zhao, "Co-channel CCK transmission overlapped with DME in aeronautical communication," in IEEE ChinaSIP, Xi'an, China, Jul. 2014. ref17RTCA, "Minimum Operational Performance Standards (MOPS) for 1090 MHz Extended Squitter Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Information Services-Broadcast (TIS-B)," [Online]. Available: https://www.rtca.org/standards/
http://arxiv.org/abs/2406.08678v1
20240612223930
Harnessing Plasmonic Interference for Nanoscale Ultrafast Electron Sources
[ "Alimohammed Kachwala", "Mansoure Moeini Rizi", "Christopher M Pierce", "Daniele Filippetto", "Jared Maxson", "Siddharth Karkare" ]
physics.acc-ph
[ "physics.acc-ph", "physics.optics" ]
AIP/123-QED Authors to whom correspondance should be addressed: akachwal@asu.edu Department of Physics, Arizona State University, Tempe, AZ 85287, USA Department of Physics,University of Chicago, Chicago, IL 60637, USA Advanced Light Source Accelerator Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Department of Physics,Cornell University, Ithaca, NY 14850, USA karkare@asu.edu Department of Physics, Arizona State University, Tempe, AZ 85287, USA § ABSTRACT In this paper we demonstrate the use of plasmonic focusing in conjunction with non-linear photoemisison to develop geometrically flat nanoscale electron sources with less than 40 pm-rad root mean squared (rms) normalized transverse emittance. Circularly polarized light is incident on a gold Archimedean spiral structure to generate surface-plasmon-polaritons which interfere coherently at the center resulting in a 50 nm rms emisison area. Such a nanostructured flat surface enables simultaneous spatio-temporal confinement of emitted electrons at the nanometer and femtosecond level and can be used as an advanced electron source for high-repetition-rate ultrafast electron diffraction and microscopy experiments as well as next-generation of miniaturized particle accelerators. Harnessing Plasmonic Interference for Nanoscale Ultrafast Electron Sources Siddharth Karkare June 17, 2024 ========================================================================== High repetition rate (>100 kHz) sub-picosecond pulsed electron beams are critical to the studying of the ultrafast structural dynamics of atomic lattices as well as molecular species through techniques like stroboscopic ultrafast electron diffraction and microscopy (UED/M) <cit.>. Even though field emission tips can generate brighter electron beams resulting in sub-angstrom scale spatial resolutions in electron microscopes, they cannot be switched at sub-microsecond timescales, making femtosecond-laser triggered photoemission of electrons a preferred way of generating such sub-picosecond scale electron bunches <cit.>. For UED/M applications, the transverse coherence length, L_c = _cσ_x,s/ϵ_n,x determines the the spatial as well as momentum resolution of the instrument <cit.>. Here, _c is the reduced Compton wavelength of electron, σ_x,s is the root mean square (rms) size of the electron beam at the sample and ϵ_n,x is the normalized transverse emittance in the one of the two transverse directions (x). Thus, to enhance the resolution of UED/M apparatus, it is imperative to increase the transverse coherence length of the electron bunches. Further, obtaining high resolution diffraction patterns from regions of solids having dimensions at the nanometer scale calls for the need of nano-scale electron emitters with sub-nanometer scale normalized transverse emittance of the electron bunch <cit.>. The normalized transverse emittance of the electron bunch in the one of the two transverse directions (x) is expressed in terms of the following equation: ϵ_n,x = √(⟨ x^2 ⟩⟨ p_x^2 ⟩ - ⟨ xp_x ⟩^2)/m_ec , where √(⟨ x^2⟩)≡σ_x is the rms electron spot size and √(⟨ p_x^2⟩)≡σ_p_x is the rms electron momentum spread in the x-direction. ⟨ xp_x ⟩ is the correlation term between the location of emission and the transverse momentum, m_e is the mass of an electron and c is the speed of light <cit.>. 
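The expression above is straightforward to evaluate for a sampled bunch, and doing so makes the role of the correlation term explicit. The short sketch below uses purely hypothetical numbers (a 1 μm rms spot with a 100 eV/c rms momentum spread), not data from this experiment: for a beam whose only x–p_x coupling is linear, the full emittance is unchanged by the coupling, whereas dropping ⟨ xp_x⟩ overestimates it, a distinction that matters later when Coulomb-induced correlations are discussed.

```python
import numpy as np

M_E_C = 510998.95  # m_e c expressed in eV/c (numerically equal to m_e c^2 in eV)

def normalized_emittance(x, px):
    """Normalized rms emittance from sampled x (m) and p_x (eV/c), in m*rad."""
    x, px = x - x.mean(), px - px.mean()
    return np.sqrt(np.mean(x**2) * np.mean(px**2) - np.mean(x * px)**2) / M_E_C

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(0.0, 1e-6, n)           # hypothetical 1 um rms emission spot
px = rng.normal(0.0, 100.0, n)         # hypothetical 100 eV/c rms momentum spread
px_corr = px + 50.0 * (x / 1e-6)       # add a purely linear x-p_x correlation

eps_full = normalized_emittance(x, px_corr)                  # ~196 pm-rad (= sigma_x*sigma_px/(m_e c))
eps_no_cross = np.sqrt(np.var(x) * np.var(px_corr)) / M_E_C  # ~219 pm-rad if <x p_x> is dropped
print(f"full: {eps_full*1e12:.0f} pm-rad, without <x p_x>: {eps_no_cross*1e12:.0f} pm-rad")
```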
In addition to the smallest possible emittance UED/M instruments require a high enough current (or number of electrons per bunch) to achieve a good signal to noise ratio and collect data in a reasonable amount of time. Often pinholes can be used to collimate the electron beam to reduce the emittance at the cost of the current <cit.>. The emittance and current (or electrons/bunch) can be combined into one figure of merit, the 4D-Brightness given by: B_4D = Q/ϵ_n,rms^2 , where Q denotes the total bunch charge and ϵ_n,rms is the geometric mean value of the emittance along the x-and y-directions <cit.>. Another important factor that often determines the temporal resolution of UED/M apparatus is the rms length of the electron bunches (σ_t) <cit.>. To achieve the best temporal resolutions and mitigate the effects of electron-electron repulsion in a bunch, many UED/M setups rely on photoemisison from flat, large area (several mm scale) cathodes placed in an accelerating electric field in conjunction with radio frequency (RF) bunching cavities <cit.>. Such large area flat cathodes are also required for UED/M instruments based on RF guns used to obtain mega electron volt (MeV) scale energy electron bunches for reduced jitter owing to their relativistic speeds and larger signal in higher-order diffraction peaks due to the shorter electron wavelength <cit.>. Electron beams from such flat cathodes have been collimated using pinholes <cit.> to result in an emittance of 120 pm-rad with a current of 100-200 fA (≈1 electron/shot) and rms electron bunch length of <100 fs. Further improvements in emittance or brightness in such setups requires improvement of brightness at the cathode. For high repetition-rate UED/M experiments where only single to few electrons per bunch are enough, emittance can be reduced either by σ_x or σ_p_x or both. The rms momentum spread (σ_p_x) depends on the cathode materials, its surface and the laser fluence <cit.>. The rms momentum spread (σ_p_x) as low as 50 eV/c has been achieved by cryo-cooling of a copper photocathode with an atomically ordered surface and operating it at the photoemission threshold <cit.>. The rms electron spot size (σ_x) is limited by the diffraction limit of light and the ability to focus the laser to a small spot size <cit.>. σ_x as small as 1 μm has been achieved by operating the cathode in the transmission mode geometry and placing the final focusing lens very close ∼1 cm behind the cathode resulting in an rms normalized transverse emittance of about 250 pm-rad <cit.>. At the cathode, the smallest emittance that can be achieved is limited by the Heisenberg's uncertainty principle to ħ/2m_ec = 0.2 pm-rad. In order to approach this quantum limit, we need nanoscale electron emission area. Nanoscale electron emisison areas can be achieved by using nanostructures that focus light plasmonically <cit.>. In such structures, surface-plasmon-polaritons (SPP) excited at the metal-dielectric interface by the incident laser, interfere constructively enhancing the optical field intensity in localized areas on the surface of the metal <cit.>. Plasmonic Archimedean spiral is one such structure suitable for low emittance, ultrafast nanoscale photoemission. Femtosecond SPP pulses are resonantly excited at the groves of the spiral using a circularly polarized pulsed femtosecond laser. 
By selecting opposite helicities for the incident circularly polarized light and the spiral, plasmonic focusing at the spiral center can be optimized, achieving nanoscale confinement of the optical field intensity. For instance, compensating the ASP helicity of L = 1 with circularly polarized light of spin angular momentum S = -1 results in an SPP pulse with zero orbital angular momentum (OAM) J = L + S = 0 at the spiral center <cit.>. At the center of the spiral the electric field is dominated by its out-of-plane component given by a zeroth-order Bessel function E_z(r)∝ J_0(k_sppr), where k_spp = 2π/λ_spp and λ_spp = 783 nm, while the magnetic field is in the azimuthal direction creating a magnetic vortex <cit.>. In this work we use a plasmonic Archimedean spiral photocathode (ASP) to focus light to nanometer scales <cit.> and use non-linear photoemisison to demonstrate an emission spot (σ_x) of ≈50 nm rms resulting in an emittance of less than 40 pm-rad - nearly an order of magnitude smaller compared to the best emittance previously demonstrated from a geometrically flat photocathode <cit.>. The ASP consists of a single groove that completes nine revolutions around the spiral center with the central radius (R_0) of 12.5 μm and the helecity of L = 1. Femtosecond SPP pulses were resonantly excited at the groves of the ASP using circularly polarized (S = -1) femtosecond laser pulses with central excitation energy of 1.55 eV (λ = 800 nm). The SPPs propagate to the center of the ASP where they interfere constructively, resulting in the nanoscale electron emission area. The isolated and azimuthally symmetric nano-focus at the center of the spiral has a theoretical full width at half maximum (FWHM) which is smaller than half the SPP wavelength at the metal-vacuum interface <cit.>. Figure <ref> (a) shows the intensity enhancement due to constructive interference of the SPP's following the zeroth order Bessel function at the center of ASP calculated using finite difference time domain (FDTD) simulation using a commercial software suite (Lumerical) <cit.> (see Supplementary Information). The spatial extent of the SPP focused intensity [I_spp (r)] has full width half maximum (FWHM) of ≈260 nm. The 5^th order non-linearity in the electron emission process is proportional to I_spp^5(r) <cit.>. This further shrinks the electron emission spot to ≈120 nm FWHM or σ_x≈50 nm as shown in Fig. <ref> (b). The inset shows the temporal response of the ASP (σ_t ≈30 fs) assuming the 5^th order non-linearity in the electron emission process. The simulated ASP geometry was fabricated by electron-beam lithography (EBL) (see Supplementary Information) and was transferred after a UHV bake-out at 120 °C for 1 day into a commercially available photoemission electron microscope (PEEM) with a 4° angle of incidence <cit.>. Initially, the ASP emitted only 0.001 electrons/shot with ≈2.5 kW of peak laser power from a ≈150 fs, 500 kHz, 800 nm pulsed laser focused down to ≈50 μm FWHM on the ASP. However, the electron's yield increased by several orders of magnitude in less than 5 minutes with peak laser power in the range of 0.7-2.5 kW incident on the ASP. After this initial `activation' step, the enhanced electron emission stayed without any degradation for several days until the ASP was removed from the PEEM UHV environment of 10^-10 torr into air. Upon re-insertion into the UHV PEEM after the UHV bake, the ASP required reactivation to get the previously enhanced emission. 
This indicates that the above laser activationprocess leads to cleaning of adsorbates from the atmosphere that were settled on the Au emission surface. After activation, the spatial distribution of non-linear electron emission spot size was measured as shown in Fig. <ref> (a) at the peak pulse power of ≈1.8 kW. As we can see, the rms electron emission spot size is ≈50 nm suggesting 5^th order non-linearity in the photoemission process. The 5^th order non-linearity is further corroborated by measuring electrons/shot as a function of peak laser pulse power plotted on a double logarithmic scale as shown in Fig. <ref> (b). Considering the work function (ϕ) of Au to be ≈5.4 eV, at least four quanta of SPP, each with energy ħω = 1.58 eV are required to overcome the the work function making 4^th order as the lowest possible order of photoemission <cit.>. However, for the case of Au, the large density of d-band initial states about 2 eV below the Fermi energy may result in enhanced contribution and thereby lead to enhanced non-linearities in the photoemission process. Inspection of the Au band structure indicates a sharp increase of the joint density of states for total transition energies above 6.4 eV <cit.>. At least five quanta of SPP are required to overcome the total transition energies ≈6.4 eV. Such 5^th order non-linearity in the electron emission process has also been observed previously from Au nano-tips in above-threshold photoemission regime <cit.>. Figure <ref> (c) shows evolution of rms electron emission spot size as a function of peak laser pulse power. The rms electron emission spot size is ≈50 nm over a wide range of peak pulse power. Figure <ref> (d) shows evolution of the experimentally determined rms momentum spread of the emitted electrons as a function of peak laser pulse power. The rms momentum spread of the emitted electrons is ≈500 eV/c over a wide range of peak pulse power. An average of 0.1 electron/shot are emitted up to a peak pulse power of ≈2.5 kW. According to Poisson statistics, this implies <1% probability of 2 electrons being emitted per shot. However, beyond 2.5 kW, 0.5-1 electron/shot is emitted on average, resulting in an 8-20% probability of emission of 2 electrons/shot. Coulomb interaction between electrons in pulses with 2 or more electrons leads to increased electron emission spot size and rms momentum spread beyond the peak pulse power of ≈2.5 kW, as shown in Fig. <ref> (c) and (d). Ignoring space-phase correlations, i.e. taking ⟨ xp_x ⟩ = 0, we calculate the emittance from equation <ref>. This is plotted as a function of peak laser pulse power in Fig. <ref> (a). Beyond a peak pulse power of ≈2.5 kW, the increase in emittance is attributed to the Coulomb interaction between the emitted electrons. Below ≈2 kW with less than 0.1 electron/shot the emittance is nearly constant at ≈50 pm-rad, with the smallest emittance of 40 pm-rad measured at ≈0.001 electron/shot. These are the smallest emittances achieved from a geometrically flat photocathode. At 0.5-1 electrons/shot, the normalized transverse emittance measured was in the range of 70-200 pm rad respectively. It was not possible to reliably measure the σ_x and σ_p_x beyond 3 kW due to increased Coulomb interactions. Hence we used a cubic fit to extrapolate the emittance beyond these values as shown by the red curve in Fig. <ref> (a). 
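The quoted numbers are mutually consistent, as the following back-of-envelope check illustrates; it approximates the central lobe of the focus as Gaussian, assumes an uncorrelated beam, and is a rough consistency check rather than part of the analysis:

```python
import numpy as np

M_E_C = 510998.95               # m_e c in eV/c

fwhm_intensity = 260e-9         # FWHM of the SPP intensity focus from the FDTD simulation (m)

# 5th-order photoemission: the emission rate scales as I^5, which narrows a
# Gaussian-like central lobe by a factor of sqrt(5).
fwhm_emission = fwhm_intensity / np.sqrt(5)                 # ~116 nm
sigma_x = fwhm_emission / (2 * np.sqrt(2 * np.log(2)))      # ~49 nm rms

sigma_p = 500.0                 # measured rms momentum spread, eV/c
eps_n = sigma_x * sigma_p / M_E_C                           # uncorrelated-beam emittance
print(f"FWHM ~ {fwhm_emission*1e9:.0f} nm, sigma_x ~ {sigma_x*1e9:.0f} nm, "
      f"eps_n ~ {eps_n*1e12:.0f} pm-rad")
```

The three outputs (≈116 nm, ≈49 nm, ≈48 pm-rad) sit close to the ≈120 nm FWHM, ≈50 nm rms spot size and ≈50 pm-rad emittance quoted above.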
Schottky emitters triggered with an ultraviolet pulsed laser (λ = 400 nm and pulse energy ≈10 nJ) have demonstrated a normalized transverse emittance of ≈13.5 pm rad at ≈2 electrons/shot and σ_t∼128 fs after collimation in the TEM gun and column <cit.>. Such tip emitters have been used in electron microscopes, however, face challenges in RF guns due to low lifetime under high fields and emission of dark electrons via field emission asynchronous with the laser pulse that adds background noise to experiments. The ASP is geometrically flat. Hence these issues related to operation in large electric field RF guns are significantly mitigated. Thus the ASP can enable the use of picometer-scale emittance in RF gun applications like MeV scale UED/UEM. To characterize ASP further, we compute the 4D-Brightness of the electron bunch using equation 2 and plot it against the electrons per shot as shown in Fig. <ref> (b). At the cathode, the correlation term is ⟨ xp_x ⟩ = 0. For bunch charges with <0.1 electron/shot the ⟨ x^2⟩⟨ p_x^2⟩ is nearly constant (less than 20% increase) and the brightness increases proportionately with electrons per shot as expected. However, beyond that both x and p_x increases due to Coulomb interactions and ⟨ x^2⟩⟨ p_x^2⟩ blows up. In this region the brightness calculated by assuming ⟨ xp_x⟩ = 0 reaches a maximum of ≈85 electrons/(nm^2Sr) and then reduces due to the increase in ⟨ x^2⟩⟨ p_x^2⟩. This is shown by the blue curve in Fig. <ref> (b). The red curve corresponds to the brightness obtained from the extrapolated values of ⟨ x^2⟩⟨ p_x^2⟩ in Fig. <ref> (a). The Coulomb increase in ⟨ x^2⟩ and ⟨ p_x^2⟩ also results in increased x and p_x correlations causing ⟨ xp_x ⟩ to be greater than 0. The exact calculations or measurements of the correlations in x and p_x are complex and beyond the scope of this paper. If we assume a fully linearly correlated growth of x and p_x, the emittance of 40 pm-rad corresponding to the zero-coulomb-interactions-case (very low charge per bunch) could be recovered even for larger bunch charges using aberration free electron lenses. This gives us an upper limit to the brightness that can be obtained from ASP as shown by the green curve. The purple curve shows the brightness with the electrons/shot extrapolated in the range beyond 3 kW laser peak power. The gray area shows the region in which the brightness from ASPs could lie depending on the nature of the correlations developed in x and p_x due to the Coulomb interactions. Fig. <ref> (b) also shows the black cross (×) symbol indicating the 4D-Brightness of the electron bunch obtained in the HiRES UED beamline from a flat cathode after using collimating apertures <cit.>. It should be noted that this point indicates the brightness at the sample whereas the performance of the ASP is at the source. To-date all electron sources using flat photocathodes have been operated with large μm-mm sized emission areas putting them in the regime where electron-electron Coulomb interactions can be modeled by assuming the electron bunch to be a continuous distribution of charge <cit.>. In such cases, the accelerating electric field (ℰ) at the cathode limits the maximum charge density (σ_max) in pancake regime to σ_max = ϵ_0 ℰ <cit.>. Here, ϵ_0 is vacuum permittivity. With an electric field of 6 MV/m as obtained in the PEEM with the small rms spot size of ≈50 nm, we get the maximum charge that can be extracted from ASPs to be ≈2 electrons/shot within this space charge assumption. 
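For a sense of scale, the pancake-regime bound σ_max = ϵ_0 ℰ can be evaluated directly. The sketch below assumes an effective emission area of πσ_x^2 with σ_x≈50 nm, which is our guess rather than a convention stated in the text, so it agrees with the quoted ≈2 electrons/shot at 6 MV/m (and the ≈6 electrons/shot at 20 MV/m discussed below) only up to that geometric factor.

```python
import numpy as np

EPS_0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19   # elementary charge, C

def max_electrons_per_shot(E_field, sigma_x=50e-9):
    """Pancake-regime limit sigma_max = eps_0 * E integrated over an assumed
    effective emission area of pi * sigma_x**2."""
    area = np.pi * sigma_x**2
    return EPS_0 * E_field * area / E_CHARGE

print(f"{max_electrons_per_shot(6e6):.1f} electrons/shot at 6 MV/m")    # ~2.6
print(f"{max_electrons_per_shot(20e6):.1f} electrons/shot at 20 MV/m")  # ~8.7
```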
However, the space charge assumption breaks down for a few electrons and it is imperative to consider individual Coulomb interactions between each pair of electrons to a properly account for the Coulomb effects <cit.>. As a result, extraction of more than two electrons may be possible even at these small electric fields. Furthermore extraction of higher charges per bunch could be possible with larger electric fields in the range of 20-100 MV/m that are typically found in RF guns <cit.>. For instance, operating ASP in the HiRES UED beamline with the peak RF field of ≈20 MV/m, we can extract ≈6 electrons/shot within the space charge assumption discussed in the previous paragraph. Taking ϵ_n,x = 40 pm-rad with 6 electrons/shot implies a maximum 4D-Brightness of ≈3700 electrons/(nm^2Sr) which is ∼40 times better than the current maximum 4D-Brightness achieved in the HiRES UED beamline at the sample plane <cit.>. It is also worth noting that the maximum laser fluence used in this experiment, after accounting for a factor of ≈300 in intensity enhancement is 5 mJ/cm^2. This is 20 times lower than the damage threshold of gold which is typically in the range of 100-1000 mJ/cm^2 <cit.>. Even with 3 times higher laser power, due to the 5^th order emission process one can expect 200 times more charge per shot and potentially an increased brightness. The actual increase in charge and the brightness will then be determined by the electron-electron interaction dynamics. Harnessing plasmonic focusing for the development of low emittance ultrafast ASP is the first step to develop next-generation state of the art ultra-bright electron sources for next-generation accelerator applications. ASP can be designed for an operating wavelength such that it can further reduce σ_p_x and thereby ϵ_n,x of the emitted electron bunch. In addition, ASP can be combined with high quantum efficiency semiconductor thin films <cit.> to further enhance the 4D-brightness of the emitted electron bunch. Beyond the UED/M applications, ASPs can also be used as the electron source for dielectric laser accelerators for the development of next generation compact accelerators which require <1 nm-rad normalized transverse emittance <cit.>. Additionally, the ASP and other types of plasmonic apertures <cit.> can be arranged in an array to generate transversely shaped electron beams for powering novel coherent x-ray light sources <cit.> as well as the next generation of beam driven dielectric wakefield accelerators <cit.>. In summary, we have fabricated and characterized a plasmonics based, geometrically flat, low emittance and ultrafast source of electron pulses suitable for UED/M experiments. We achieved a record low rms normalized transverse electron emittance of less than 40 pm-rad from a geometrically flat photocathode – nearly an order of magnitude lower than the best the emittance that has been achieved from a geometrically flat photocathodes <cit.>. Such a plasmonics based low emittance electron source operating with IR light can impact a wide range of applications ranging from high repetition rate UED/M to next generation particle accelerators. § ACKNOWLEDGEMENT This work is supported by the NSF Center for Bright Beams under award PHY-1549132 and Department of Energy Office of Science under awards DE-SC0021092, and DE-SC0021213. C.M.P. acknowledges support from the US DOE SCGSR program. J.M was partially supported by U.S Department of Energy, Grant No. DE-SC0020144. 
§ SUPPLEMENTARY INFORMATION The supplementary information consists of: the mathematical equation for designing the tilt-compensated ASP, details of the FDTD simulations, experimental details, and details of the real-space measurement of the ASP using a mercury (Hg) lamp in conjunction with the pulsed femtosecond laser.
http://arxiv.org/abs/2406.09281v1
20240613161938
Computing congruences of finite inverse semigroups
[ "Luna Elliott", "Alex Levine", "James D. Mitchell" ]
math.GR
[ "math.GR", "cs.DS", "20M18, 20-08, 20M30" ]
§ ABSTRACT In this paper we present an algorithm for computing a congruence on an inverse semigroup from a collection of generating pairs. This algorithm uses a myriad of techniques from computational group theory, automata, and the theory of inverse semigroups. An initial implementation of this algorithm outperforms existing implementations by several orders of magnitude. § INTRODUCTION In this paper we are concerned with the question of computing two-sided congruences of a finite inverse semigroup. The class of inverse semigroups lies somewhere between the classes of groups and semigroups, with more useful structure than semigroups in general, and possibly less structure than groups. A semigroup is a set, usually denoted S, with an associative binary operation, usually indicated by juxtaposing elements of S. An inverse semigroup is a semigroup S such that for every element s∈ S there exists a unique s'∈ S with ss's=s and s'ss'=s'. The element s' is usually denoted s^-1, a choice which is at least partially justified by the fact that if S is a group and s∈ S, then s^-1 is just the usual group theoretic inverse of s. On the other hand, if S is not a group and s∈ S, then neither ss^-1 nor s^-1s is necessarily equal to the identity of S, not least because S need not have an identity. Two-sided congruences are to semigroups what normal subgroups are to groups. However, a two-sided congruence on a semigroup S is not a subsemigroup of S but rather an equivalence relation ρ⊆ S× S with the property that if (x, y)∈ρ and s∈ S, then (xs, ys), (sx, sy)∈ρ. Although there is a definition of one-sided congruences also, akin to the notion of subgroups of groups, we will be solely concerned with two-sided congruences, and so we will drop the “two-sided” and henceforth refer to “congruences” to exclusively mean “two-sided congruences”. Equivalently, ρ is a congruence on S if (xs, yt)∈ρ whenever (x, y), (s, t)∈ρ, making ρ a subsemigroup of S× S rather than S. Although congruences on semigroups and normal subgroups of groups are analogous notions, the definition for semigroups is a special case of the notion of congruences in universal algebra; see, for example, <cit.>. For a congruence ρ, it will be convenient for us to write x=_ρ y instead of (x,y)∈ρ. Like the normal subgroups of a group, the congruences of a semigroup form a lattice with the meet ρ∧σ of congruences ρ and σ being ρ∩σ, and join ρ∨σ being the least congruence containing ρ and σ. If G is a group and A⊆ G, then algorithms for determining the least normal subgroup of G containing A are one of the core components of computational group theory. Following the nomenclature of <cit.>, we refer to such algorithms as normal closure algorithms. For example, normal closure algorithms for permutation groups are considered in <cit.>; for groups in general, in <cit.>; or for computing all normal subgroups in <cit.>. Congruences of inverse semigroups have also been studied extensively in the literature.
For example, the lattices of congruences of various semigroups, including many inverse semigroups, have been completely described, for example in <cit.>; and from the perspective of computation in <cit.>. With the single exception of <cit.>, the existing algorithms <cit.>, and their implementations in  <cit.>, for computing individual congruences on an inverse semigroup do not use any of the specific structure of inverse semigroups. We will say more about the exception below. The following notions have been, and will be here, indispensable for the study of congruences on inverse semigroups. Let S be an inverse semigroup. We denote the set of idempotents of S by E(S). The kernel of a congruence ρ on an inverse semigroup S is the inverse subsemigroup (ρ) = s∈ Sthere exists e∈ E(S), s=_ρ e≤ S and the trace of ρ is the restriction of ρ to the idempotents of S: (ρ) = ρ∩(E(S)× E(S)). If S is an inverse semigroup and ρ is a congruence on E(S), then ρ is said to be normal in S if s^-1xs =_ρ s^-1ys whenever s∈ S and x=_ρ y. Similarly, if T is a subsemigroup of S, then T is normal in S if s^-1ts∈ T for all s∈ S and all t∈ T. If T is a normal subsemigroup of S and τ is a normal congruence on E(S), then (T, τ) is (rather unimaginatively) called a congruence pair if the following conditions hold: * ae∈ T and e=_τ a^-1a implies that a∈ T; * a∈ T implies that aa^-1 =_τ a^-1a; for all a∈ S and all e∈ E(S). It is well-known that the congruences of an inverse semigroup S are in one-to-one correspondence with the congruence pairs on S; see, for example, <cit.>. The kernel-trace description originates in <cit.>, and is described in almost books about semigroup theory; in addition to <cit.>, see <cit.>, <cit.>, <cit.>. In this paper, we present various mathematical results that can be combined into an algorithm for computing a congruence on a finite inverse semigroup. The aim of these results is to allow the efficient computation of the least congruence R^♯ on an inverse semigroup S from a collection of generating pairs R ⊆ S× S. By “compute” a congruence ρ we mean that we have a representation of the congruence that is amenable to computation (i.e. that is not larger than necessary, and can be computed relatively quickly), and that can be used to answer questions about ρ such as whether or not (x,y)∈ S× S belongs to ρ; what is the number of classes in ρ; and what are the elements of x/ρ? A preliminary implementation of this algorithm in <cit.> and <cit.>, indicates that there is, for some examples, a 1000 times speedup when compared to the existing implementation of <cit.> in <cit.> (which uses the kernel and trace); a 1500 times speed up in comparison to the implementation in <cit.> and <cit.> (for semigroups in general). Although significantly faster than existing implementations, it is worth mentioning that neither the time nor space complexity of the algorithms we present is polynomial in the size of the input. For instance, one key step in the algorithm we present is computing the trace of a congruence on an inverse semigroup S. If S is the symmetric inverse monoid on the set {1, …, n}, then S can be represented using O(n) space. However, |E(S)|=2^n, and computing the idempotents in this case has complexity O(2 ^n). The complexity of the other steps in the algorithm are somewhat harder to describe; but they also depend on |E(S)|. It seems unlikely to the authors that there is a sub-exponential algorithm for computing a congruence on an inverse semigroup. The paper is organized as follows. 
In <ref> we provide some details of the prerequisite notions from semigroup theory that we require. In <ref> we describe data structures for inverse semigroups, and their quotients, that uses the theory of Green's relations, the action of an inverse semigroup on its idempotents by conjugation, and an analogue of Schreier's Lemma. The data structure consists of a generating set X for the inverse semigroup S, a certain automata-like graph Γ_X encoding the action (of the previous sentence) and its strongly connected components, and a finite sequence G_1, …, G_m of groups. For a quotient of S, the data structure consists of the generating set X for S, a quotient of the graph Γ_X, and a sequence of normal subgroups N_1, …, N_n of the groups in the data structure for S. In <ref> we describe how to compute the trace of a congruence using Γ_X and a (guaranteed to terminate) variant of the Todd-Coxeter Algorithm from <cit.>. In <ref>, we show how to obtain relatively small collections of elements Y_i of each group G_i such that the normal closure Y_i is the required normal subgroup N_i. In <ref> we show how to obtain the elements of the kernel of a congruence as a translate of the preimage of a coset of a normal subgroup under a homomorphism of groups. In the final section <ref>, we describe a completely separate algorithm for computing the maximum idempotent separating congruence on an inverse subsemigroup of a finite symmetric inverse monoid. § PRELIMINARIES Let S be an inverse semigroup. We denote the set of idempotents of S by E(S). If s, t ∈ S, then we write s ≤ t if there exists e∈ E(S) such that s = te. The relation ≤ is a partial order on S (see, for example, <cit.>), usually referred to as the natural partial order on S. This definition may appear to be inherently “right-handed”, but it is not, since s ≤ t if and only if there exists f∈ E(S) such that s = ft <cit.>. Similarly, if s ≤ t and u ≤ v, then su ≤ tv <cit.>. We define a word graph Γ = (N, E) over the alphabet A to be a directed graph with nodes N and edges E⊆ N × A × N. Word graphs are just finite state automata without initial or terminal states. If (α, a, β)∈ E is an edge in a word graph Γ = (N, E), then α is the source, a is the label, and β is the target of (α, a, β). A word graph Γ is complete if for every node α and every letter a∈ A there is at least one edge with source α labelled by a. A word graph Γ = (N, E) is finite if the sets of nodes N and edges E are finite. A word graph is deterministic if for every node α∈ N and every a∈ A there is at most one edge with source α and label a. Complete deterministic word graphs are just unary algebras with universe N and operations f_a: N → N defined by (α)f_a = β whenever (α, a, β) is an edge in Γ; see <cit.> for more details. The perspective of unary algebras maybe helpful, for those familiar with this notion, when we define word graph quotients and homomorphisms, these are identical to the notions of quotients and homomorphisms of the associated unary algebras. If α, β∈ N, then an (α, β)-path is a sequence of edges (α_0, a_0, α_1), …, (α_n - 1, a_n - 1, α_n)∈ E where α_0 = α and α_n = β and a_0, …, a_n - 1∈ A. If α, β∈ V and there is an (α, β)-path in Γ, then we say that β is reachable from α. If α is a node in a word graph Γ, then the strongly connected component of α is the set of all nodes β such that β is reachable from α and α is reachable from β. 
If Γ_1= (N_1, E_1) and Γ_2= (N_2, E_2) are word graphs over the same alphabet A, then ϕ:N_1→ N_2 is a homomorphism if (α, a, β)∈ E_1 implies ((α)ϕ, a, (β)ϕ)∈ E_2. If κ is an equivalence relation on the nodes of a word graph Γ = (N, E), then we define the quotient Γ/κ of Γ by κ to be the word graph with nodes α/κα∈ N and edges (α /κ, a, β/κ)(α, a, β) ∈ E. Of course, even if Γ is deterministic, the quotient Γ/κ is not necessarily deterministic. If Γ is deterministic, then Γ/κ is deterministic if and only if κ is a congruence on the unary algebra associated to Γ. The kernel of a congruence ρ on an inverse semigroup S is the inverse subsemigroup (ρ) = s∈ Sthere exists e∈ E(S), s=_ρ e≤ S and the trace of ρ is the restriction of ρ to the idempotents of S: (ρ) = ρ∩(E(S)× E(S)). If ρ is a congruence on an inverse semigroup S, then ρ is said to be normal if s^-1xs =_ρ s^-1ys whenever s∈ S and x=_ρ y. Similarly, if T is a subsemigroup of S, then T is normal if sts^-1∈ T for all s∈ S and all t∈ T. If T is a normal subsemigroup of S and τ is a normal congruence on E(S), then (T, τ) is (rather unimaginatively) called a congruence pair if the following conditions hold: * ae∈ T and e=_τ a^-1a implies that a∈ T; * a∈ T implies that aa^-1 =_τ a^-1a for all a∈ S and all e∈ E(S). It is well-known that the congruences of an inverse semigroup S are in one-to-one correspondence with the congruence pairs on S; see, for example, <cit.>. If S is a semigroup, then we denote by S^1 either: S∪{1_S} with an identity 1_S∉S adjoined; or just S in the case that S already has an identity. The final ingredient that we require in this paper is that of Green's relations. If s, t∈ S, then Green's ℛ-relation is the equivalence relation on S defined by (s, t)∈ if and only if sS ^1 = sxx∈ S^1 = tS^1. Green's Ł-relation is defined analogously; Green's $̋-relation is justŁ∩; and Green's-relations is defined to beŁ∘. IfSis finite, then(s, t)∈if and only ifS^1sS^1 = S^1tS^1. Green's relations are fundamental to the study of semigroups; we refer the reader to any of Howie, Lawson1998, Grillet2017, petrich_book for further details. IfTis a subsemigroup ofS(denotedT≤S), then we may write𝒦^Sand𝒦^Tto distinguish the Green's relations onSandTwhen𝒦∈{Ł, , ,̋ }. Ifs∈S, then we denote the equivalence class of Green's𝒦-relation containingsbyK_sorK_s^Sif we want to indicate the semigroup containing the class. Let S be a finite semigroup and let a, b∈ S be such that (a, b)∈. Then the following are equivalent * (ab, a), (ab, b)∈; * (a, ab)∈ and (ab, b)∈Ł; * there exists an idempotent (e, a)∈Ł and (e, b)∈. We will make repeated use of the following straightforward result also. If S is a finite inverse semigroup, e, f∈ E(S) are such that e≤ f, and (e, f) ∈𝒟^S, then e = f. § A DATA STRUCTURE FOR INVERSE SEMIGROUPS AND THEIR QUOTIENTS In this section, we describe the data structure for inverse semigroups given in <cit.>. If S is an inverse semigroup, then the data structure for S consists of the following: * a generating set X for S; * the word graph Γ_X with nodes E(S) and edges {(e, x, x^-1ex)| e∈ E(S), x∈ X}; * the strongly connected components of Γ_X; * a generating set for one group ℋ-class per strongly connected component of Γ_X. GivenX, the word graphΓ_Xcan be found inO(|E(S)||X|)time and space (assuming that products inScan be found in constant time). The strongly connected components ofΓ_Xcan be found fromΓ_Xusing algorithms from graph theory (such as those of Gabow <cit.> or Tarjan <cit.>). 
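To make the data structure concrete, the sketch below builds the word graph Γ_X for an inverse semigroup of partial permutations and computes its strongly connected components. It is written in Python for readability rather than performance and is not the implementation referred to above; it labels edges by X∪ X^-1, which is an assumption about how the generating set is closed under inverses, and it finds strongly connected components by a naive reachability test instead of Gabow's or Tarjan's algorithm. Applied to the three generators of I_4 used in the worked example later in the paper, it recovers the 16 idempotents and the 5 strongly connected components, one for each rank.

```python
from itertools import chain

# Partial permutations on a finite set as dicts point -> image,
# composed left to right (consistent with the right actions used above).
def compose(f, g):
    return {p: g[f[p]] for p in f if f[p] in g}

def inverse(f):
    return {v: k for k, v in f.items()}

def idem(f):          # f f^-1: the partial identity on the domain of f
    return {p: p for p in f}

def freeze(f):        # hashable form of a partial permutation
    return tuple(sorted(f.items()))

def word_graph(gens):
    """Nodes: idempotents reachable by conjugation from the generator domains;
    edge i from e goes to a^-1 e a, where a is the i-th letter of X u X^-1."""
    letters = list(chain(gens, (inverse(x) for x in gens)))
    todo = [idem(a) for a in letters]
    nodes, edges = set(), {}
    while todo:
        e = todo.pop()
        key = freeze(e)
        if key in nodes:
            continue
        nodes.add(key)
        edges[key] = {}
        for i, a in enumerate(letters):
            f = compose(compose(inverse(a), e), a)   # the product a^-1 e a
            edges[key][i] = freeze(f)
            todo.append(f)
    return nodes, edges

def reachable(v, edges):
    seen, stack = {v}, [v]
    while stack:
        for w in edges[stack.pop()].values():
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def sccs(edges):
    """Strongly connected components via mutual reachability (quadratic but simple)."""
    reach = {v: reachable(v, edges) for v in edges}
    components, done = [], set()
    for v in edges:
        if v not in done:
            comp = {w for w in reach[v] if v in reach[w]}
            components.append(comp)
            done |= comp
    return components

# The generators of I_4 from the worked example: (1 2 3 4), (1 2)(3)(4), [4 3 2 1].
x1 = {1: 2, 2: 3, 3: 4, 4: 1}
x2 = {1: 2, 2: 1, 3: 3, 4: 4}
x3 = {4: 3, 3: 2, 2: 1}
nodes, edges = word_graph([x1, x2, x3])
print(len(nodes), [len(c) for c in sorted(sccs(edges), key=len)])
# 16 idempotents; component sizes 1, 1, 4, 4, 6 (the empty map, the identity,
# and the rank-1, rank-3 and rank-2 partial identities).
```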
Given the strongly connected components ofΓ_X, the groups from <ref> can be determined using the analogue of Schreier's Lemma given in <cit.>. This data structure can be used to answer many of the fundamental questions aboutSthat arise in a computational setting, such as membership testing inS, determining the Green's structure, and the size ofS; see <cit.> for more details. IfSis an inverse semigroupS,R⊆S ×S, andR ^♯ = ρ, we will show how to compute a data structure for the quotientS/ρfrom the data structure forS. This data structure consists of: * the generating set X for S; * the quotient word graph Γ_X/(ρ) with nodes E(S)/(ρ) and edges {(e/(ρ), x, (x^-1ex)/(ρ))| e∈ E(S), x∈ X}; * the strongly connected components of Γ_X/ (ρ); * a generating sets for one group ℋ-class per strongly connected component of Γ_X/(ρ). Clearly for <ref> we must compute(ρ); and given <ref> we can compute the strongly connected components as we did forSitself. Without a representation ofρ(beyondR) we have no means of representingX/ρ, and hence we cannot determine the generating sets for the groupℋ-classes required in <ref>. We show how to compute(ρ)fromRin <ref>; and show how to compute the required groupℋ-classes in <ref>. The quotient data structure is sufficient for representing the inverse semigroupS/ρ, and can be used to compute various aspects ofρ, such as the number of classes. But it does not suffice for other purposes, such as checking membership inρ, or computing the elements ofs/ρ. For the latter, we require a means of computing the kernel(ρ)ofρfrom the data structure. We do this in <ref>. Throughout this paper we will use the notation from this section forS, the congruenceρ, and the associated data structures. § COMPUTING THE TRACE In this section we show how to compute the trace of a congruence on the inverse semigroupSfrom the set of generating pairsR⊆S ×S. If S is an inverse semigroup and ρ = R ^ ♯ is a congruence of S, then the trace (ρ) of ρ is the least normal congruence of E(S) in S containing {(aea^-1, beb^-1)| e∈ E(S), (a, b)∈ R}. Let N denote the set of pairs in the statement, and let ν be the least normal congruence on E(S) containing N. We must show that ν=(ρ). If (a, b)∈ R and e∈ E(S) are arbitrary, then certainly a =_ρ b and so ae =_ρ be and a^-1 =_ρ b^-1. Hence aea^-1 =_ρ beb^-1 and so aea^-1 =_(ρ) beb ^-1. Therefore ν⊆(ρ). For the converse containment, suppose that e=_(ρ) f. Then e=_ρ f, and hence e=_R ^ ♯ f. So there exist s_0 = e, s_1, …, s_n = f where s_i = p_iu_iq_i and s_i +1 = p_iv_iq_i for some p_i, q_i ∈ S^1 and (u_i, v_i)∈ R for all i. We set e_i = s_is_i^-1 for every i. Then e_0 = e and e_n= f. For every i, e_i = s_is_i ^-1 = p_iu_iq_iq_i^-1u_i ^-1p_i^-1 and e_i + 1 = s_i + 1 s_i + 1 ^-1 = p_iv_iq_iq_i^-1v_i ^-1p_i^-1. Since q_iq_i ^-1∈ E(S) and (u_i, v_i)∈ R, it follows that (u_iq_iq_i^-1u_i ^-1, v_iq_iq_i^-1v_i ^-1) ∈ N⊆ν by definition. Hence, since ν is normal, e_i= p_iu_iq_iq_i^-1u_i ^-1p_i^-1=_ν p_iv_iq_iq_i^-1v_i ^-1p_i^-1=e_i+1 for all i. Thus e=e_0=_ν e_n= f, as required. For the remainder of this section we require S to be a monoid, by adjoining an identity1_Sif necessary. Ifσis any equivalence relation onE(S), then we defineΓ_X/σto be the word graph with nodesE(S)/σand edges(e/σ, x, (x^-1ex)/σ)for alle∈E(S)and allx∈X. It is routine to verify thatσis a normal congruence onE(S)with respect toSif and only ifΓ_X/σis deterministic. In this case,(σ)is completely determined byΓ_X/σ. Ife=x_1…x_n∈E(S)wherex_i∈Xlabels a path from1_S/σtof/σinΓ_X/σ, thene=_(σ) f. 
Conversely, ife=_(σ) fande=x_1⋯x_nandf = y_1⋯y_mwherex_i, y_j∈X, thenx_1⋯x_nandy_1⋯y_mboth label(1_S, e/σ)-paths inΓ_X/σ. We have established the following result. There is a one-to-one correspondence between the normal congruences of E(S) and the deterministic quotients of Γ_X. The trace(ρ)of a congruenceρ= R ^♯on an inverse semigroupScan therefore be computed by: * computing the set from <ref>; * find the greatest quotient of Γ_X containing R using the variant of the Todd-Coxeter Algorithm described in Section 5 of <cit.>. Next we consider an example to illustrate the steps <ref> and <ref>. Each element of a finite symmetric inverse monoid is expressible as a product of chains and disjoint cycles. So we write (i_1, …, i_n) for a cycle and [i_1, …, i_n] for a chain. When points are fixed we write (i) to denote that i is fixed as omitted points are not in the domain of the described partial permutation. In this example we show how to compute the trace of the least congruence ρ on the symmetric inverse monoid I_4 (consisting of the partial permutations on the set {1, 2, 3, 4}) containing the pair: (a, b) := ( (1)(2)(3), (1 2 3) ). We use the following generating set for I_4: X := { x_1 := (1 2 3 4), x_2 := (1 2)(3)(4), x_3 := [4 3 2 1] }. If N is the set of generating pairs for (ρ) from <ref>, then a maximal subset M of N such that M∩ M ^-1= ∅ is: M := { ((1), (2)), ((1),(3)), ((2), (3)), ((1)(2), (2)(3)), ((1)(2), (1)(3)), ((2)(3), (1)(3))}. Obviously, M also generates (ρ). A diagram of the word graph Γ_X in this example can be seen in <ref>. A diagram of the greatest quotient of Γ_X containing (a, b) is shown in <ref>. § COMPUTING THE GROUP ℋ-CLASSES OF THE QUOTIENT In this section we show how to compute the groupℋ-class component <ref> of the quotient data structure. We will repeatedly make use of the following simple lemma, which we record for the sake of completeness. If S is finite, e, f∈ E(S), x,y∈ S, and fxey f, then fxey=fxy. We may assume without loss of generality that S is an inverse subsemigroup of the symmetric inverse monoid I_X for some finite set X. Since f fxey, it follows that (f) = (fxey) ≤(fxy)≤(f), yielding equality throughout. In particular, (fxey) = (fxy), and since e is an idempotent and S is finite, it follows that fxey = fxy, as required. Ifs/ρis an idempotent inS/ρ, then by Lallement's Lemma there existse∈E(S)such thate/ρ= s/ρ, and soE(S) ∩s/ρ= e/(ρ). We definef∈E(S)to be the meet ofe/(ρ), that is, f = ⋀ e/(ρ), and we denote the groupℋ-class offinSbyG. The following lemma describes the group -classes in the quotient S / ρ in terms of the group -classes in S and a normal subgroup. If N = H_f^S∩(f/ρ), then the following hold: * N is a normal subgroup of G = H_f^S; * f/ρ is an inverse subsemigroup of S and N is the minimum non-empty ideal of f/ρ; * the group ℋ-class H_e/ρ^S/ρ is isomorphic to G/N. (a) Let g, h ∈ N. Since f is an idempotent h^-1∈ f / ρ and so gh^-1∈ f^2 / ρ = f / ρ. Thus N is a subgroup of G = H_f^S. If n ∈ N and g ∈ G, then g^-1 n g =_ρ g^-1 f g = f and so g^-1 n g ∈ f / ρ and N is normal. (b) We first show that f / ρ is an inverse subsemigroup of S. Let s, t ∈ f / ρ. Then st =_ρ f^2 = f and s^-1 =_ρ f^-1 = f and so st, s^-1∈ f / ρ. Thus f / ρ is an inverse subsemigroup of S. Next we show that N is the minimum non-empty ideal of f / ρ. Clearly, since f ∈ N, N is non-empty. Let n ∈ N and s ∈ f / ρ. Then sn ∈ N if and only if (sn, f) ∈ℋ^S if and only if sn(sn)^-1 = (sn)^-1sn = f. 
It follows that sn(sn)^-1, (sn)^-1sn∈ f/ρ, so sn(sn)^-1≥ f and (sn)^-1sn≥ f. On the other hand, nf=f so (sn)^-1sn=(sn)^-1snf≤ f. Thus (sn)^-1sn=f. As sn(sn)^-1≥ (sn)^-1sn and (sn(sn)^-1, (sn)^-1sn) ∈𝒟, it follows from <ref> that (sn)^-1sn=f also. We have shown that N is a left ideal, by symmetry it is a right ideal also. Every non-empty ideal of f/ρ contains fs for some s∈ f/ρ. Thus f = fs(fs)^-1∈ I since this is the unique idempotent in N. Therefore the ideal I contains all of N and so N is the minimum ideal. (c) We define ψ G / N → H^S / ρ_e / ρ by Ng ↦ g / ρ. To show that ψ is well-defined, we must show that ψ maps into H^S / ρ_e / ρ and that ψ does not depend on the choice of coset representative. Let Ng ∈ G / N. Then (Ng)ψ ((Ng)ψ)^-1 = (g / ρ) · (g^-1/ ρ) = f / ρ = e / ρ and by symmetry ((Ng)ψ)^-1 (Ng)ψ = e / ρ and so (Ng)ψ∈ H_e / ρ^S / ρ. If h ∈ Ng, then h = ng for some n ∈ N and so h = ng =_ρ fg = g. So (Nh)ψ = h / ρ = g /ρ = (Ng)ψ and ψ is well-defined. We will next show that ψ is a homomorphism. Let Ng, Nh ∈ G / N. Then (Ng· Nh)ψ = (Ngh)ψ = gh / ρ = (g / ρ) · (h / ρ) = (Ng) ψ· (Nh) ψ, and ψ is a homomorphism. To show ψ is injective, let Ng, Nh ∈ G / N be such that (Ng)ψ = (Nh)ψ. Then g / ρ = h / ρ and so gh^-1∈ f / ρ. Thus gh^-1∈ N, and it follows that Ng = Nh, as required. It remains to show that ψ is surjective. Let h / ρ∈ H_e / ρ^S / ρ. Then h ∈ G and so h / ρ = (Nh) ψ and ψ is surjective. The next lemma is the key result in this section, permitting us to express N in terms of R and the word graph Γ_X of S, and allowing for the efficient computation of N. If the strongly connected component of f in Γ_X is {e_1 = f, e_2, …, e_r} for some r, and for every i, we choose s_i∈ S to be the label of an (e_1, e_i)-path in Γ_X, then N=H_f^S∩(f/ρ) is the normal closure of {fs_iab^-1s_i^-1| (a,b)∈ R, i∈{1, …, r}}∩ H_f ^S in H_f^S. Let N' denote the normal closure of the set in the statement. To show that N'⊆ N, suppose that i∈{1, …, k} and (a, b)∈ R are such that fs_iab^-1s_i^-1∈ H_f^S. We must show that fs_iab^-1s_i^-1∈ N; that is, fs_iab^-1s_i^-1=_ρ f (this is sufficient because, by <ref><ref>, N is a normal subgroup of G. We begin by showing that f = fs_iaa^-1s_i^-1. Also s_iaa^-1s_i^-1∈ E(S) and so fs_iaa^-1s_i^-1≤ f On the other hand, since fs_iab^-1s_i^-1∈ H_f^S, which is a group, it follows that f = (fs_iab^-1s_i^-1)(fs_iab^-1s_i^-1)^-1 f is the identity of H_f^S = fs_iab^-1s_i^-1s_iba^-1s_i^-1f ≤ fs_iaa^-1s_i^-1f b^-1s_i^-1s_ib∈ E(S) = fs_iaa^-1s_i^-1 idempotents commute in S. It follows that f = fs_iaa^-1s_i^-1, and so, in particular, f=_ρ fs_iaa^-1s_i^-1. Since (a, b)∈ R we have a=_ρ b, it follows that a^-1=_ρ b^-1, and so fs_iab^-1s_i^-1=_ρ fs_iaa^-1s_i^-1. Therefore by the transitivity of ρ, fs_iab^-1s_i^-1=_ρ f, as required. For the converse containment (N⊆ N'), suppose that g∈ N = H_f^S∩ (f/ρ). Since f=_ρ g, there exists an elementary sequence a_0 = f, …, a_i = p_ib_iq_i, a_i + 1=p_ic_iq_i, …, a_n = g where p_i, q_i∈ S and (b_i, c_i) or (c_i, b_i)∈ R for all i. By assumption, a_i=_ρ a_0=f, or equivalently, a_i∈ f/ρ for every i. Thus, since f/ρ is a subsemigroup of S and N is the minimum non-empty ideal of f/ρ (<ref><ref>), fa_if∈ N for all i. We will show that fa_kf ∈ N' for every k by induction. Certainly, a_0 = f∈ N' since f is the identity of G and N'≤ G. Assume that fa_kf∈ N' for all k ≤ i To prove that fa_i + 1f∈ N', it suffices to show that (fa_if)(fa_i+1f)^-1∈ N'. 
But (fa_if)(fa_i+1f)^-1∈ N, and is thus ℋ-related to f, hence (fa_if)(fa_i+1f)^-1 = (fp_ib_iq_if)(fq_i^-1c_i^-1p_i^-1f) = fp_ib_ic_i^-1p_i^-1f by <ref>. If t = fp_ib_ic_i^-1p_i^-1f, then t = (fa_if)(fa_i+1f)^-1∈ N≤ G since fa_if, fa_i + 1f ∈ N. By assumption fa_if∈ N'≤ G, and f is the identity of the group G. Hence (fa_if)^-1(fa_if)=f and so f = (fa_if)^-1(fa_if) = (b_iq_if)^-1(p_i^-1fp_i)(b_iq_if). This shows that f and p_i^-1fp_i belong to the same strongly connected component of Γ_X, and so there exists j∈{1, …, k} such that s_j^-1fs_j = p_i^-1fp_i. Clearly, s_js_j^-1f = fs_js_j^-1f = fs_js_j^-1s_js_j^-1f = s_js_j^-1fs_js_j^-1f = f^ 2 = f, and fp_ip_i^-1 = f[We may assume without loss of generality that S is an inverse subsemigroup of I_n. Since fp_ip_i^-1 = f|_(p_i). But (f) = (s_js_j^-1fs_js_j^-1) = (s_j^-1fs_j) = (p_i^-1fp_i), and so (p_i) ∩(f)=(p_i^-1)∩(f) = (f) (otherwise (p_i^-1fp_i) < (f)). In other words (f)⊆(p_i) and so fp_ip_i^-1 = f|_(p_i) = f.]. If u = fp_is_j^-1f, then s_jp_i^-1· fp_is_j^-1f = s_js_j^-1 fs_js_j^-1f = s_js_j^-1f = f and fp_is_j^-1f· s_jp_i^-1 = fp_ip_i^-1f p_i p_i^-1 = fp_ip_i^-1 = f. In particular, (u, f)∈ℋ, and so u∈ G. Since t∈ G also, u^-1tu∈ N' if and only if t∈ N' since N' is a normal subgroup of G. But u^-1tu = (fp_is_j^-1f)^-1· (fp_ib_ic_i^-1p_i^-1f)· (fp_is_j^-1f) = fs_j· p_i^-1fp_i · b_ic_i^-1· p_i^-1fp_i · s_j^-1f = fs_j· s_j^-1fs_j · b_ic_i^-1· s_j^-1fs_j · s_j^-1f s_j^-1fs_j = p_i^-1fp_i = fs_j b_ic_i^-1s_j^-1f∈ N'. Hence t∈ N', and so (fa_if)(fa_i+1f)^-1∈ N', and so fa_i + 1f∈ N', as required. We have shown that fa_if ∈ N' for all i, and so, in particular, fa_nf = fgf = g∈ N'. The algorithm for computing the normal subgroups component <ref> of the quotient data structure is to * find one e∈ E(S) for every strongly connected component of Γ_X/(ρ); * for each representative e∈ E(S) from <ref>, set N to be the trivial group, and iterate through the generating set given in <ref> for H_f^S∩ (f/ρ) where f is the meet of e/(ρ), taking the normal closure of N and each generator. Next, we continue the example started in <ref>, and compute the generating sets for the normal subgroups in the quotient using <ref>. We compute the normal subgroup component <ref> of the quotient data structure for the the least congruence ρ on the symmetric inverse monoid I_4 containing the pair: (a, b) := ( (1)(2)(3), (1 2 3) ). Firstly the graph Γ_X/(ρ) given in <ref> clearly has 3 strongly connected components, and so we are attempting to compute 3 normal subgroups. If the representatives chosen in <ref> are 1_I_4 = (1)(2)(3)(4), (1)(2)(3), and ∅, then the normal subgroups found in <ref> are the trivial group, (1 3 2), and the trivial group, respectively. Thus the quotient groups are (up to isomorphism) the symmetric group on {1, 2, 3, 4}, the cyclic group of order 2, and the trivial group, respectively. At this point, the number of classes of the congruence ρ is just the size of the quotient semigroup which equals 4!·1^2 + 2 · 4 ^ 2 + 1 · 1 ^ 2 = 57 where each summand equals |G/N| multiplied by the size of the strongly connected component of e to the power of 2; see <cit.> for more details about how to use the quotient data structure to compute with the quotient inverse semigroup S/ρ. § COMPUTING A CLASS OF A CONGRUENCE In this section we consider the question of how to enumerate the elements of x/ρ for an arbitrary x∈ S. One consequence of this description will be a means of enumerating the elements of the kernel (ρ) of the congruence ρ. 
Throughout this section we fix x∈ S with the aim of enumerating x/ρ. We require the following definitions, which are the key ingredients in this section: U_x = ⋃ D_x / ρ^S / ρ = y∈ S(y/ρ, x/ρ)∈^S/ρ, μ, ν S → E(S) defined by (y) μ = min((yy^-1)/(ρ)) and (y)ν = min ((y^-1y)/ (ρ)), and, finally, ϕ_x U_x → S defined by (y)ϕ_x = (y) μ· y · (y) ν. The following lemma collects various properties of U_x and ϕ_x that are used repeatedly throughout this section. * If y=_ρ z, then y=_ρ (y)ϕ_x=_ρ (z)ϕ_x; * (ϕ_x)⊆ U_x and (y) ϕ_x = y for all y ∈(ϕ_x); * If y, z∈ U_x, then (y/ρ, z/ρ)∈ ^ S/ρ; * If y∈ U_x and (y, z)∈^S, then z ∈ U_x; * If y∈ U_x, then (y)ϕ_x ≤ y; * If y, z∈ U_x and y≤ z, then y=_ρ z; * If y∈ U_x, then (y)ϕ_x· ((y)ϕ_x)^-1 = (y)μ and ((y)ϕ_x)^-1· (y)ϕ_x = (y)ν. * If x, y∈ U_x are such that (x)μ =_ρ(y)μ, then (x)μ = (y)μ. Similarly, if (x)ν =_ρ(y)ν, then (x)ν = (y)ν. (a) Suppose that y=_ρ z. Then by definition (y)ϕ_x = (y)μ· y · (y)ν. Since (y)μ=_ρ yy^-1 and (y)ν=_ρ y^-1y, it follows that (y)ϕ_x=(y)μ· y · (y)ν=_ρ yy^-1· y · y^-1y = y Similarly (z)ϕ_x=_ρ z, hence (z)ϕ_x=_ρ z =_ρ y =_ρ (y)ϕ_x as required. (b) Let y∈(ϕ_x). Then there exists z∈ U_x such that (z)ϕ_x = y. By part (a), (z)ϕ_x=_ρ z and so y=(z)ϕ_x =_ρ z. In particular, y/ρ = z/ρ and so (y/ρ, x/ρ) = (z/ρ, x/ρ) ∈𝒟^S/ρ, since z∈ U_x. This shows that y∈ U_x. Next, y=_ρ z implies that (y)μ = (z)μ (<ref>(h)). A similar argument shows that (y)ν=(z)ν. In particular, since (y)μ=(z)μ and (y)ν= (z)ν are idempotents, (z)μ = (y)μ· (z)μ and (z)ν = (z)ν· (y)ν. Hence y=(z)ϕ_x =(z)μ· z · (z)ν = (y)μ· (z)μ· z · (z)ν· (y)ν = (y)μ· (z)ϕ_x · (y)ν = (y)μ· y · (y)ν = (y)ϕ_x. (c) If y, z∈ U_x, then (y/ρ, x/ρ), (z/ρ, x/ρ)∈ ^S/ρ by definition. Thus, since ρ is transitive, (y/ρ, z/ρ)∈ ^S/ρ. (d) If (y, z)∈ ^S, then (y/ρ, z/ρ)∈ ^ S/ρ. Since y∈ U_x, (y/ρ, x/ρ), and so, again by the transitivity of ρ, (z/ρ, x/ρ)∈ ^ S/ρ. Therefore z∈ U_x. (e) This follows immediately from the definition of ϕ_x, since (z)ϕ_x = (z)μ· z· (z)ν≤ z. (f) Since y≤ z, it follows that x/ρ≤ y/ρ. But by assumption y, z∈ U_x and so (y/ρ, z/ρ)∈ ^ S/ρ, by part (c). Therefore y/ρ =z/ρ, as required. (g) By part (a), (y)ϕ_x((y)ϕ_x)^-1=_ρ yy^-1 and so (y)ϕ_x((y)ϕ_x)^-1≥ (y)μ, since (y)μ is the minimum of its trace class. On the other hand, (y)μ· (y)ϕ_x = (y)ϕ_x and so (y)μ (y)ϕ_x((y)ϕ_x)^-1 = (y)ϕ_x((y)ϕ_x)^-1. Therefore (y)ϕ_x((y)ϕ_x)^-1≤ (y)μ. The proof for ν follows by symmetry. (h) By definition both (x)μ and (y)μ are the minimum elements in their trace classes. Since (x)μ =_ρ(y)μ, these trace classes coincide, and so (x)μ = (y)μ. The proof for ν is the same. For the next lemma it will be convenient to use the languages of groupoids. The set U_x naturally forms a groupoid with ∗ U_x× U_x → U_x defined by y ∗ z = yz whenever y, z, yz∈ U_x and yz ^S y ^ S z and where the inverse operation coincides with that on S. The connected components of U_x are just the -classes of S; for further details see <cit.>. If u∈ U_x, then ϕ_x|_D_u^S is a functor (or equivalently a groupoid morphism). Suppose that y, z∈ D_u^S are such that yz∈ D_u^S. It suffices to show that (y)ϕ_x· (z)ϕ_x = (yz)ϕ_x. Since yz ^S u, y^-1y = z z ^ -1 (by the Location <ref>), and so (z)μ= (y)ν. It follows that (y)ϕ_x· (z)ϕ_x = ((y)μ· y· (y)ν)((z)μ· z· (z)ν) = (y)μ· y· (y)ν· z· (z)ν since (y)ν = (z)μ∈ E(S) = (y)μ· yz· (z)ν by <ref>. Since (yz)(yz) ^-1 = yzz^-1y^-1 = yy^-1yy^-1 = yy ^-1, it follows that (y)μ = (yz)μ and similarly, (z)ν = (yz)ν. Therefore (y)ϕ_x· (z)ϕ_x = (y)μ· yz· (z)ν = (yz)μ· yz · (yz)ν = (yz)ϕ_x. 
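Since a (ρ)-class of idempotents is closed under multiplication, its minimum is simply the product of all of its elements, and the maps μ, ν and ϕ_x can be prototyped directly. The sketch below does this; the helpers trace_class_of (returning the trace class of an idempotent), mult and inv are assumptions made for illustration and are not part of the data structure itself.

```python
from functools import reduce

def class_min(cls, mult):
    # A trace class of idempotents is closed under products, so its
    # minimum in the natural partial order is the product of its members.
    return reduce(mult, cls)

def make_phi(trace_class_of, mult, inv):
    # Build the maps mu, nu and phi_x of this section.
    def mu(y):
        return class_min(trace_class_of(mult(y, inv(y))), mult)

    def nu(y):
        return class_min(trace_class_of(mult(inv(y), y)), mult)

    def phi(y):
        # (y)phi_x = (y)mu * y * (y)nu
        return mult(mult(mu(y), y), nu(y))

    return mu, nu, phi
```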
* If y∈(ϕ_x) and z∈ U_x is such that z≤ y, then y = z. * If e=_(ρ) f, e∈(ϕ_x) is an idempotent, and f∈ U_x is also an idempotent, then e≤ f. Suppose that y∈(ϕ_x) and z∈ U_x are such that z≤ y. If z < y, then zz^-1 < yy ^-1. Hence it suffices to show that yy^-1 is minimal in U_x. Since y∈(ϕ_x), <ref>(a) shows that (y)ϕ_x = y∈ U_x. We begin by showing that yy^-1 is the minimum in its trace class; this will establish part (b). By the definition of μ, it suffices to show that yy^-1 = (u)μ. This follows from <ref>(g). By the definition of ϕ_x, y = (u)μ· u· (u)ν and so y y ^ -1 = ((u)μ· u· (u)ν)((u)μ· u· (u)ν)^-1 = (u)μ· u· (u)ν· ((u)ν)^-1· u^-1· ((u)μ)^-1≤ (u)μ· (u)μ ^ -1 = (u)μ, the last equality holds because (u)μ is an idempotent. For the converse inequality, by <ref>(a), y= (u)ϕ_x=_ρ u, and so yy^-1=_ρ uu^-1=_ρ(u)μ. So, yy^-1=_ρ (u)μ, and since (u)μ is the minimum in its trace class, yy^-1≥ (u)μ. We have shown that yy^-1 = (u)μ, meaning that yy^-1 is the minimum in its trace class. If z∈ U_x and z ≤ yy^-1, then z is an idempotent, and z/ρ≤ yy^-1/ρ (homomorphisms preserve the natural partial order). Since y, z∈ U_x, it follows that (y/ρ, z/ρ)∈^S/ρ, by <ref>(c). Since (y, yy^-1)∈^S, and homomorphisms preserve Green's -relation, (y/ρ, yy^-1/ρ)∈ ^S/ρ. Thus (yy^-1/ρ, z/ρ)∈^S/ρ and z/ρ≤ yy^-1/ρ, which implies that z/ρ = yy^-1/ρ. Hence z = yy^-1 by <ref>(h) and so yy^-1 is minimal in U_x, as required. The set (ϕ_x) is a ^S-class. Suppose that y∈(ϕ_x). We will show that D_y^S = (ϕ_x). (⊇) Suppose that z∈(ϕ_x). Then y, z∈ U_x (<ref>(b)) and so (y/ρ, z/ρ)∈^S/ρ (<ref>(c)). Hence there exists s∈ S such that (s/ρ, y/ρ)∈^S/ρ, zz^-1 =_ρ ss^-1, and yy^-1=_ρ s^-1s. This implies that s∈ U_x and so (s)ϕ_x ∈(ϕ_x). We will show that (s)ϕ_x ((s)ϕ_x)^-1 = zz^-1 and ((s)ϕ_x)^-1(s)ϕ_x = yy^-1. It will follow from this that (z, (s)ϕ_x), ((s)ϕ_x, y)∈ ^ S implying that (z, y)∈ ^S which will conclude the proof. By <ref>(a), s=_ρ (s)ϕ_x and so (s)ϕ_x((s)ϕ_x)^-1 =_ρ ss ^-1 =_ρ zz ^-1. Since z∈(ϕ_x) ⊆ U_x, it follows from the definition of U_x that zz ^ -1∈ U_x also. <ref>(b) implies that z = (z)ϕ_x and since ϕ_x is a functor (<ref>), (zz^-1)ϕ_x= (z)ϕ_x· (z^-1)ϕ_x = (z)ϕ_x· ((z)ϕ_x) ^-1 = zz^-1 (the second to last equality holds because functors preserve inverses). Hence zz^-1∈(ϕ_x), and similarly, (s)ϕ_x ((s)ϕ_x)^-1∈(ϕ_x). But (s)ϕ_x((s)ϕ_x)^-1 =_ρ zz ^-1, and so <ref>(g,h) implies that both zz^-1=(s)ϕ_x ((s)ϕ_x)^-1. By symmetry yy ^-1 = ((s)ϕ_x)^-1(s)ϕ_x, as required. (⊆) If z∈ D_y^S, then z∈ U_x = (ϕ_x), by <ref>(d). It follows that (z)ϕ_x≤ z (by <ref>(e)) and so (assuming without loss of generality that S is an inverse semigroup of partial perms) ((z)ϕ_x) ≤(z) = (y) = ((y)ϕ_x) = ((z)ϕ_x), (the last equality holds since (ϕ_x)⊆ D_y^S). Hence (z)ϕ_x = z and so z∈(ϕ_x). If y, z∈ S and (y, z)∈^S, then in the next theorem we will denote the intersection of the -class R_y^S of y and the Ł-class L_z^S of z by H_y, z. We can finally state and prove the main result in this section which will allow us to compute the elements in the congruence class x/ρ∩ H_e, f where e, f∈ E(S), as a translate of the preimage of a coset of a normal subgroup under the functor ϕ_x. If e, f∈ E(S) are such that (e, f)∈^S and x∈ S, then there exists s ∈ S such that H_e, f∩ x/ρ = ((H_(e)ϕ_x, (e)ϕ_x∩ e/ρ)xs^-1) ϕ_x|_H_e, e^-1· s, where e=_ρ xx^-1, f=_ρ x^-1x, and s^-1es = f. Before giving the proof of <ref> note that H_(e)ϕ_x, (e)ϕ_x∩ e/ρ is a normal subgroup of H_(e)ϕ_x, (e)ϕ_x by <ref>(a). 
Hence (H_(e)ϕ_x, (e)ϕ_x∩ e/ρ)xs^-1 is a coset of a normal subgroup. Since H_e, e is a group, and ϕ_x is a functor (<ref>), it follows that ϕ_x|_H_e, e H_e, e→ H_(e)ϕ_x, (e)ϕ_x is a group homomorphism. The ideas of the proof are visualised in <ref>. We start by noting that: fs^-1s = s^-1ess^-1s = s^-1es= f, which will be useful in both parts of the proof below. (⊆) Let t∈ H_e, f∩ x/ρ. Since t∈ H_e, f, ts^-1s = t (Green's Lemma <cit.>) and so ts^-1∈ H_e, e. In particular, (ts^-1, e)∈^S and e =_ρxx^-1∈ U_x and so e ∈ U_x. Thus ts^-1∈ U_x (<ref>(d)) and so it suffices to show that (ts^-1)ϕ_x ∈(H_(e)ϕ_x, (e)ϕ_x∩ e/ρ)xs^-1. We start by showing that (ts^-1)ϕ_x· sx^-1 =_ρ e: (ts^-1)ϕ_x · sx^-1 =_ρ (ts^-1)sx^-1 ts^-1 =_ρ (ts^-1)ϕ_x by <ref>(a) =_ρ (xs^-1)sx^-1 t=_ρ x by assumption =_ρ (xx^-1xs^-1)sx^-1 =_ρ (xfs^-1ss^-1)sx^-1 fs^-1s =_ρ x^-1x =_ρ xfs^-1sx^-1 =_ρ xfx^-1 fs^-1s = f =_ρ xx^-1xx^-1 f=_ρ x^-1x =_ρ xx^-1 =_ρ e. It remains to show that (ts^-1)ϕ_x ∈ H_(e)ϕ_x, (e)ϕ_xxs^-1, or, by Green's Lemma, equivalently that (ts^-1)ϕ_x sx^-1∈ H_(e)ϕ_x, (e)ϕ_x. In order to do this, we start by showing that sx^ -1∈ U_x. Since (sx^-1, sx^-1xs^-1)∈^S, it follows that (sx^-1/ρ, sx^-1xs^-1/ρ)∈ ^S/ρ. But sx^-1xs^-1=_ρsfs^-1= ss^-1ess^-1=ess^-1=e=_ρxx^-1 and so (sx^-1xs^-1/ρ, x/ρ)∈ ^S/ρ. By transitivity, (sx^-1/ρ, x/ρ)∈ ^S/ρ and so sx^-1∈ U_x (<ref>(d)). By the definitions of μ and ν: (ts^-1)ν =_ρ (ts^-1)^-1ts^-1 = st^-1ts =_ρ sx^-1xs^-1 = sx^-1(sx^-1)^-1=_ρ (sx^-1)μ so (ts^-1)ν=(sx^-1)μ (<ref>(h)). On the other hand, ts^-1st^-1≤ tt^-1 and tt^-1∈ U_x since t=_ρx. If ts^-1st^-1∈ U_x, then (ts^-1)μ = ts^-1st^-1=_(ρ) tt^-1=(t)μ (<ref>(f)). To show that ts^-1st^-1∈ U_x it suffices to show that ts^-1st^-1 is ρ-related to an element of U_x: ts^-1st^-1 = ts^-1st^-1tt^-1 =_ρ ts^-1sx^-1xt^-1 t^-1t =_ρx^-1x =_ρ ts^-1sft^-1 x^-1x=_ρ f = ts^-1ss^-1est^-1 s^-1es=f = ts^-1est^-1 =_ρ tft^-1 s^-1es=f =_ρ tx^-1xt^-1 x^-1x=_ρ f =_ρ xx^-1xx^-1 t=_ρx =_ρ xx^-1∈ U_x. Hence (ts^-1)μ=(t)μ (<ref>(h)). By the assumption at the start of the proof, t=_ρ x and so tt^-1=_ρxx^-1, and so (t)μ =(xx^-1)μ. Since e∈ E(S), (e)ϕ_x = (e)μ =_ρ(xx^-1)μ, and again by <ref>(h), (e)μ = (xx^-1)μ. We have shown that (ts^-1)μ = (e)ϕ_x. By a similar argument, (sx^-1)ν = (x^-1)ν = (x)μ = (e)ϕ_x. It follows that (ts^-1)ϕ_x · sx^-1 = (ts^-1)ϕ_x · (ts^-1)ν· sx^-1 by the definition of ϕ_x = (ts^-1)ϕ_x · (sx^-1)μ· sx^-1 by (<ref>) = (ts^-1)ϕ_x · (sx^-1)μ· sx^-1·(sx^-1)ν by <ref> = (ts^-1)ϕ_x · (sx^-1)ϕ_x sx^-1∈ U_x. We set a = (ts^-1)ϕ_x and b = (sx^-1)ϕ_x. By (<ref>), (ts^-1)ν = (sx^-1)μ and so by <ref>(g), a^-1a = bb^-1 and so (ab, a) ∈^S. The Location <ref> then implies that ab∈ H_aa^-1, b^-1b. But aa^-1 = (ts^-1)ϕ_x ((ts^-1)ϕ_x) ^-1 = (ts^-1)μ by <ref>(g) = (e)ϕ_x by (<ref>). Similarly, b^-1b = (sx^-1)ν = (e)ϕ_x. Whence (ts^-1)ϕ_x · sx^-1 = (ts^-1)ϕ_x · (sx^-1)ϕ_x = ab∈ H_aa^-1, b^-1b = H_(e)ϕ_x, (e)ϕ_x, as required. (⊇) Let t ∈((H_(e)ϕ_x, (e)ϕ_x∩ e/ρ)xs^-1) ϕ_x|_H_e, e^-1· s be arbitrary. We must show that t∈ H_e, f and t∈ x/ρ. Since t∈(ϕ_x|_H_e, e)s, there exists h∈ H_e, e such that t = hs∈ H_e, es = H_e, f. It remains to prove that t=_ρ x: t = hs where h∈((H_(e)ϕ_x, (e)ϕ_x∩ e/ρ)xs^-1) ϕ_x|_H_e, e^-1 =_ρ (h)ϕ_xs by <ref>(a) =_ρ exs^-1s by the choice of h = exx^-1xs^-1s =_ρ exfs^-1s f=_ρ x^-1x =exf fs^-1s = f =_ρ xx^-1xx^-1x =x, as required. 
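For small examples the right-hand side of this formula can be evaluated directly, which gives a convenient sanity check. The sketch below does so naively, assuming set-valued helpers H(a, b) (the intersection of the ℛ-class of a with the Ł-class of b), rho_class (congruence classes of ρ) and the map phi built as in the previous section; none of these helpers is part of the data structure itself, and the practical procedure is summarised next.

```python
def coset_description(e, f, x, s, phi, H, rho_class, mult, inv):
    # Evaluates ((H_{(e)phi,(e)phi} ∩ e/rho) x s^{-1}) (phi_x|_{H_{e,e}})^{-1} s
    # by brute force, where e =_rho x x^{-1}, f =_rho x^{-1} x and s^{-1} e s = f.
    fe = phi(e)                                      # (e)phi_x
    K = H(fe, fe) & rho_class(e)                     # normal subgroup of H_{(e)phi,(e)phi}
    coset = {mult(mult(k, x), inv(s)) for k in K}    # K * x * s^{-1}
    preimage = {h for h in H(e, e) if phi(h) in coset}
    return {mult(h, s) for h in preimage}            # equals H_{e,f} ∩ x/rho
```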
The algorithm for iterating through the elements of the set x/ρ is then as follows: * determine the data structure for the semigroup S consisting of: the generating set X, the word graph Γ_X, the strongly connected components of Γ_X; and one group $̋-class per strongly connected component ofΓ_Xusing the algorithms described in <cit.>; * determine the data structure for the quotientS/ρconsisting of: the generating setX; the quotient word graphΓ_X/(ρ); the strongly connected components ofΓ_X/(ρ); and the quotient groupsG/Nusing the algorithms described in <ref> and <ref>; * for every pair{e, f}of idempotents where(e,xx^-1), (f, x^-1x) ∈(ρ)andeandfbelong to the same strongly connected component ofΓ_Xdetermine the setH_e, f∩ x/ρusing <ref>. We continue <ref> by computing x/ρ where x=[1 2 4] (3)∈ I_4. Steps <ref> and <ref> were covered in <ref> and <ref>, respectively. For step <ref>, we iterate though all the pairs of idempotents e, f such that e=_ρ xx^-1=(1)(2)(3) and f=_ρ x^-1x=(2)(3)(4). In this case, there is only one such pair when e = (1)(2)(3) and f = (2)(3)(4). We then compute ((H_(e)ϕ_x, (e)ϕ_x∩ e/ρ)xs^-1) ϕ_x|_H_e, e^-1· s where s∈ I_4 is any fixed element such that s^-1es=f; such as s=[1 2 3 4]. Since (ρ) equals Δ_D_x∩ E(S)=(d, d)d∈ D_x∩ E(S) when restricted to the idempotents of the 𝒟-class of x, it follows that ϕ_x is the identity function. Thus ((H_e, e∩ e/ρ)xs^-1) · s=(H_e, e∩ e/ρ)x. We calculated in the previous example that H_e, e∩ e/ρ is the alternating group on {1, 2, 3}, that is {(1)(2)(3), (1 2 3), (1 3 2)}. Translating this by x=[1 2 4] (3) gives x/ρ = {(1)(2)(3)·[1 2 4] (3), (1 2 3)·[1 2 4] (3), (1 3 2)·[1 2 4] (3)}={[1 2 4] (3), [1 4] (2 3), [1 3 4] (2)}. § THE MAXIMUM IDEMPOTENT-SEPARATING CONGRUENCE In this section we give a method for computing the maximum idempotent-separating congruence on a finite inverse subsemigroup of a symmetric inverse monoid. We achieve this using a description of the maximum idempotent-separating congruence via centralisers. We begin with the definition of a centraliser. IfSis a semigroup andAis a subset ofS, then the centraliser ofAinSis the set C_S(A) = s∈ S sa = as for all a ∈ A. Let μ be the congruence defined by a=_μ b if and only if a e a^-1 = b e b^-1 for all e ∈ E(S). The congruence μ is the maximum idempotent-separating congruence on S. We have that(μ) = C_S(E(S))and sinceμis the maximum idempotent-separating congruence on an inverse semigroupS,(μ) = Δ_E(S). For the remainder of this section, we discuss how to computeC_S(E(S))whenS≤ I_n. IfSis an inverse semigroup of partial permutations of degreen, andX⊆{1, …, n}, then the (setwise) stabiliser ofXwith respect toSis _S(X) = g∈ S(X)g = X≤ S. If S≤ I_n is an inverse semigroup, then C(E(S)) = ⋃_e∈ E(S)⋂_f≤ e_S∩Sym((e))((f)). Where Sym((e)) denotes the group of permutations of (e), and f is taken to be in S. (⊆) Let s∈ C(E(S)) and e= ss^-1. As se=es=s and (e)=(s) it follows that (e)=(s)=(se)=((e))s^-1. Thus s^-1 bijectively maps (e)=(s) to itself. So s does too, and so s∈ S∩Sym((e)). Let f≤ e. To conclude that s belongs to the right side of the equality in the statement, it suffices to show that ((f))s=(f). By assumption fs=sf, so (f)s=(fs)=(sf)=(s)∩(f)=(e)∩(f)=(f). (⊇) Let s be an element of the right hand side of the equality in the statement of the proposition. Then there exists e∈ E(S) such that for all f≤ e, we have s∈_S∩Sym((e))((f)). In particular, this holds when f=e, and so s∈ S∩Sym((e)). Let g∈ E(S) be arbitrary. We need to show that s g=g s. 
Since s is an element of a subgroup with identity e, it follows that ss^-1=s^-1s=e. If we define f= eg, then as f ≤ e, ((f))s=(f). This implies that ((f))s^-1=(f) and so g s = g es = f s = s |_(f) = s |_(f)s^-1 = sf = seg = sg. If A⊆𝒫(X) for some set X, then we say that A is a boolean algebra (on X) if A is closed under taking finite (possibly empty) unions, and is also closed under taking complements in X. Each boolean algebra is partially ordered by ⊆ and contains the empty set, which is called the 0 of the algebra. The complement of 0 (the universal set) is similarly called the 1. Note that this is consistent with standard meet semilattice notation. If Y ⊆ X we write Y^c to denote the complement of Y in X. We say that an element of a boolean algebra is an atom if it is a minimal non-zero element. If B is a boolean algebra, then we define A(B) to be the set of atoms of B. For any finite boolean algebra B, B= ∪ XX⊆ A(B). If S≤ I_n is an inverse semigroup, then we define B(S)≤𝒫({1, 2, …, n}) to be the least boolean algebra containing the set of domains (or equivalently images) of the elements of S, noting that such a boolean algebra exists as the intersection of two boolean algebras is always a boolean algebra. If S≤ I_n is an inverse semigroup, then C(E(S))=s∈ S (b)s=b for all b∈ A(B(S)) such that b⊆(s). (⊆) Let s∈ C(E(S)). We must show that (b)s = b for all b∈ A(B(S)) such that b⊆(s). Let b∈ A(B(S)) be such that b⊆(s) and let X = Y⊆{1, …, n}(Y)s⊆ Y and (Y^c)s⊆ Y^c X' =Y⊆{1, …, n}(Y)s⊆ Y and (Y)s^-1⊆ Y. We show that X=X'. Let Y ∈ X. Then (Y)s ⊆ Y and (Y^c)s ⊆ Y^c. So s moves nothing from Y to Y^c and nothing from Y^c to Y, and thus the same must hold for s^-1. In particular, (Y)s^-1⊆ Y and so Y ∈ X' and X ⊆ X'. Now suppose Y ∈ X'. Then (Y)s ⊆ Y and (Y)s^-1⊆ Y. The later implies that s cannot move anything from Y^c into Y and so (Y^c)s ⊆ Y^c and Y ∈ X. Thus X' ⊆ X and so X' = X. Note that X is a boolean algebra, as from the definition of X it is closed under complements, and from the definition of X' it is closed under unions. Let D = (t)t∈ S. By definition, the least boolean algebra containing D is B(S). We will show that D⊆ X. This will be sufficient because, together with fact that X is a boolean algebra, this implies that B(S)⊆ X, which in turn implies that (b)s⊆ b. Since b is an atom (b)s cannot be a proper subset of b. So let d∈ D be arbitrary, and let f_d∈ S be an idempotent with (f_d)=d. As s∈ C(E(S)), we have sf_d=f_ds. The image of sf_d is d∩(s) and the image of f_ds is (d)s. Thus (d)s=d∩(s) and so (d)s⊆ d. Since s^-1∈ C(E(S)), we similarly get that (d)s^-1⊆ d. It follows that d∈ X, as required. (⊇) Let s∈ S be such that for all b∈ A(B(S)) with b⊆(s), we have (b)s=b. Let e∈ S be an idempotent. We will show that se=es. Let b_1, …, b_k∈ A(B(S)) be distinct such that (e)∩(s)= b_1∪…∪ b_k. Note that, from the assumption on s, (b_i)s=b_i for all 1≤ i≤ k so (e)∩(s)= b_1∪…∪ b_k=(e)∩(s). For all x∈{1, …, n} we have that ({x})es ={[ ∅ if x∉ b_1∪…∪ b_k; {(x)s} if x∈ b_1∪…∪ b_k; ]. =({x})se. Therefore, se=es and so s∈ C(E(S)), as required. We compute the maximum idempotent-separating congruence μ of the semigroup I_4. The first step is to construct C(E(I_4)) using <ref>. The set of domains of elements of I_4 is just 𝒫(I_4), and so B(I_4) = 𝒫({1, 2, 3, 4}). It follows that A(B(I_4)) is the set of singleton subsets of {1, 2, 3, 4}. From <ref>, it follows that C(E(I_4)) = s ∈ I_4(i)s = i for all i ∈{1, 2, 3, 4} such that i ∈(s). 
This implies that C(E(I_4)) is precisely the set of elements of I_4 which act as the identity on their domains, which is just E(I_4), and so the kernel of μ is C(E(I_4)) = E(I_4). Since μ is idempotent-separating, we already know that its trace is Δ_E(S), and so we have computed the kernel and trace of μ, which together fully describe the congruence. In this case, the kernel and trace equal those of the trivial congruence Δ_S, and so μ = Δ_S. § ACKNOWLEDGEMENTS The authors were supported by a Heilbronn Institute for Mathematical Research Small Grant during part of this work. The second named author was supported by the Heilbronn Institute for Mathematical Research during this work. The authors would also like to thank the University of Manchester for hosting them during part of the work on this paper.
http://arxiv.org/abs/2406.08305v1
20240612150450
Large Language Model(LLM) assisted End-to-End Network Health Management based on Multi-Scale Semanticization
[ "Fengxiao Tang", "Xiaonan Wang", "Xun Yuan", "Linfeng Luo", "Ming Zhao", "Nei Kato" ]
cs.NI
[ "cs.NI", "eess.SP" ]
Large Language Model(LLM) assisted End-to-End Network Health Management based on Multi-Scale Semanticization Fengxiao Tang Central South University tangfengxiao@csu.edu.cn Xiaonan Wang Xinjiang University 107552304984@stu.xju.edu.cn Xun Yuan Central South University yuan.xun@csu.edu.cn Linfeng Luo Central South University luolinfeng@csu.edu.cn Ming Zhao Central South University meanzhao@csu.edu.cn Nei Kato Tohoku University kato@it.is.tohoku.ac.jp ========================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Network device and system health management is the foundation of modern network operations and maintenance. Traditional health management methods, relying on expert identification or simple rule-based algorithms, struggle to cope with the dynamic heterogeneous networks (DHNs) environment. Moreover, current state-of-the-art distributed anomaly detection methods, which utilize specific machine learning techniques, lack multi-scale adaptivity for heterogeneous device information, resulting in unsatisfactory diagnostic accuracy for DHNs. In this paper, we develop an LLM-assisted end-to-end intelligent network health management framework. The framework first proposes a Multi-Scale Semanticized Anomaly Detection Model (MSADM), incorporating semantic rule trees with an attention mechanism to address the multi-scale anomaly detection problem in DHNs. Secondly, a chain-of-thought-based large language model is embedded in downstream to adaptively analyze the fault detection results and produce an analysis report with detailed fault information and optimization strategies. Experimental results show that the accuracy of our proposed MSADM for heterogeneous network entity anomaly detection is as high as 91.31%. § INTRODUCTION With the development of communication technology and unmanned control technology towards B5G/6G, dynamic heterogeneous networks (DHNs) <cit.> play an increasingly important role in many key areas such as emergency communication, transportation, and military administration <cit.>. As shown in Fig. <ref>, DHNs consist of various types of communication devices such as base stations, drones, and mobile phones, which have been deployed in harsh and dynamically changing environments for long periods <cit.>, are prone to various anomalies and faults <cit.>. Therefore, to enhance the availability and reliability of DHNs, it is essential to perform timely health management to detect network anomalies and diagnose network faults <cit.>. Modern health management is a comprehensive analysis technique that not only presents and visualizes anomalous data but also digs the fault type and reasons behind the abnormal data in the whole network, thus a series of decisions can be made to mitigate the problem <cit.>. A typical health management life cycle includes at least three phases: (1) Anomaly Detection <cit.>: Here, a monitor performs anomaly detection of multivariate time series data ( e.g., packet loss, byte error, etc.). (2) Fault Detection <cit.>: network managers (NMs) assess various aspects of the event and engage in several rounds of communication to pinpoint the cause of the anomaly. 
(3) Mitigation <cit.>: the NMs implement several actions to mitigate the incident and restore the health of the communication service. The accuracy of anomaly detection and fault detection is the foundation of the health management life cycle, however, the increasing variety and dynamicity in DHNs result in two key challenges of health management of DHNs <cit.>: 1. How to accurately infer faults through local information when global information is difficult to obtain in real-time. 2. How to accurately locate faults in heterogeneous devices with differences in information scale and fault mechanisms. The traditional Bayesian-based health management methods are widely used in network fault detection, which establish connections between network anomalies and their root causes for performance diagnosis<cit.>. However, Bayesian methods rely on directed acyclic graphs that lack scalability, making them unsuitable for DHNs. Simultaneously, frequent changes in topology complicate the ability of traditional distributed anomaly detection algorithms to detect local or minor anomalies in DHNs<cit.>. Recently, machine learning-based health management methods have been widely researched and recognized as state-of-the-art algorithms for network fault detection <cit.>. However, those machine learning-based algorithms either relay on global network information or ignore the nonuniformed Key Performance Indicators (KPIs) and state information of heterogeneous nodes. Besides, Those diagnostic algorithms do not cover the complete health management life cycle and still rely on NMs to perform manual troubleshooting to mitigate anomalies after detection, which not only fails to utilize anomaly data efficiently but also significantly increases the time and complexity of anomaly handling. To address the above problems, we developed an LLM-assisted end-to-end intelligent network health management framework. In the framework, we first propose a Multi-Scale Semanticized Anomaly Detection Model (MSADM) to deal with uniformed KPIs and state information problems, and then integrate LLM to perform full life cycle end-to-end health management. Unlike existing models that can only handle specific faults of specific devices, the MSADM incorporates multi-scale semantic rule trees with Transformer to unify and standardize abnormal text reports based on the different abnormal degrees of various nodes. Thus, the MSADM can be implemented in differential entities to automatically identify abnormal communication entities and generate unified and standardized expressions of abnormal information. As shown in Fig. <ref>, to perform end-to-end health management, we integrate LLM in the health management framework to cover the full life cycle and employ MSADM as the facilitating agent for the LLM. This strategic integration facilitates the collection and initial processing of abnormal data, thereby effectively preventing diagnostic errors caused by inconsistent data representations. This preliminary processing also significantly reduces the computational demands on LLM. As shown in Fig. <ref>, the effectiveness of this approach is evident through the detailed diagnostic results generated by LLM. These results succinctly outline the abnormal status and potential causes for each network entity, underscoring the robust capability of our proposed health management program. The main contributions of this paper are summarized as follows: * We propose an end-to-end health management framework for DHNs. 
This framework manages network health through only local and neighboring information and covers the full stages of the health management life cycle, including anomaly detection, fault detection, and mitigation. * We propose a Multi-Scale Semanticized Anomaly Detection Model (MSADM) to deal with uniformed KPIs and state information problems. This model standardizes abnormal information from various DHNs equipment, addressing the inefficiencies inherent in traditional distributed anomaly detection information sharing. * We incorporate LLM into the network health management process to perform a full life cycle of end-to-end health management. By employing the thinking prompt method, LLM not only analyzes abnormal situations but also offers mitigation solutions. § BACKGROUND AND MOTIVATION In this section, we first review the current research status of anomaly detection models. We then identify the shortcomings and defects of existing methods in DHNs health management. Finally, we explore the potential benefits of integrating semantic work into the health management process of wireless heterogeneous networks. §.§ Related Work The traditional anomaly detection algorithm detects anomalies by monitoring wireless measurement data and comparing it with established norms <cit.>. However, this approach overly depends on expert annotations and proves both time-consuming and labor-intensive. Concurrently, researchers also attempt to validate their findings using both simulated and actual data. Yet, these studies typically rely on a single KPI, such as the call drop rate, to classify anomalies, thereby constraining diagnostic precision to a degree <cit.>. The Bayesian-based classification method, extensively explored in <cit.><cit.>, uses probability and graph theories to correlate network anomalies with their root causes. Despite its widespread application, the efficacy of this method significantly hinges on a substantial corpus of historical anomaly data since the causal graphs it generates demand extensive prior knowledge. Moreover, the Bayesian approach faces challenges in scalability and adaptability, struggling to perform well in dynamic, heterogeneous wireless network environments. Machine learning, recognized as a powerful analytical tool, can effectively mine and perceive potential information in data and sharply detect subtle changes in network status and KPIs, thus enabling faster and more precise network anomaly detection <cit.>. Researchers propose a diagnostic method based on a supervised genetic fuzzy algorithm <cit.>. This method employs a genetic algorithm to learn a fuzzy rule base from a combined dataset of simulated and real data containing 72 records. Its accuracy heavily relies on the labeled training set. The Deep Transformer-based temporal anomaly detection model, TranAD <cit.>, incorporates an attention sequence encoder and leverages broader temporal trend knowledge to swiftly conduct anomaly detection. DCdetector <cit.> masters the representation of abnormal samples using a dual attention mechanism and contrastive learning. While machine learning methods have advanced in feature learning and enhanced their generalization capabilities, they face challenges in wireless networks. Abnormalities are sporadic, and scarce abnormal samples make the models prone to overfitting. Moreover, modeling only the entire network fails to adapt to dynamic DHNs. 
Although research on distributed anomaly detection solutions is extensive <cit.>, practical applications suffer due to inconsistent network entity feature representation, weakening detection capabilities <cit.>. Additionally, using machine learning to model each device alone is both time-consuming and labor-intensive. The models also struggle to capture the interactive information of communication devices. Additionally, existing distributed fault detection methods often consider abnormal situations as a whole, which neglects the specific abnormal representation of individual communication entities, thereby complicating the rapid detection of abnormal nodes by NMs. §.§ Problem Statement and Our Objectives Within DHNs, the diverse range of communication devices poses challenges for domain experts in gathering data encompassing all device anomaly types for model training. Furthermore, these models typically lack autonomous learning capabilities. Consequently, the emergence of new communication devices or technologies within the network often detracts from the detection efficacy of the model, leading to performance degradation. In addition to the aforementioned shortcomings, existing anomaly detection research often emphasizes enhancing detection accuracy or model interpretability. However, the comprehensive coverage of the entire health management life cycle is seldom taken into account. For anomalies detected by the model, the prevalent approach involves NMs extracting information and experience from satisfactorily resolved and archived cases (i.e. marked cases) to alleviate the anomalies <cit.>. Undoubtedly, this significantly diminishes the efficiency of anomaly mitigation. We incorporate LLM into the health management life cycle, leveraging its reasoning capabilities to identify the root causes of abnormal situations, thereby furnishing NMs with end-to-end anomaly resolutions. Moreover, LLM's learning capability enables rapid adaptation to new abnormal information from communication entities. To facilitate LLM in gathering anomaly information, we devised MSADM, deployed on communication entities to execute anomaly detection and information collection. Given the distributed deployment of MSADM, our scheme offers entity-level visibility, contrasting with prior distributed anomaly detection models. In the subsequent section, we will elaborate on our solution scheme in detail. § SYSTEM ARCHITECTURE We have introduced an end-to-end health management scheme in DHNs. The Fig. <ref> displays the architecture of this scheme. An essential component of our solution involves processing time-series data from various devices through a rule base to generate a list of statuses with a uniform scale. We will further elaborate on the creation and use of rule base in (Section <ref>). Once we establish the status list with unified scales, our MSADM can pinpoint anomalies using a built-in rule-enhanced transformer time-series classification model (Section <ref>) and create anomaly descriptions by integrating semantic rule trees (Section <ref>). Additionally, we have developed a statement processing structure equipped with prompts to support the LLM in analyzing these anomaly descriptions. This structure aids the LLM in identifying the causes of anomalies and devising mitigation strategies. The LLM’s output will act as the anomaly report for the network, which NMs will use to swiftly address the anomalies and ensure network health (Section <ref>). 
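At a high level, the scheme chains these components together per monitoring window. The following Python sketch shows the intended control flow; every name in it (the entity interface, rule_base, msadm, build_prompt, llm and the list of candidate fault types) is a placeholder assumed for illustration rather than part of a released API.

```python
def manage_network_health(entities, rule_base, msadm, build_prompt, llm, fault_options):
    # End-to-end sketch of the workflow: per-entity rule filtering and
    # MSADM detection, then a single LLM call over the collected reports.
    reports = []
    for entity in entities:
        kpis = entity.collect_kpis()                          # multivariate time series over T
        status = rule_base.status_list(entity.kind, kpis)     # unified-scale KPI statuses
        is_abnormal, fault_type = msadm.detect(kpis, status)
        if is_abnormal:
            reports.append(msadm.describe(entity, status, fault_type))
    if not reports:
        return None                                           # network considered healthy
    prompt = build_prompt(
        context=reports,
        question="What caused the anomaly, and how should it be mitigated?",
        options=fault_options + ["others"],
    )
    return llm(prompt)   # diagnosis and mitigation suggestions for the NMs
```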
Below, we provide a detailed introduction to each part of our scheme. §.§ Construction of Rule Base In this section, we present the packet loss rate (PLR) as an example to illustrate the shortcomings of existing distributed approaches. We compute a positively distributed interval for the average PLR over T for all devices. Next, we insert the average value of each device into the interval, and its distribution appears in Fig. <ref>. The distribution of PLR varies significantly across different devices, and if such a dataset is used for model training, the model will struggle to adapt to this scenario of anomalous performance with multi-scale devices. Fig. <ref> shows the change in anomaly detection accuracy for different devices before and after using the rule base. Next, we will provide a detailed description of the process for designing and using the rule base. We analyze the KPIs <cit.> common to multiple devices within the simulated network and construct the rule base accordingly. A comprehensive list of KPI types and contents is detailed in appendix <ref>. For each device type, we analyze the collected data to ascertain the distribution of each KPI across various dimensions. Subsequently, we compare the actual KPI changes for these devices against their respective distributions to pinpoint anomalous statuses. We represent the network background information within T under normal conditions as 𝒩_normal=(N_f, E_f, T).N_f denotes the attributes of the node itself, expressed as N_f={ f_N1, f_N2, …, f_Nn}, while E_f represents the attributes of the communication link, similar to the node, and is given by E_f={ f_E1, f_E2, …, f_En}. T indicates the period for recording network information. We collected a substantial number of 𝒩_normal for homogeneous entities to enhance our analysis. For each KPI, we calculate its average value (Avg, F_a), fluctuation value (Jitter, F_j), variance (Variance, F_v), and trend (Trend, F_t). The average represents the center or average of the dataset and aids in understanding the general performance level. The fluctuation value represents the dispersion or range of values in the dataset, calculated as the average of the differences between adjacent data points. Variance, the average of the squared differences of each data point from the mean, measures the extent to which individual data points deviate from the mean. Data trends describe the changes in data over time. We can readily compute the numerical distribution diagram of the first three dimensions, thereby getting a set of intervals Dist that depicts the abnormality of the performance indicator. According to its distribution, the interval closer to the peak indicates that the dimensional data aligns more closely with normal data and should be considered more normal. As trend falls into categories such as rise, fall, fluctuation, etc. Its calculation is different. We assess the instantaneous performance and overall trend of the network based on the number of extreme points obtained. The data within T is subdivided into n small periods t. By obtaining the average value within each t, the continuous time data is converted into discrete data values v={ v_1, v_2, …, v_n}. To mitigate noise interference and facilitate smoother data processing, we increase the threshold h during the identification of maximum and minimum values. If a value and its adjacent value differ by no more than one h, we do not classify it as an extreme point. 
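A minimal sketch of this per-KPI feature extraction is given below (NumPy is assumed, as is the form of the rule-base interval boundaries); the mapping from the resulting extrema counts to trend labels follows the rules described next.

```python
import numpy as np

def kpi_dimensions(series, n=10, h=None):
    # Evaluation dimensions for one KPI sampled over the window T.
    v = np.asarray(series, dtype=float)
    n = max(1, min(n, len(v)))
    f_a = v.mean()                      # Avg,      F_a
    f_j = np.abs(np.diff(v)).mean()     # Jitter,   F_j
    f_v = v.var()                       # Variance, F_v
    # Discretise T into n sub-windows and average each one.
    means = np.array([c.mean() for c in np.array_split(v, n)])
    if h is None:
        # Stand-in only: the rule base derives h from the distribution of
        # fluctuation values observed on normal data, not from this series.
        h = np.abs(np.diff(means)).mean()
    maxima = minima = 0
    for i in range(1, n - 1):
        gap = min(abs(means[i] - means[i - 1]), abs(means[i] - means[i + 1]))
        if gap > h and means[i] > means[i - 1] and means[i] > means[i + 1]:
            maxima += 1
        elif gap > h and means[i] < means[i - 1] and means[i] < means[i + 1]:
            minima += 1
    return f_a, f_j, f_v, maxima, minima

def status_level(value, interval_boundaries):
    # interval_boundaries: sorted bucket edges from the rule base; buckets
    # nearer the peak of the normal-data distribution are "more normal".
    return int(np.searchsorted(interval_boundaries, value))
```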
The presence of multiple maximum and minimum values signifies a fluctuating trend. Conversely, a single minimum value suggests a sudden drop, whereas a single maximum value indicates a sudden rise. Regarding the threshold h, we derive it from the distribution of fluctuation values among the n discrete data points under normal conditions. Utilizing this methodology enables us to ascertain the trend status of performance indicators. We apply formula <ref> to determine the number of maximum and minimum values in this set of discrete data points, taking the trend of PLRs as an illustrative example. The formula is expressed as follows: N_extrema = ∑_i=2^n-1 (ϕ_max(i)+ϕ_min(i)), where ϕ_max(i) = 1 if v_i>v_i-1, v_i>v_i+1, and min(|v_i-v_i-1|, |v_i-v_i+1|)>h, and ϕ_max(i) = 0 otherwise; similarly, ϕ_min(i) = 2 if v_i<v_i-1, v_i<v_i+1, and min(|v_i-v_i-1|, |v_i-v_i+1|)>h, and ϕ_min(i) = 0 otherwise. As formula <ref> shows, a point is classified as a maximum when it exceeds both of its neighbours by more than h; the determination of the minima in formula <ref> is analogous. Algorithm <ref> outlines the procedure for computing the four evaluation dimensions from our rule base and obtaining the KPI status list. We have also explored using a machine learning-based classification model to categorize data trends. However, if new features or wireless access technologies emerge in the future and change the performance evaluation data of the KPIs, the dataset would have to be recollected and relabeled to retrain such a model. In contrast, with the rule-based method, we only need to gather sufficient data and update the thresholds using the built-in script to refresh the rule base. The rule base therefore offers superior scalability and adaptability. §.§ Anomaly Information Learning and Detection We have designed an anomaly detection architecture for KPI time-series data in MSADM. Fig. <ref> illustrates the structure of the anomaly detection model. In this framework, the time-series data first passes through a convolutional layer that captures features within a local segment, followed by a two-layer Transformer to fully perceive changes in the KPIs. To enhance the model's robustness, we embed the rule-filtered status list before the data enters the fully connected layers. Because our goal is for MSADM to identify the anomaly type while performing anomaly detection, a four-layer fully connected network is employed: the first two layers learn the interactions in the data, while the latter two handle the detection and classification tasks. The remainder of this section details the specific design choices. For anomaly detection tasks, certain segments of a sequence often carry more of the anomaly information than others. Convolutional Neural Networks (CNNs) improve classification accuracy by extracting local features from time series <cit.>. However, the order of elements and their interdependencies are essential for time-series analysis. While CNNs excel at capturing local features, their capability to model global dependencies is comparatively limited <cit.>. In time-series classification tasks that require a global perspective, this limitation may reduce model accuracy. The Transformer, via its self-attention mechanism, can process sequences of any length <cit.>.
This feature efficiently captures global dependencies within sequences, effectively overcoming CNN's limitations in global modeling. After applying the rule-embedded transformer, we get the attention output a. we incorporate the KPIs status list obtained through rule filtering into the model’s learning dimension. This status list aids the model in better distinguishing between abnormal and normal situations. Therefore, before inputting data into the FCL, we utilize the linear transformation function f_1 to combine the status representation s with the attention output a. The interactive representation of the KPIs statuses with the output of the attention mechanism I_sa can be denoted as: I_sa=f_1(W_1[s, a]+b), where W_1,b are trainable parameters. f_1 is the activation function, and we use ReLU. The fully connected layer gradually transforms the extracted features into classification probabilities that identify anomalies. Simultaneously, the model goes beyond merely outputting these probabilities; it also specifies the type of anomaly detection identifying the abnormal entity. Consequently, we have separated the fully connected layer at the end to acquire both anomaly detection results and anomaly types through distinct linear layers. During training, given the dual tasks of classification and detection, we formulate the actual loss function as the summation of two cross-entropy loss functions. The loss <ref> is as follows: loss=-∑_i^ny_cilog(p_ci)-∑_i^ny_dilog(p_di), where the log function is the softmax activation function, y_ci, y_di is the actual value, p_ci, p_di is the predicted value, and n is the size of the output. §.§ Semantic Rule Tree Structure In the initial section, we obtain a list of statuses S for the KPIs of the anomaly network entities, filtered according to predefined rules. Utilizing these status lists, MSADM generates detailed anomaly information reports for anomalous network entities via a semantic rule tree. We explored logical semantics, distributed semantics, hybrid semantics for the NLG model, and a Knowledge Graph-based replication mechanism for sentence generation<cit.><cit.>. These models necessitate a large amount of high-quality textual training datasets. However, since our method generates sentences from a list of statuses, training becomes highly inefficient following a significant number of events, and the utterances produced are overly slow and filled with superfluous information. Moreover, the dataset requires expansion to train the model whenever a new description of an anomaly manifestation arises. Our goal is to generate timely, accurate, and concise sentences. Therefore, we opted, after careful consideration, to employ a template-like approach to sentence generation. Given the limited variety of statuses in the status list, we chose to select words that correspond to the number of statuses for each KPIs evaluation dimension. Unlike traditional template-based approaches, we use a tree structure with a unique one-to-many configuration that effectively captures the abnormal statuses of KPIs under various evaluation metrics. This structure is not only highly flexible and extensible but also facilitates the future integration of new evaluation metrics and statuses. We employ this tree structure to generate sentences for each KPI, which are then compiled into the comprehensive anomaly reports. As shown in Fig. 
<ref>, we maintain a vocabulary describing KPIs performance metrics and KPIs status levels and a lexicalized tree adjoining grammar (LTAG) representing the lexicality of words. MSADM can utilize the evaluation dimensions of arbitrary KPIs as the root, connect syntactic trees to form the syntactic part of a sentence and construct a sentence tree by positioning fixed vocabulary in the leaves. Meanwhile, to further speed up the sentence generation, we added the pruning operation of words and LTAGs before sentence generation and tried to keep only the words related to the current KPIs. The specific build process is as follows: MSADM traverses the sentence tree starting from the root, categorized by a KPIs type with a list of evaluated dimensions and statuses. Each traversal from the root to the leaves yields a semanticized description corresponding to the current KPIs statuses. Considering that actual KPIs data may be more precise than the status description, we incorporate a judgment call in the sentence generation process. When a KPI exhibits significant abnormalities, we add its actual values, such as mean, variance, and jitter, within the timeframe T to enrich the information content of the sentence. The process is shown by algorithm <ref>. After compiling all abnormal sentence expressions from a node and considering the input constraints of the LLM, we strike a balance between the simplicity of the report and the completeness of the information. We then assess the need to further refine the entity information collected in the sentences based on the report's length and the severity of the KPIs anomalies. We use regular expressions to optimize the report content while ensuring that essential and critical anomaly information is retained. §.§ Information Integration The LLM's powerful natural language processing capabilities allow it to deeply understand semantic information and derive meaningful features and patterns <cit.>. Simultaneously, LLM’s continuous learning ability enables it to adapt and respond effectively to evolving event types, showcasing remarkable scalability and rapid adaptability in complex scenarios <cit.>. In the information integration phase, we compile the abnormal reports of communication entities within the DHNs and generate prompt text language that the LLM can understand, and tailor. LLMs often struggle with complex and in-depth reasoning due to their reliance on patterns in data rather than true understanding, leading to difficulties in consistently generating accurate, contextually appropriate responses that require deep domain knowledge or logical consistency <cit.>. In our integration process, we have bootstrapped the LLM to assist in generating anomaly reports that better align with the requirements of NMs, based on the life cycle of health management. The structure of the prompt is illustrated in Fig. <ref>. We provide the model with context, questions, and options. The context enables the LLM to comprehend network anomaly information. The question addresses the needs of NMs, specifically the types of abnormalities that may occur and the associated mitigation plans. The option constrains the LLM's inference results to the specified types of anomalies, thereby enhancing the accuracy of the inferences. Naturally, the options also include others. 
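The prompt structure above (context, question, options) is straightforward to assemble from the per-entity reports. The sketch below shows one possible wording; the phrasing and the list of fault options are illustrative assumptions, since the six anomaly categories are listed only in the appendix, and the example report strings are hypothetical values rather than outputs of the released MSADM demo.

```python
def build_prompt(entity_reports, fault_options):
    # Assemble the context / question / options prompt described above.
    context = "\n".join(f"- {r}" for r in entity_reports)
    question = (
        "Based on the anomaly reports above, reason step by step about which "
        "fault is most likely for each abnormal entity, and then suggest "
        "concrete mitigation actions for the network managers."
    )
    options = ", ".join(fault_options + ["others"])
    return f"Context:\n{context}\n\nQuestion:\n{question}\n\nOptions: {options}\n"

# Example use with two hypothetical MSADM reports:
prompt = build_prompt(
    ["Node 3: node-level packet loss rate is severely abnormal (mean 0.42); "
     "link-level loss and byte error remain near normal.",
     "Node 7: all KPIs within their normal intervals."],
    ["application crash", "hardware failure", "congestion",
     "interference", "link failure", "mobility-induced outage"],
)
```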
Given that large models face input length limitations, the anomaly context must encompass all relevant information of abnormal entities within the local network at the time of the anomaly, a requirement that significantly exceeds the input capacity of the existing model. Consequently, the anomaly context cannot be directly embedded within the prompt text. We collate the collected contextual information regarding entity anomalies, utilize the abnormal status to pinpoint KPIs exhibiting significant abnormalities within network entities and provide a detailed description of such KPIs. Conversely, KPIs exhibiting minor abnormalities are summarized in a consolidated manner. Furthermore, we incorporate the abnormal detection results obtained in section <ref> into the report, thereby enriching the LLM with additional dimensions of information focus. § EXPERIMENTATION We implemented MSADM using Python 3.7 and Torch 1.13.1. Due to resource constraints, we utilized eight RTX4090 with 24G RAM on Ubuntu 22.04 for data simulation, model training, and testing. We executed the techniques and algorithms by the system architecture (Fig. <ref>). We employ NS-3 <cit.> for network simulation. We simulated four different communication entities by varying the transmit power, bandwidth, and other configurations. Furthermore, we categorize network anomalies into six distinct categories and introduce these anomalies into the simulation. Additionally, we construct four diverse communication devices by adjusting parameters such as node bandwidth and movement speed (see appendix <ref> for anomaly types). Subsequently, based on these devices, we build a heterogeneous network, inject network anomalies, and capture KPI changes. We accumulated a total of nearly 20,000 data entries across seven network scenarios, all of which were labeled. We release an open-source demo and dataset [Demo and Dataset: https://github.com/SmallFlame/MSADM] of MSADM to illustrate this workflow. We will evaluate our scheme from two perspectives to demonstrate its effectiveness. Firstly, we will illustrate the superior accuracy and efficiency of MSADM in anomaly detection models. Secondly, we will present the anomaly report, along with the diagnostic results and scheme descriptions provided by LLM, to verify the feasibility of our approach. §.§ MSADM Evaluations We surveyed several popular time series classification models that utilize various technologies. CL-MPPCA employs both neural networks and probabilistic clustering to enhance anomaly detection performance <cit.>. SR-CNN integrates SR and CNN models to boost the accuracy of time series anomaly detection <cit.>. AnomalyBERT, built on the Transformer architecture, is designed to discern temporal contexts and identify unnatural sequences  <cit.>. LSTM-transformer introduces a novel hybrid architecture combining LSTM and Transformer, tailored for multi-task real-time prediction <cit.>. We compare these models with the anomaly detection module of MSADM. We will train these models using the same equipment and conduct a comprehensive comparison. In Fig. <ref>, the model's evolution in classification accuracy, detection accuracy, and cross-entropy loss function is depicted over increasing iterations. Notably, our model consistently achieves the highest accuracy, ultimately converging to 91.3%. This figure marks an approximately 3% lead over the runner-up model, LSTM-transformer. Additionally, the Cross-Entropy loss of our model substantially surpasses that of other models upon final convergence. 
In Table <ref>, we compare MSADM with the other models in terms of fault detection accuracy, anomaly detection accuracy, detection recall rate (Recall), detection false negative rate (FNR), and detection false positive rate (FPR), assessing performance across all of these metrics. The superior performance of MSADM is highlighted by the bold entries for each metric. The results demonstrate that MSADM surpasses the other models across most performance indicators. The one exception is detection time: MSADM is marginally slower than the LSTM method without rule embedding, which we attribute to the initial rule-filtering step. ROC curves plot the true positive rate (TPR) against the false positive rate (FPR) under different threshold settings <cit.>. To compare the robustness and reliability of the models, we plotted their ROC curves. As shown in Fig. <ref>, the ROC curve of MSADM lies above those of the other models most of the time, and the AUC of MSADM is 0.1 higher than that of the currently popular LSTM-transformer architecture. Because an anomaly has a limited range of influence, enlarging the network size might result in the anomaly being overlooked. Fig. <ref> illustrates the variation in model accuracy corresponding to changes in network size. In scenarios with both small and large numbers of nodes, MSADM outperforms the other models in both anomaly detection and classification accuracy. Fig. <ref> illustrates the confusion matrix analysis of the anomaly detection results produced by our MSADM model on the test set. Fig. <ref> (a) primarily assesses the model's accuracy in identifying various anomalies. The results underscore the model's high accuracy across most anomaly-type classification tasks. Fig. <ref> (b) depicts the accuracy of anomaly detection. Our identification accuracy for abnormal samples reaches as high as 95%, implying that we can analyze and collect information from almost all abnormal network entities within the network structure. As for normal samples that are incorrectly flagged, because we gauge the degree of abnormality when generating anomaly reports, such minor abnormal information does not excessively consume reporting resources. §.§ Semanticization Evaluations In this section, we present the text generation component of MSADM to showcase the quality of our semantic generation. We also highlight segments of the LLM output to underscore the benefits of our thought prompts in guiding LLM reasoning. Due to space constraints, we display only a portion of the anomaly report and LLM output, with the complete textual content available in the appendix. In the event of a node application crash, the node becomes unable to request and respond to data packets due to the application anomaly, yet it retains its functionality as a packet-forwarding relay. We use this scenario as an example to demonstrate the practicality of our generated statements. The results are depicted in Fig. <ref>. We show a partial anomaly report generated by a single network entity when an anomaly occurs. The report includes descriptions of packet rates, bit error rates, and latencies, and also gives the anomalies diagnosed by the model. It is evident from the report that the packet loss rates (PLRs) and the bit error rate of the nodes are notably high, whereas the PLR and the bit error rate of the communication link remain relatively unaffected, aligning with the observed real-world scenario. 
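The report above describes each KPI by an average level, a fluctuation level, and a trend. As a purely illustrative sketch (the thresholds, wording, and function name are placeholders, not MSADM's actual rule base), such descriptors can be derived from simple statistics of a KPI series:

import numpy as np

def describe_kpi(series, high_mean=0.3, high_std=0.15):
    # series: one KPI sampled over the collection window, e.g. packet loss rate in [0, 1]
    s = np.asarray(series, dtype=float)
    level = "very high" if s.mean() > high_mean else "normal"
    fluctuation = "extremely volatile" if s.std() > high_std else "minor"
    half = len(s) // 2
    first_slope = np.polyfit(np.arange(half), s[:half], 1)[0]
    second_slope = np.polyfit(np.arange(len(s) - half), s[half:], 1)[0]
    if first_slope < 0 < second_slope:
        trend = "fallen sharply and then rose"
    else:
        trend = "up" if np.polyfit(np.arange(len(s)), s, 1)[0] > 0 else "down"
    return (f"average value is {s.mean():.2%} ({level}), "
            f"{fluctuation} fluctuation, {trend} trend")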
See the appendix <ref> for the complete report. We input the analyzed data from the collected reports into the LLM to generate relevant reports and conclusions. The solution produced by the LLM appears in Fig. <ref>. By incorporating chain-of-thought-based prompts, the LLM assesses various factors that may have contributed to the anomaly, including software and hardware issues, as well as troubleshooting and resolution strategies. This exception report, enhanced by LLM's insights, significantly surpasses traditional operations and maintenance documentation by reducing empiricism that leads to incorrect exception handling. At the same time, the anomaly solution enables NMs to rapidly mitigate anomalies and maintain network health. The comprehensive exception analysis report is detailed in the appendix <ref>. § DISCUSSION We have illustrated the advantages of our scheme for assisting network operators with health management in DHNs. In this section, we explore potential future directions in conjunction with our scheme. Modeling Stateful Behaviors: To better adapt to the diverse communicating entities in the DHNs, we deliberately made trade-offs to enhance the model's scalability. Currently, we model KPIs commonly owned by each entity. However, this approach overlooks the intricate interactions between higher layers, such as the transport protocols they utilize, network layer TM mechanisms, and potential device interactions. A promising future direction involves leveraging MSADM to model the state behavior of higher-level network participants (e.g., Web Server, SQL Server), such as the application layer, and integrating them with our scheme to form a network for microservice architecture-based anomaly detection solutions. Self-evolution of the LLM: In this article, we utilize LLM to generate the final anomaly inference results. However, this process is one-way and cannot provide feedback to the large model itself. In the future, we posit that the self-evolution method of the learning model can be employed to aid LLM in learning, enhancing, and self-evolving from the experiences it generates. Simultaneously, the evolved LLM can assist MSADM in augmenting and maintaining semantic rule trees to enrich the vocabulary and enhance the quality of the generated sentences. § CONCLUSION We introduce semantic expression into wireless networks for the first time and develop an LLM-assisted end-to-end health management scheme for DHNs. Our model automatically processes collected anomaly data, predicts anomaly categories, and offers mitigation options. To address the inability of algorithms that depend on expert input or basic rule-based systems to adapt to multi-device environments, we propose the MSADM. MSADM utilizes a predefined rule base to monitor the state of entity communication KPIs, conducts anomaly detection and classification through a rule-enhanced Transformer structure, and produces unified and standardized textual representations of anomalies using a semantic rule tree. Furthermore, the inclusion of a chain-of-thought-based LLM in the diagnostic process not only enhances fault detection but also generates detailed reports that pinpoint faults and recommend optimization strategies. Experiments demonstrate that MSADM surpasses current mainstream models in anomaly detection accuracy. 
Additionally, the experimentally generated anomaly reports and solutions highlight our approach's potential to boost the efficiency and accuracy of intelligent operations and maintenance analysis in distributed networks. 10172904 Toufique Ahmed, Supriyo Ghosh, Chetan Bansal, Thomas Zimmermann, Xuchao Zhang, and Saravan Rajmohan. Recommending root-cause and mitigation steps for cloud incidents using large language models. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pages 1737–1749, 2023. barco2008continuous Raquel Barco, Pedro Lázaro, Luis Díez, and Volker Wille. Continuous versus discrete model in autodiagnosis systems for wireless networks. IEEE Transactions on Mobile Computing, 7(6):673–681, 2008. barco2010learning Raquel Barco, Volker Wille, Luis Díez, and Matías Toril. Learning of model parameters for fault diagnosis in wireless networks. Wireless Networks, 16:255–271, 2010. baumler2022hybrid Connor Baumler and Soumya Ray. Hybrid semantics for goal-directed natural language generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1936–1946, 2022. boem2019distributed Francesca Boem, Alexander J Gallo, Davide M Raimondo, and Thomas Parisini. Distributed fault-tolerant control of large-scale systems: An active fault diagnosis approach. IEEE Transactions on Control of Network Systems, 7(1):288–301, 2019. brown2020language Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020. cao2024advanced Kangjie Cao, Ting Zhang, and Jueqiao Huang. Advanced hybrid LSTM-transformer architecture for real-time multi-task prediction in engineering systems. Scientific Reports, 14(1):4890, 2024. 10443962 Xuehan Chen, Jingjing Tan, Litian Kang, Fengxiao Tang, Ming Zhao, and Nei Kato. Frequency selective surface towards 6G communication systems: A contemporary survey. IEEE Communications Surveys & Tutorials, pages 1–1, 2024. chen2024automatic Yinfang Chen, Huaibing Xie, Minghua Ma, Yu Kang, Xin Gao, Liu Shi, Yunjie Cao, Xuedong Gao, Hao Fan, Ming Wen, et al. Automatic root cause analysis via large language models for cloud incidents. In Proceedings of the Nineteenth European Conference on Computer Systems, pages 674–688, 2024. cheng2020towards Jiezhu Cheng, Kaizhu Huang, and Zibin Zheng. Towards better forecasting by fusing near and distant future visions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34(04), pages 3593–3600, 2020. hayat2016survey Samira Hayat, Evşen Yanmaz, and Raheeb Muzaffar. Survey on unmanned aerial vehicle networks for civil applications: A communications viewpoint. IEEE Communications Surveys & Tutorials, 18(4):2624–2661, 2016. jeong2023anomalybert Yungi Jeong, Eunseok Yang, Jung Hyun Ryu, Imseong Park, and Myungjoo Kang. AnomalyBERT: Self-supervised transformer for time series anomaly detection using data degradation scheme. arXiv preprint arXiv:2305.04468, 2023. jin2015detecting Ruofan Jin, Bing Wang, Wei Wei, Xiaolan Zhang, Xian Chen, Yaakov Bar-Shalom, and Peter Willett. Detecting node failures in mobile wireless networks: A probabilistic approach. IEEE Transactions on Mobile Computing, 15(7):1647–1660, 2015. 
khanafer2008automated Rana M Khanafer, Beatriz Solana, Jordi Triola, Raquel Barco, Lars Moltsen, Zwi Altman, and Pedro Lazaro. Automated diagnosis for UMTS networks using Bayesian network approach. IEEE Transactions on Vehicular Technology, 57(4):2451–2461, 2008. khatib2015diagnosis Emil J Khatib, Raquel Barco, Ana Gómez-Andrades, and Inmaculada Serrano. Diagnosis based on genetic fuzzy algorithms for LTE self-healing. IEEE Transactions on Vehicular Technology, 65(3):1639–1651, 2015. kuklinski2019key Slawomir Kukliński and Lechosław Tomaszewski. Key performance indicators for 5G network slicing. In 2019 IEEE Conference on Network Softwarization (NetSoft), pages 464–471. IEEE, 2019. Luo_Lou_Lin_Fu_Ding_Zhang_Wang_2014 Chen Luo, Jian-Guang Lou, Qingwei Lin, Qiang Fu, Rui Ding, Dongmei Zhang, and Zhe Wang. Correlating events with time series for incident diagnosis. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug 2014. Lyu_Wu_Lai_Yang_Li_Zhou_2022 Ziyu Lyu, Yue Wu, Junjie Lai, Min Yang, Chengming Li, and Wei Zhou. Knowledge enhanced graph neural networks for explainable recommendation. IEEE Transactions on Knowledge and Data Engineering, pages 1–1, Jan 2022. Ma_Zhang_Chen_Xu_Li_Lin_Nie_Zhou_Wang_Pei_2021 Minghua Ma, Shenglin Zhang, Junjie Chen, Jim Xu, Haozhe Li, Yongliang Lin, Xin Nie, Bo Zhou, Yong Wang, and Dan Pei. Jump-starting multivariate time series anomaly detection for online service systems. USENIX Annual Technical Conference, Jan 2021. malgorzata2004survey Malgorzata Steinder. A survey of fault localization techniques in computer networks. Science of Computer Programming, pages 165–194, 2004. middlehurst2024bake Matthew Middlehurst, Patrick Schäfer, and Anthony Bagnall. Bake off redux: a review and experimental evaluation of recent time series classification algorithms. Data Mining and Knowledge Discovery, pages 1–74, 2024. 10.1145/3649448 Navid Mohammadi Foumani, Lynn Miller, Chang Wei Tan, Geoffrey I. Webb, Germain Forestier, and Mahsa Salehi. Deep learning for time series classification and extrinsic regression: A current survey. ACM Comput. Surv., 56(9), April 2024. nti2022mini Isaac Kofi Nti, Juanita Ahia Quarcoo, Justice Aning, and Godfred Kusi Fosu. A mini-review of machine learning in big data analytics: Applications, challenges, and prospects. Big Data Mining and Analytics, 5(2):81–97, 2022. premsankar2018edge Gopika Premsankar, Mario Di Francesco, and Tarik Taleb. Edge computing for the internet of things: A case study. IEEE Internet of Things Journal, 5(2):1275–1284, 2018. qian2021detection Bing Qian and Shun Lu. Detection of mobile network abnormality using deep learning models on massive network measurement data. Computer Networks, 201:108571, 2021. ren2019time Hansheng Ren, Bixiong Xu, Yujing Wang, Chao Yi, Congrui Huang, Xiaoyu Kou, Tony Xing, Mao Yang, Jie Tong, and Qi Zhang. Time-series anomaly detection service at Microsoft. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3009–3017, 2019. Riley2010 George F. Riley and Thomas R. Henderson. The ns-3 Network Simulator, pages 15–34. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010. srivastava2022beyond Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. szilagyi2012automatic Péter Szilágyi and Szabolcs Nováczki. An automatic detection and diagnosis framework for mobile communication systems. IEEE Transactions on Network and Service Management, 9(2):184–197, 2012. 9854182 Fengxiao Tang, Xuehan Chen, Tiago Koketsu Rodrigues, Ming Zhao, and Nei Kato. Survey on digital twin edge networks (DITEN) toward 6G. IEEE Open Journal of the Communications Society, 3:1360–1381, 2022. tariq2019detecting Shahroz Tariq, Sangyup Lee, Youjin Shin, Myeong Shin Lee, Okchul Jung, Daewon Chung, and Simon S Woo. Detecting anomalies in space using multivariate convolutional LSTM with mixtures of probabilistic PCA. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2123–2133, 2019. tuli2022tranad Shreshth Tuli, Giuliano Casale, and Nicholas R Jennings. TranAD: Deep transformer networks for anomaly detection in multivariate time series data. arXiv preprint arXiv:2201.07284, 2022. Wang_Mei_Cui_Wang_Shen_2023 Xianbin Wang, Jie Mei, Shuguang Cui, Cheng-Xiang Wang, and Xuemin Sherman Shen. Realizing 6G: The operational goals, enabling technologies of future networks, and value-oriented intelligent multi-dimensional multiple access. IEEE Network, 37(1):10–17, Jan 2023. NEURIPS2022_9d560961 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc., 2022. yang2023dcdetector Yiyuan Yang, Chaoli Zhang, Tian Zhou, Qingsong Wen, and Liang Sun. DCdetector: Dual attention contrastive representation learning for time series anomaly detection. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 3033–3045, 2023. THz Xun Yuan, Fengxiao Tang, Ming Zhao, and Nei Kato. Joint rate and coverage optimization for the THz/RF multi-band communications of space-air-ground integrated network in 6G. IEEE Transactions on Wireless Communications, pages 1–1, 2023. § EVALUATION OF NETWORK ATTRIBUTES AND PERFORMANCE METRICS We use KPIs from both communication nodes and communication links as rule-based filtering features and machine-learning features to detect and classify anomalies. The specific features considered are shown in Table <ref> below. § NETWORK NODE PARAMETERS On the network simulation platform ns-3, we designed and configured four different devices to build a virtual ad-hoc network (refer to Table <ref> for specific device configurations). This network consists of 9 to 20 nodes. We set a data collection duration of 30 seconds and defined a collection period of 200 ms. § ANOMALY CATEGORIES When using traditional machine learning techniques for fault detection, we are particularly concerned with obtaining sufficient labeled negative samples. In the context of DHNs, there is a wide range of anomaly types. Therefore, a careful classification of common fault types is crucial. Table <ref> shows our final classification results for these anomaly types, which are seven in total. 
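To make the data pipeline above concrete, the following sketch assembles the labeled dataset: each run contributes a KPI matrix sampled every 200 ms over the 30-second window, together with its anomaly label. The simulator.read_kpis() interface is hypothetical and merely stands in for the ns-3-driven scenario; it is not an actual ns-3 API call.

import numpy as np

DURATION_S, PERIOD_S = 30.0, 0.2
STEPS = int(DURATION_S / PERIOD_S)            # 150 KPI samples per run

def collect_run(simulator, anomaly_label):
    # read_kpis() is assumed to return one vector of node/link KPIs per period
    kpis = np.stack([simulator.read_kpis() for _ in range(STEPS)])   # (150, n_kpis)
    return kpis, anomaly_label

def build_dataset(runs):
    # runs: iterable of (simulator, label) pairs covering all scenarios
    data, labels = zip(*(collect_run(sim, lab) for sim, lab in runs))
    return np.stack(data), np.array(labels)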
§ COMPLETE ANOMALY REPORT The rest of this section shows the anomaly reports that our scenario outputs: Current NetWork Context: The current node0 status is as follows: The packet loss rate shows a very high average value is 44.43%, with extremely volatile fluctuation and has fallen sharply and then rose trend. The information about the communication links of the current node is as follows: The current node may have a fault appdown! The current node1 status is as follows: The number of neighboring nodes shows serious over-average num minor fluctuation and has an up trend. The number of routing table caches shows serious over-average num minor fluctuation and has an up trend. The information about the communication links of the current node is as follows: The current node may fault maliciously! The current node2 status is as follows: The number of neighboring nodes shows serious over-average num minor fluctuation and has an up trend. The number of routing table caches shows serious average num minor fluctuation and has an up trend. The information about the communication links of the current node is as follows: The current node may have a fault appdown! Questions: According to the preceding description, if similar historical fault information exists, identify the fault type and provide a solution. If no, identify the current fault type and provide the optimal solution. Select a fault type from the options. The fault type mentioned above may not be correct. Determine and confirm the fault according to the context's information. If you have different views on the fault, state the cause. Options: Please select the anomaly type that best matches the context's performance from the following:a: Node Down; b: Malicious Traffic; c: Network Congestion; d: Communication Obstacles; e: Out-of-Range; f: Network Node Crash. § COMPLETE LLM OUTPUT The rest of this section shows the output of the LLM-based on the anomaly report: §.§ Node Analysis Based on the provided network context, here's the detailed analysis: Node0: Status: High packet loss rate averaging 44.43 Communication Link Info: Indicates a possible fault for the app down. Analysis: The high packet loss rate and the indication of an appdown fault suggest a significant issue with the application layer on Node0. This could be due to software errors, resource limitations, or misconfigurations. Recommended Action: Investigate the application running on Node0, check for any software updates, review configurations, and allocate additional resources if necessary to mitigate packet loss. Node1: Status: The number of neighboring nodes and routing table caches are significantly above average, with minor fluctuations and an upward trend. Communication Link Info: Possible fault for malicious activity. Analysis: The increasing number of neighboring nodes and routing table caches, along with the indication of malicious activity suggests a potential security breach or an attack targeting Node1. Recommended Action: Conduct a thorough security audit, implement intrusion detection systems, and strengthen network security measures to detect and mitigate malicious activity targeting Node1. Node2: Status: Similar to Node1, with an increasing number of neighboring nodes and routing table caches showing minor fluctuations and an upward trend. Communication Link Info: Also indicates a possible fault for appdown. 
Analysis: The pattern of increasing neighboring nodes and routing table caches, along with the appdown fault indication, points to a potential issue similar to Node0, possibly related to application failures or disruptions. Recommended Action: Conduct a detailed investigation into the application running on Node2, review configurations, and address any software issues or resource constraints to prevent further disruptions. §.§ Overall Network Analysis The network is experiencing multiple issues across different nodes, including high packet loss rates, indications of malicious activity, and potential application failures. There might be underlying systemic issues affecting the stability and security of the network, such as insufficient resources, misconfigurations, or vulnerabilities. Addressing these issues promptly is crucial to prevent service disruptions, security breaches, and network performance degradation. Recommended Actions: Perform comprehensive network diagnostics to identify and address underlying issues. Implement robust security measures to mitigate malicious activities, including firewalls, intrusion detection systems, and access controls. Optimize resource allocation and configurations to improve network performance and reliability. Regularly monitor and maintain the network to detect and prevent future issues. If historical fault information exists, it should be analyzed to identify patterns and potential root causes, aiding in developing more effective long-term solutions.
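For reference, the prompt shown in the anomaly report above is plain text assembled from the per-entity report sentences, the questions, and the options list. A minimal sketch of this assembly (an illustration, not MSADM's released code) follows.

OPTIONS = ("a: Node Down; b: Malicious Traffic; c: Network Congestion; "
           "d: Communication Obstacles; e: Out-of-Range; f: Network Node Crash.")

QUESTIONS = ("According to the preceding description, if similar historical fault "
             "information exists, identify the fault type and provide a solution. "
             "If not, identify the current fault type and provide the optimal solution. "
             "Select a fault type from the options.")

def build_prompt(entity_reports):
    # entity_reports: per-node report strings produced by the semantic rule tree
    context = "\n".join(entity_reports)
    return ("Current NetWork Context:\n" + context + "\n\n"
            "Questions: " + QUESTIONS + "\n\n"
            "Options: Please select the anomaly type that best matches the "
            "context's performance from the following: " + OPTIONS)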
http://arxiv.org/abs/2406.08956v1
20240613093559
Skein Categories in Non-semisimple Settings
[ "Jennifer Brown", "Benjamin Haïoun" ]
math.QA
[ "math.QA", "math.CT", "math.GT", "18M15, 57K31" ]
Skein Categories in Non-semisimple Settings Jennifer Brown, Benjamin Haïoun June 2024 ========================================================================= § ABSTRACT We introduce a version of skein categories which depends on a tensor ideal in a ribbon category, thereby extending the existing theory to the setting of non-semisimple TQFTs. We obtain modified notions of skein algebras of surfaces and skein modules of 3-cobordisms for non-semisimple ribbon categories. We prove that these skein categories built from ideals coincide with factorization homology, shedding new light on the similarities and differences between the semisimple and non-semisimple settings. As a consequence, we get a skein-theoretic description of factorization homology for a large class of balanced braided categories in , precisely all those which are expected to induce an oriented categorified 3-TQFT. § INTRODUCTION We pass between two main perspectives in this work. First is a topological viewpoint, emphasizing the role of skeins, while the second is algebraic, emphasizing that of factorization homology. The two are a natural pair, since skeins give us topological intuition for the abstract constructions provided by factorization homology, which in turn gives us a toolkit for understanding the explicit skein constructions. The general plan of this paper is to establish and then take advantage of this exchange to study non-semisimple skein theory. §.§ Background and motivation Skein theory was developed in the 1990s <cit.>, as a sweeping generalization of Vaughan Jones' construction of knot invariants <cit.>. It is the basis for a family of link and 3-manifold invariants known collectively as Reshetikhin–Turaev invariants and has close ties to Topological Quantum Field Theories (TQFTs). The construction is based on a graphical calculus built using decorated ribbon graphs living in 3-dimensional space, whose decorations and relations depend on the choice of a ribbon category. A central notion is that of a skein module associated to a 3-manifold, the best known of which is the Kauffman bracket skein module. These have been studied extensively, see <cit.> for a few recent developments and <cit.> for a historical overview. The skein module of a thickened surface inherits an algebra structure from stacking and is therefore referred to as the skein algebra. We work over a field to better suit future work on TQFTs, at the cost of precluding the study of torsion elements. Non-semisimple ribbon categories appear naturally in physical applications, for example in Chern-Simons theories built from supergroups <cit.>. Unfortunately, when the Reshetikhin–Turaev construction is applied to such theories, the associated link invariant is often identically zero. This prevents the construction of 3-manifold invariants and TQFTs. This problem was first overcome by Hennings <cit.> and Lyubashenko <cit.>. The construction of non-semisimple quantum invariants and TQFTs now relies on the use of modified traces <cit.>. These are defined relative to a tensor ideal in the ambient ribbon category, which is typically the ideal of projective objects. These have fuelled the rigorous mathematical construction of non-semisimple 3-manifold invariants and TQFTs <cit.>. Another key ingredient in these constructions is the notion of admissible skein modules <cit.>. We refer to this family of constructions well adapted to non-semisimple ribbon categories as modified skein theory or -skein theory. 
Many standard constructions and results in the semisimple case, for example comparisons with Crane–Yetter or Turaev–Viro-type TQFTs, are yet to appear in the non-semisimple case. Work to translate unmodified objects and properties into the modified setting is significant and ongoing. A persistent theme in this larger project is working around the absence of a unit. In the current paper this manifests as a need to work with presheaf-valued functors, see Remark <ref>. We focus on skein categories (Definition <ref>) in part because they drive a number of related constructions, see Sections <ref> and <ref>. Skein categories were introduced in <cit.> to describe Crane–Yetter 4-TQFTs on surfaces, and in his work Walker proposes that the skein modules and categories of any ribbon category form part of a fully extended categorified 3-TQFT. More formal treatments can be found in <cit.>. Twenty years later, this conjectural relation to TQFTs (see <cit.>) is only partially settled. Thanks to their coincidence with factorization homology and the work of <cit.>, it is known that skein categories are part of a twice-categorified fully-extended 2-TQFT. Factorization homology was introduced by Ayala and Francis <cit.>. It gives a way of “integrating" an 𝔼_n-algebra A in a symmetric monoidal ∞-category over an n-manifold M. In the context of factorization homology, 𝔼_n-algebras are often replaced with the equivalent notion of _n-algebras <cit.>. Skein theory relates to (oriented) 𝔼_2-algebras in categories, which are known to correspond to (balanced) braided categories, see <cit.> and <cit.>. We will contemplate three choices of ambient category . The first is the (2,1)-category of -linear categories, functors and natural transformations. The second has the same objects, but allows functors to be presheaf-valued, i.e. a morphism from to is a functor →(^op,_), called a bimodule functor. The third is the category of presentable cocomplete categories and cocontinuous functors. It contains as the full subcategory of presheaf categories. Free cocompletion gives a 2-functor that sends a category to its presheaf category. Skein categories compute factorization homology in of any ribbon category <cit.>. Passing to the free cocompletion gives a skein-theoretic description of ∫_Σ for those 𝔼_2 algebras in whose subcategory of compact-projective objects is ribbon. In particular, they must have compact-projective unit. We will incautiously refer to this situation as the “semisimple case" since the unit being projective looks like semisimplicity <cit.> (though this result doesn't quite apply here). The existence of the categorified 3-TQFTs is supported by another approach using the cobordism hypothesis <cit.>. Such categorified fully extended 3-TQFTs should be classified by 3-dualizable (and 3-oriented) objects in a well-chosen (∞,4)-category of 𝔼_2-algebras in . This (∞,4)-category Alg_2() has been formally defined in <cit.>. Dualizability in this higher category was studied in <cit.> (though they call it BrTens). They show that a braided category in is 3-dualizable as soon as it is cp-rigid (has enough compact-projectives, which are all dualizable). It follows from <cit.> that a 2-orientation is given by a balancing. It is expected that this balancing gives a 3-orientation as soon as it induces a ribbon structure on the category of dualizable objects (i.e. the balancing of the dual is the dual of the balancing). It is therefore expected that there is a fully extended categorified 3-TQFT for any such “cp-ribbon" category. 
Given a ribbon category, its free cocompletion (presheaf category) is one such example of a cp-ribbon category, and in this case Walker's construction should describe the categorified 3-TQFT. But as we will see there are many more examples. We are particularly interested in the cases where these categorified 3-TQFTs extend to 4-TQFTs. These should correspond via the cobordism hypothesis to 4-dualizable objects. Walker's description extends to 4-manifolds under the assumption that the input category is fusion, and he explains that the 4-TQFT should recover Crane–Yetter theories. It has indeed been shown in <cit.> that fusion braided categories are 4-dualizable. However, it is also shown in <cit.> that there are other, non-semisimple examples of 4-dualizable categories. These are the Ind-completions of non-semisimple modular tensor categories. They have non-compact-projective unit, and therefore cannot be obtained as the free cocompletion of a ribbon category[Actually, it follows from <cit.> and to-appear results of William Stewart that full dualizability of the free cocompletion of an 𝔼_2-algebra in imposes semisimplicity.]. The 3- and 4-dimensional parts of the associated non-semisimple Crane–Yetter theories are constructed in <cit.> using modified skein theory and admissible skein modules. They do not give a skein-theoretic description of the theory on surfaces. It's known however that Crane–Yetter on surfaces should agree with factorization homology. A skein-theoretic description of factorization homology of 𝔼_2-algebras coming from Ind-completions of non-semisimple ribbon tensor categories was anticipated but not known. Filling this gap was one of the main motivations for this paper, see Corollary <ref>. Factorization homology of balanced braided categories in over surfaces is studied in <cit.>, for both semisimple and non-semisimple inputs. They show that factorization homology on a non-closed surface is described as modules over a “moduli algebra". They use excision properties of factorization homology to give an explicit presentation of this algebra. See <cit.> for a discussion on closed surfaces. Cooke's result was exploited in <cit.> to give a skein-theoretic counterpart of <cit.>'s result on the examples of the braided categories of locally finite modules over quantum groups 𝒰_q(𝔤). These results prove to be powerful, and in particular show gluing for stated skein algebras <cit.> and for (some versions of) stated skein modules <cit.> for any 𝔤, which is completely non-obvious from the standard topological approach. However, on these examples Cooke's result applies only at generic q, i.e. in the semisimple case. §.§ Goals Here we state the main questions which motivate and organize this paper. What is the appropriate adaptation of skein categories to non-semisimple ribbon categories? We argue that modified skein categories with coefficients in a tensor ideal of a braided tensor category (Definition <ref>) are the natural adaptation, given their relation with factorization homology (Theorem <ref>). An algebraic formulation of Question <ref> could be stated as follows: What is the factorization homology of a cp-ribbon category whose unit is not compact-projective? From this standpoint, Theorem <ref> (more precisely its reformulation in Corollary <ref>) states that -skein categories give more concrete descriptions of an existing theory. 
Yet another way of phrasing this question is: What does the fully-extended non-semisimple Crane–Yetter theory assign to surfaces? Even though such a fully extended theory is not formalized yet, our results suggest that skein categories of tensor ideals model its values on surfaces. Can we give a version of the proof of <cit.> which does not appeal to excision and which holds in greater generality? This is accomplished by the proof of Theorem <ref>, which drives many constructions in this paper. §.§ Results In Definition <ref> we introduce the skein category associated to a tensor ideal in a ribbon category Å. The motivating example of such an ideal is the subcategory of projective objects in a non-semisimple ribbon tensor category, in which case we propose that our construction is the one sought in Question <ref>. Our main technical result is that -colored skein categories compute factorization homology: Theorem <ref>. There is an equivalence of symmetric monoidal 2-functors ∫_-≃_(-) between factorization homology with coefficients in seen as an _2-algebra in and the functor associated to the -skein category construction. This is an indication that _ is in some sense the canonical adaptation of usual skein categories to the non-semisimple setting. Our definition closely follows what is done on surfaces in the TQFTs defined in <cit.> (see also <cit.> one dimension lower). We emphasize that the version of skein categories introduced in this work gives a 2-functor from surfaces and embeddings to , not . This seemingly minor detail powers many of our constructions and proofs, see Remark <ref>. Reformulating our main result, we answer Question <ref>: Corollary <ref>. Let be a cp-ribbon category in , i.e. is balanced, braided, cp-rigid, and its balancing is a ribbon structure on its subcategory of dualizable objects. Let ^cp denote the subcategory of compact-projective objects. We have an equivalence of symmetric monoidal 2-functors ∫_-≃_^cp(-) between factorization homology in with coefficients in and free cocompletions of -skein categories. Corollary <ref> echoes a stronger prediction by David Jordan: that the 3-TQFT associated to these cp-ribbon categories is described by admissible skein modules. By analogy with the traditional construction in both skein theory and factorization homology, we define the modified skein algebra with coefficients in of a surface Σ to be the endomorphism algebra of the distinguished presheaf _Σ∈_(Σ) induced by the inclusion of the empty surface ∅Σ: _(Σ) := __(Σ)(_Σ) . We show (Remark <ref>) that this algebra acts on admissible skein modules. This action recovers the action of the usual skein algebra. Note that this construction is natural with respect to embeddings of surfaces, and so we obtain an action of the mapping class group of the surface on the -skein algebra. For connected surfaces with non-empty boundary we can use the algebraic tools developed for factorization homology in <cit.> as follows. Our -skein algebra is the algebra of invariants of their internal endomorphism algebra of the distinguished object (_Σ), which they call the moduli algebra. They show that the moduli algebra is isomorphic to a tensor power of Lyubashenko's coend , so we get a more algebraic formula for our -skein algebras in Corollary <ref>. Using the results of <cit.>, we obtain a similar description for surfaces with empty boundary via quantum Hamiltonian reduction. 
There is an analogous definition of -skein modules of 3-cobordisms. The cobordism induces a bimodule functor between the skein categories, from which there's a standard procedure for getting an associated bimodule of the boundary skein algebras. Surprisingly, this module depends on the boundary decomposition that distinguishes a cobordism from its underlying 3-manifold. Playing with decompositions of the boundary, we can either recover admissible skein modules or a new notion which allows the empty ribbon graph, see Remark <ref>. We take a moment to discuss the terminology used in this paper. Our constructions are close enough to the existing ones[In particular, when = Å we recover existing constructions.] that we are reluctant to introduce new nomenclature beyond decorating by instead of Å. Still, it's sometimes useful to have a tentative name to distinguish new from old. A few alternatives for the name “modified skeins” were seriously considered. Admissible or non-unital skeins seemed to be equally good candidates. In the end we settled on `modified' because, perhaps surprisingly, both `non-unital' and `admissible' would have been misleading names for the associated skein algebras. For one, these algebras are in fact unital. They also don't have an apparent admissibility condition. Any Å-colored skein in Σ× [0,1] gives an element of the modified skein algebra – including the empty skein and those without -colored strands. In fact, despite the initial restriction to a tensor ideal, the modified skein algebra in general has more elements than the traditionally constructed version, see Section <ref>. Notably they include Lyubashenko's non-semisimple replacement for the Kirby color, which he used to interpret 3-manifold surgery. §.§ Future directions Connections to TQFTs We do not address Question <ref> above, one difficulty being that non-semisimple (or even semisimple) Crane–Yetter theories are not yet formally defined as fully extended TQFTs. If they were, then they would have to agree on surfaces with factorization homology and -skein categories. Despite this, one could make progress by showing that -skein categories give the value on surfaces of a once-extended version of <cit.>. In particular using this once-extended description one could prove a non-extended version of the conjectures of <cit.>. If these conjectures are true, -skein categories should appear in the description of <cit.>. Even without the conjectures, they still provide some insight into the constructions of TQFTs. For example, the (2+1)-TQFTs of <cit.> have been extended to the circle in <cit.>. However, this once-extended description imposes admissibility conditions on surfaces and surprisingly does not recover all of <cit.>. We claim that this obstruction comes from the fact that the extension was given with values in , whereas some maps between -skein categories only exist in . New elements and relations In Section <ref> we describe new elements of _(Σ) in terms of bichrome graphs. Using the results of <cit.> it should be possible to give a much finer description. In the special case of categories of modules over a finite-dimensional ribbon Hopf algebra, the moduli algebras and their algebras of invariants are studied in <cit.> and in some examples an explicit basis is known, see <cit.>. We hope to make similarly concrete descriptions in future works, and in particular to describe the image, kernel, and cokernel of the map <ref>. 
In particular, in concrete examples we can show that the claimed “new" elements coming from bichrome graphs cannot be described by usual skeins. Stated skeins It is natural to ask for a non-semisimple generalization of the main result of <cit.>, which shows that <cit.>'s moduli algebras are isomorphic to Lê's stated skein algebras for the ribbon category of 𝒰_q(𝔰𝔩_2)-modules at generic q. After discussing with Francesco Costantino and Matthieu Faitg, we have come to believe that the notion of stated -skein algebras should coincide with stated Å-skein algebras when there are marked points in every connected component and = Proj(Å). The marked points in some sense flatten the theory, quelling differences between the modified/usual construction. There might be a derived explanation for this phenomenon, linked with the fact that representations varieties are smooth whereas character varieties can be singular. §.§ Acknowledgements We are grateful to Francesco Costantino, Matthieu Faitg, Theo Johnson-Freyd, David Jordan, and Eilind Karlsson for enlightening conversations which informed and motivated this work. We thank Lukas Müller, Lukas Woike, and Adrien Brochier for kindly sharing ideas for a proof of our main result. Although we ended up taking an alternative approach, such discussions were a source of motivation and inspiration. Earnest discussion of this project began while both authors were at a meeting of the Simons Collaboration on Global Categorical Symmetry at the SwissMap Research Station. We are grateful to the funders and organizers of that event for providing us with a productive and agreeable atmosphere in which to pursue new work. JB is funded by the Simons Foundation award 888988 as part of the Simons Collaboration on Global Categorical Symmetry, and partially supported by the National Science Foundation under Award No. 2202753. § TOPOLOGICAL CONSTRUCTIONS We introduce -colored skein categories, algebras and modules of surfaces and 3-manifolds. They take as algebraic input a tensor ideal in a ribbon category Å, i.e. a full subcategory stable under tensoring with any object of Å and under taking retracts. Tensor ideals appear frequently in the non-semisimple setting, as a defining input to both modified traces <cit.> and admissible skein modules <cit.>. As mentioned in the introduction, it is typical to consider the ideal of projective objects ProjÅ⊆Å. This is in part because ProjÅ is contained in every non-zero ideal whenever the monoidal unit is simple (see e.g. <cit.>.) Since by definition every object in a semisimple category is projective, this means that semisimple ribbon categories with simple unit have no non-zero proper tensor ideals. §.§ Skein categories Skein categories were informally introduced in <cit.> and formally defined in <cit.>. We also use the descriptions from <cit.> and <cit.>. To avoid confusion, we will not reintroduce the usual notion of skein categories and whenever we say skein categories we mean the modified version of Definition <ref>. Often the objects in skein categories are finite collections of colored oriented framed points in the surface. To help with the proof of Theorem <ref>, we give an equivalent definition in terms of embeddings of disks based on the Å-labelings of <cit.>. An embedding of the standard disk induces a framed oriented point as the image of its center. Its framing is the positive x-axis and it is oriented by the orientation of the embedding. 
In the other direction, a collection of framed oriented points in Σ corresponds to a contractible space of possible embedded disks. Let Σ be a compact oriented surface, possibly with boundary, M a compact oriented 3-manifold containing Σ in its boundary and ⊆Å a tensor ideal in a ribbon category. We also assume that each connected component of Σ has a choice of either inward or outward normal. An -labeling in Σ is a pair (ι, W) where ι: d↪Σ is a (possibly orientation-reversing) embedding of a collection of | d| standard disks and W is an object of ^| d|. Let X⃗_ι denote the collection of framed oriented points in Σ determined by ι and colored by the corresponding components of W. An -colored ribbon graph in M is the image of an embedding Γ↪ M of a finite oriented graph Γ equipped with a smooth framing. We require that Γ intersects ∂ M in Σ, transversely and only at 1-valent vertices which we call boundary vertices. Each edge of Γ is colored by an object of and each non-boundary vertex by a morphism as detailed below. Boundary vertices inherit a color, orientation, and framing from their unique incident edge (we say that the orientation is positive if the strand is going in the direction of the chosen normal and negative otherwise). At a non-boundary vertex, the embedding together with the framing induces a cyclic ordering on the incident edges, and we color by an element of _Å(_Å, V_1^±⊗⋯⊗ V_n^±) where V_1,…,V_n are the ordered colours of the incident edges, taken as V^+=V if the edge is outgoing, and V^-=V^* if the edge is incoming. A ribbon graph is said to be compatible with an -labeling (ι,W) if the boundary vertices of Γ match the colored oriented framed points X⃗_ι. A central result of <cit.>, <cit.> is the definition of the Reshetikhin-Turaev evaluation _Å that sends an Å-colored (and therefore -colored) ribbon graph in the thickened disk 𝔻× [0,1] compatible with two -labelings (ι,W) and (ι',W') in 𝔻×{0,1} to a morphism in Å from the tensor product of the Ws to the tensor product of the W's. Note that if d = ∅ or d' = ∅ this tensor product may not be in . To avoid this and similarly troublesome situations, we introduce the following admissibility condition. Let ⊆Å be a tensor ideal in a ribbon category and Σ a compact oriented surface, possibly with boundary. An -labeling (ι,W) in Σ is called admissible if ι: d↪Σ is surjective on connected components, i.e. π_0(ι):π_0(d)→π_0(Σ) is surjective. Similarly an -colored ribbon graph Γ is called admissible if Γ M is surjective on connected components. We now have all the ingredients to define admissible skein modules. Let M be a compact oriented 3-manifold with Σ⊆∂ M as above. Given an -labelings (ι,W) in Σ, the relative admissible skein module _(M;(ι,W)) is the vector space freely generated by isotopy classes of admissible -colored ribbon graphs in M compatible with (ι,W) quotiented by the following admissible skein relation: Consider a finite family of -colored ribbon graphs (T_i)_i∈ I and an oriented embedded cube φ:[0,1]^3 M such that the T_i's coincide strictly outside this cube, intersect its boundary transversely, and only intersect the top and bottom faces. We ask moreover that this intersection is non-empty. Each T_i∩im φ is sent by the Reshetikhin–Turaev functor to a morphism _Å(T_i∩im φ) in a common -space in Å. We say ∑_i∈ Iλ_i T_i ∼ 0 if ∑_i∈ Iλ_i _Å(T_i∩im φ)=0. We will often take M = Σ× [0,1] for a compact surface Σ, possibly with boundary, with Σ×{0,1}⊆∂ M. 
As orientation data we choose the inward normal for Σ×{0} and outward normal for Σ×{1}. For two -labelings (ι,W) in Σ×{0} and (ι',W') in Σ×{1}, we abbreviate _(Σ× [0,1],(ι⊔ι',(W,W')) as _(Σ,(ι,W),(ι',W')). Note that if either (ι,W) or (ι', W') is admissible, then every compatible ribbon graph in Σ× [0,1] is automatically admissible. Similarly, in this case every skein relation is isotopic to an admissible one. The relative admissible skein module is then simply the usual relative skein module. We follow the definition of admissible skein modules of <cit.> but allow non-empty boundary objects. One difference though is that they allow some colors of the ribbon graphs to be in Å, and only ask that at least one edge per connected component is in . The resulting notion is equivalent to ours by a standard trick. One can always run the -colored strand next to every other and fuse them. The resulting color is in because is a tensor ideal. This operation is readily checked to be well-defined up to admissible skein relations. Similarly our notion of admissible skein relation is easily checked to be equivalent to theirs, by isotoping an -colored strand outside the cube to intersect one of its faces. Let Σ be a compact oriented surface, possibly with boundary, and a tensor ideal of Å. The modified skein category _(Σ) of the surface Σ with coefficients in has: * Objects: Admissible -colored disks in Σ * Morphisms: The set of morphisms from (ι,W) to (ι',W') is the admissible relative skein module _(Σ;(ι,W),(ι',W')) * Composition: Vertical stacking (with any isotopic choice of smoothing at the gluing). When context permits, we will sometimes shorten _ to and -colored ribbon graphs to ribbon graphs. Without loss of generality, we can assume that the embeddings underlying objects of (Σ) preserve orientation. Any object (ι,W) with orientation-reversing ι is canonically isomorphic to (ι̅,W̅), where ι̅ is the orientation-preserving mirror of ι and W̅ is the dual of W. If happens to contain the unit, i.e. =Å, we recover the usual notion of skein categories. In this case every -labeling is isomorphic to an admissible one because any non-empty collection of embeddings with colors the monoidal unit acts as the empty object. Suppose ≠Å. The admissibility condition brings us to a major departure from the skein categories of <cit.>. Working in instead of is crucial for accommodating the need for -labelings when extending _(-) to a 2-functor. This detail becomes conspicuous with the inclusion of the empty surface ∅↪Σ, which induces a map _(∅) →_(Σ). The skein category of the empty surface has a single element with endomorphism algebra the base field. When working with , the induced map would be a functor which would send this one object to the empty object (i.e. no marked points) if it existed. On the other hand, in it is the presheaf-valued functor which lands on the presheaf which would be represented by the empty object if, again, _(Σ) had one. The empty presheaf exists even while the empty object does not because our admissibility condition requires -labelings in each connected component of the 3-cobordism, but not each component of its boundary. Therefore we have a distinguished non-representable presheaf which plays the role of the empty object in various constructions: _Σ: _(Σ)^op → X ↦_(Σ;X,∅) We thank David Jordan for this insight, which allows us to make sense of the inclusion of the empty set ∅Σ as inducing a morphism between -skein categories in . 
We will see that gives this way a unital 𝔼_2-algebra in , as opposed to a non-unital 𝔼_2-algebra in . A more general discussion on how to turn a non-unital algebra in into a unital one in can be found in <cit.>. The following (2,1)-categories feature prominently in our discussions. The (2,1)-category of -linear categories and bimodule functors has * Objects: small -linear categories , * 1-Morphisms: --bimodule functors, i.e. functors ⊗^op→ or equivalently →(^op,). They are still some kind of functors from to but are allowed to be presheaf-valued. Given F:⊗^op→ and G:⊗^op→, their composition G∘ F:⊗^op→ is defined by the coend (G∘ F)(C,E) := ∫^D∈ F(C,D)⊗ G(D,E) . Identity is given by _(-,-). * 2-Morphisms: natural isomorphisms A symmetric monoidal structure is induced by cartesian product on objects and tensor product on Hom-spaces <cit.>. embeds as a full subcategory in the symmetric monoidal bicategory of presentable linear categories equipped with Kelly-Deligne tensor product, see <cit.> for a modern introduction. A category ∈ is mapped to its presheaf category := (^op,). There is likewise a symmetric monoidal inclusion → given on objects by the identity and on functors → by post-composition with the Yoneda embedding →:= (^op,). Next, we define the source category used in Theorem <ref>: The (2,1)-category of surfaces has: * Objects: compact oriented surfaces, possibly with boundary and non-connected. The empty surface is permitted. * 1-Morphisms: orientation-preserving embeddings (these are not required to send the boundary to the boundary). * 2-Morphisms: isotopies considered up to higher isotopy. Going forward we will suppress higher isotopies. Note that is symmetric monoidal under disjoint union. Functoriality of the usual construction is used heavily in works such as <cit.>. The assignment Σ↦_(Σ) can be extended to a symmetric monoidal 2-functor _ : →. We give a new proof in the -colored context, but first establish a useful technical lemma which allows us to preserve admissibility while decomposing -skeins. Let M be a compact connected 3-manifold and N⊆ M a non-empty sub-manifold of dimension 2 or 3 with ∂ N ⊆∂ M. Define _^N(M;X) the relative admissible skein modules where ribbon graphs must intersect N, and isotopies must preserve this property. Then the canonical map induces an isomorphism of vector spaces _^N(M;X) →̃_(M;X) for any -labeling X in ∂ M. The main argument of the following proof appears in various forms throughout this paper. Note that rigidity is essential here, as we need to drag graphs along arbitrary paths. We start with surjectivity. Let T∈_(M;X) and choose any isotopy representative , not necessarily intersecting N. Choose any point p on any edge of and any path γ in M∖ starting from p and going through N. Isotope by pulling a small neighborhood of p along the path γ. The resulting ribbon graph _γ is a representative of T in _^N(M;X). Next we show injectivity. First not that for any other generic choice p', γ' we can form the ribbon graph (_γ)_γ' where we have pulled strands along both paths. Retracting either one of them gives isotopies (_γ)_γ'∼_γ and (_γ)_γ'∼_γ' through ribbon graphs intersection N, so _γ and _γ' represent the same element in _^N(M;X). Consider an isotopy φ: ⇒' between two ribbon graphs intersecting N, but possibly not preserving this property. By definition it is an ambient isotopy φ = (φ_s:M→̃M)_s∈[0,1] with φ_0=𝕀_M, φ_1()=', and _s:=φ_s() is a ribbon graph in M, possibly not intersecting N. 
Pick any s∈ [0,1] and choose a path γ_s as above such that (_s)_γ_s intersect N transversely at least once (this is why we need N of codimension at most 1). This condition is open and holds for small perturbations of (_s)_γ_s. By compactness we can pick finitely many times s_i such that the isotopy (_s_i)_γ_i∼ (_s_i+1)_γ'_i intersects N throughout (here γ'_i is the deformed γ_i). Now as explained above, there is an isotopy (_s_i+1)_γ'_i∼ (_s_i+1)_γ_i+1 that maintains at least one intersection with N. We have given a chain of isotopies from to ' intersecting N throughout, so any two ribbon graphs isotopic in _(M;X) are likewise in _^N(M;X). Now up to isotopy every skein relation can be confined to happen in a small ball, either contained or disjoint from N, in particular not affecting the intersection with N. With Lemma <ref> at our disposal, we can now proof the functoriality result: Definition <ref> gives the value on objects of . Disjoint union is sent to the tensor product in and the monoidal unit ∅ is sent to the one-object category with endomorphisms . The bulk of this proof is defining on embeddings and isotopies. Embeddings to Bimodule Functors. Let κ : Σ→Σ' be an embedding. We set [ (κ):(Σ) ⊗ (Σ')^op →; X ⊗ Y ↦_(Σ';Y,κ_*X) ] where κ_*(ι,W) := (κ∘ι,W). Morphisms T ⊆Σ×[0,1] and T' ⊆Σ'×[0,1] transform the admissible relative skein module by post-composition with κ_*T := (κ×𝕀)(T) and pre-composition with T'. If κ is surjective on connected components (or if the tensor unit of Å is in ) this bimodule is induced by a functor κ_* : (Σ) →(Σ'). We need to check that this construction preserves composition. Consider Σ_1κ^1Σ_2κ^2Σ_3. By definition, the composite bimodule on a pair of objects X_1∈(Σ_1) and X_3∈(Σ_3) is the coend ∫^X_2∈(Σ_2)_(Σ_2;X_2, κ^1_*X_1) ⊗_(Σ_3;X_3, κ^2_*X_2) There's a canonical map from this composition to _(Σ_3;X_3, (κ^2κ^1)_*X_1) which sends a simple tensor T_2 ⊗ T_3 in (<ref>) to κ^2_*T_2∘ T_3. We must show that this defines an isomorphism. We need to show that any skein S∈_(Σ_3;X_3, (κ^2κ^1)_*X_1) factors as S=κ^2_*T_2∘ T_3 for some T_2⊗ T_3 ∈_(Σ_2;X_2, κ^1_*X_1) ⊗_(Σ_3;X_3, κ^2_*X_2) , where X_2 is an admissible -labeling in Σ_2. We can always factor S as S = 𝕀_(κ^2κ^1)_*X_1∘ S = κ^2_*(𝕀_κ^1_*X_1)∘ S, but κ^1_*X_1 may not be admissible. We need to pull a strand of S to cross κ_2(Σ_2)×{1/2}. This is possible and well-defined up to coend relations by the arguments of Lemma <ref>. Isotopies to Transformations. Let H: κ^0 ⇒κ^1 be an isotopy, i.e. a continuous map H : Σ× [0,1] →Σ' with each H(-,t) an embedding, H(-,0) = κ^0, and H(-,1) = κ^1. Tracing out the isotopy we obtain an embedding ϕ_H : Σ× [0,1] →Σ' × [0,1] (p,t) ↦( H(p,t), t). An object X ∈(Σ) gives an -colored ribbon graph ϕ_H(X×[0,1]) from κ^0_*X to κ^1_*X in Σ'×[0,1] whose strands are colored and oriented by the colors and orientations of X. Isotopic isotopies give isotopic ribbon graphs. The natural transformation (H) : (κ^0) ⇒(κ^1) has components (H)_X,Y: _(Σ';Y,κ^0_*X) →_(Σ';Y,κ^1_*X) induced by composition with ϕ_H(X×[0,1]). Next we show naturality of (H). Naturality in Y is clear as we compose in the X direction. For naturality in X, we need to show that for any morphism T : X_1 → X_2 in (Σ) and any Y ∈(Σ'), the following diagram commutes: (κ^0)(X_1,Y) [r,"(κ^0)(T)"][d,"(H)_X_1,Y"'] [1cm] (κ^0)(X_2,Y) [d,"(H)_X_2,Y"] (κ^1)(X_1,Y) [r,"(κ^1)(T)"] [1cm] (κ^1)(X_2,Y) These maps are induced by composition with respectively (κ^1_*T)∘ϕ_H(X_1×[0,1]) and ϕ_H(X_2×[0,1]) ∘ (κ^0_*T). 
We will show that these two ribbon graphs are isotopic. First, consider the isotopy Φ : Σ× [0,1]× [0,1] →Σ' × [0,1] given by (p,t,s) ⟼ (κ^1(p),t) if 2/3s+1/3≤ t (H(p,3t-2s),t) if 2/3s≤ t ≤2/3s + 1/3 (κ^0(p),t) if t≤2/3s At s=0, Φ applies the embedding ϕ_H in the bottom third of the thickened surface and κ^1 in the top two thirds. At s=1 it applies κ^0 on the bottom two thirds and ϕ_H in the top third. At values in between it transitions from κ^0 to κ^1. Next let T be any isotopy representative which is strictly vertical outside of Σ× [1/3,2/3]. Then Φ(·,0)|_T = (κ^1_*T)∘ϕ_H(X_1×[0,1]) and Φ(·,1)|_T = ϕ_H(X_2×[0,1]) ∘ (κ^0_*T) as morphisms in the skein category of Σ'. We conclude that (<ref>) commutes, and therefore that (H) is indeed a natural transformation. §.§ Skein algebras The inclusion of the empty surface ∅Σ induces a morphism between -skein categories _(∅) = _→_(Σ) in , i.e. a presheaf-valued linear functor. This gives a presheaf _Σ∈_(Σ) which we call the distinguished object. See Remark <ref> for more details on _Σ. The modified skein algebra of the surface Σ with coefficients in is the endomorphism algebra of the distinguished presheaf in the presheaf category of the -skein category of Σ: _(Σ) := __(Σ)(_Σ) More explicitly, an element α of the -skein algebra of Σ is a collection of linear maps α_X : _(Σ;X,∅) →_(Σ;X,∅) natural in the admissible -labeling X. We will see in Remark <ref> that this notion differs from the non-unital admissible skein algebras of <cit.>. §.§.§ Old and new skeins We now take some time to compare _(Σ) and _Å(Σ). When =Å, Definition <ref> recovers the usual notion of a skein algebra, since in this case the distinguished presheaf is represented by any collection of disks all colored by the monoidal unit. By the Yoneda lemma, the associated skein algebra is isomorphic to its endomorphism algebra in the skein category[Recall from Remark <ref> that the usual notion of skein category is likewise recovered when = Å.], which is the usual skein algebra _Å(Σ) of the surface Σ. The skein algebra for a proper ideal ⊊Å is not a restriction of the usual skein algebra to -colored strands. On the contrary, every Å-colored ribbon graph defines an element in the -skein algebra. Not only that, but the -skein algebra appears to contain both elements and relations beyond those in the Å-skein algebra. These “new skeins” are only traditionally defined in the presence of -colored strands, but can exist on their own in the -skein algebra. The canonical algebra homomorphism _Å(Σ) →_(Σ) takes a closed Å-colored skein T∈_Å(Σ) to the natural transformation that acts on _(Σ;-,∅) by stacking T on top. The resulting ribbon graph is always skein-equivalent to an admissible -colored one precisely because is a tensor ideal. More generally, each inclusion of ideals ⊂𝒥 induces an algebra map _𝒥(Σ) →_(Σ). We will introduce the “new” skeins, which, as the reader can convince themselves, do not generally lie in the image of this map. The following definition follows <cit.>, though the finiteness assumptions have been dropped. Let ⊆Å be an ideal in a ribbon category. A bichrome graph T in a 3-manifold M is a framed oriented embedded graph whose edges are either colored by an object of or labeled “red”. We call the former “blue” edges. Red edges are required to be loops whose endpoints are adjacent in the cyclic ordering on the half-edges ending at their shared vertex. Only blue edges can end at the boundary. See Figure <ref>.
Vertices touching only blue edges are colored by morphisms in the usual way. For the morphism coloring mixed vertices, we fuse each pair of adjacent red strands and color the fused pair by Lyubashenko's coend := ∫^X ∈ X ⊗ X^* ∈. We can then color the vertex in the usual way but by a morphism in , using the Yoneda embedding as necessary. The bichrome graph T is called admissible if there is at least one blue edge per connected component of M. The admissibility condition allows us to turn bichrome graphs into -colored (blue) ones. The red-to-blue operation associates an -skein T to an admissible bichrome graph T in M. We describe the procedure for a single mixed vertex. To simplify our equations, we assume all red edges are adjacent. First pull a blue strand colored by Q ∈ near the vertex. Locally, we observe the evaluation P := Q ⊗ Q^* → and the original vertex →^⊗ k⊗ X where X = or X ∈. Composing we get a morphism f:P →^⊗ k⊗ X in . We have a natural isomorphism _(P, ^⊗ k⊗ X) ≃∫^X_1,…,X_k ∈_(P,X_1⊗ X_1^*⋯ X_k⊗ X_k^* ⊗ X) Choose a representative ∑_j λ_j f_j of f in the coend[The right hand side of (<ref>) is a colimit in vector spaces, so is a quotient of the direct sum of its components, which are Hom spaces.], with each f_j : P → X_j,1⊗ X_j,1^*⋯ X_j,k⊗ X_j,k^* ⊗ X a morphism in . Finally, define T := ∑_j λ_j T_j where T_j is the -colored ribbon graph where the i-th red strand is colored by X_j,i and the mixed vertex is colored by f_j. See Figure <ref>. A number of choices must be made in the red-to-blue operation, but the result is nevertheless well defined. Two skeins obtained via two different red-to-blue operations on the same admissible bichrome graph T are related by isotopies and admissible skein relations. There are two choices in the red-to-blue operation. First, there is the choice of a blue edge and the path used to to pull it near the mixed vertex. Second there is the choice of a representative for f in the right hand side of (<ref>). This is unique up to the coend relations, which leave the skein invariant by isotoping vertices along the red edges (recall that we assume that pairs of adjacent red half-edges are connected.) For path independence we do the usual trick of bringing both chosen blue edges near the vertex. Suppose one is labeled Q_1 and the other Q_2. Denote f_1 (resp. f_2) the morphisms in (<ref>) obtained by pulling the first (resp. second) edge along its chosen path. After pulling both edges, locally we observe a tensor product of evaluations and the original vertex. Their composition f_12 is equal to both f_1 ⊗ ev_Q_2 and ev_Q_1⊗ f_2. A choice of representative for either f_1 or f_2 provides a representative for f_12. The two red-to-blue modifications differ only by the choice of a representative for f_12, and hence do not affect T. Using the operation T ↦T we can now define skeins from bichrome graphs. The modified skein α^T∈_(Σ) associated with a bichrome graph T is the natural transformation whose components α^T_X: _(Σ;X,∅) →_(Σ;X,∅) are given by stacking T, then turning the resulting admissible bichrome graph blue using the red-to-blue operation. This is natural because the bichrome graph obtained after stacking is natural and the red-to-blue operation is well-defined. In the definition we've used that non-admissible bichrome graphs will become admissible after stacking on an element of _(Σ;X,∅). 
Note that if T is an admissible bichrome graph, it can be turned blue before stacking and α^T is the natural transformation coming from stacking the -colored ribbon graph T. If T is not admissible (i.e. has no blue edges) then α^T will not in general be the image of an Å-colored skein. The take-away here is that there are more ways of acting (naturally) on -skeins than there are of acting on all skeins, and therefore the -skein algebra is bigger than the Å-skein algebra. Using factorization homology (see Corollary <ref>) one can prove that the α^T generate the -skein algebra for surfaces with non-empty boundary. The red-to-blue operation builds on Lyubashenko's way of interpreting surgery on a 3-manifold, replacing the Kirby color in the semisimple case. It already appeared in <cit.> in the case when = Proj() ⊆ is the ideal of projectives in a finite unimodular ribbon tensor category . Note that we write as a coend over instead of Å, so that we get -colored ribbon graphs from the summands. The two coends are canonically isomorphic <cit.>. Admissible skein algebras whose elements are admissible closed ribbon graphs in the thickened surface where introduced in <cit.>. They do not allow the empty skein and are therefore non-unital. There is a canonical algebra map from this admissible skein algebra to the usual Å-skein algebra given by inclusion. Composing with the canonical morphism _Å(Σ) →_(Σ) of Definition <ref>, we see that the -skein algebra contains but is not generated by elements coming from the admissible skein algebra. Consider for example the empty skein. We will see that all three algebras act on admissible skein modules. These algebra maps preserve the action and therefore the actions of the admissible and Å-skein algebras can be recovered by that of the -skein algebra. §.§ Skein modules of 3-cobordisms As usual, a 3-dimensional cobordism M between two surfaces induces a bimodule functor between their -skein categories. To see how this works, we take a moment to consider how orientation affects the skein category. Let Σ be Σ with opposite orientation. There is a categorical equivalence _(Σ) ≃_(Σ)^op induced by the orientation reversing diffeomorphism rev: Σ× [0,1] →Σ× [0,1] (x,t) ↦ (x,1-t) . An object (ι,W) is mapped to itself (though now ι is orientation reversing) and a ribbon graph T⊆Σ×[0,1] is mapped to rev(T)⊆Σ×[0,1] which now goes from the target of T back to its source. Let M: Σ' →Σ be a cobordism. That is, M is a compact oriented 3-manifold with a diffeomorphism ∂ M ≃Σ⊔Σ' where ∂ M is oriented with outward normal. The admissible skein bimodule functor of M is the functor _(M): _(Σ)⊗_(Σ')^op → (X,Y) ↦_(M;X,Y) The action of morphisms in _(Σ) and _(Σ') is induced by stacking in a neighborhood of ∂ M diffeomorphic to (Σ⊔Σ')×[0,1]. Note that M above is a cobordism from Σ' to Σ, whereas its skein bimodule functor goes in the other direction. This contravariance also appears in <cit.>, where skeins are treated as the dual theory to a TQFT. We emphasize that this is only a nuisance and not a deep issue, since Cob≃Cob^op via orientation reversal. The results of <cit.> generalize to this context. Skein bimodule functors extend skein categories to a contravariant symmetric monoidal functor _: Cob_2+1→ i.e. skein categories and skein bimodule functors form a categorified 3-TQFT. We start by showing that skein bimodule functors compose. 
For any composable pair of cobordisms Σ_1 M_12→Σ_2 M_23→Σ_3 and objects X_1, X_3 of (Σ_1), (Σ_3), we show that the following canonical gluing map is an isomorphism: ∫^X_2 ∈(Σ_2)_(M_12;X_1,X_2) ⊗_(M_23;X_2,X_3) →_(M_12Σ_2∪M_23;X_1,X_3). The proof of <cit.> applies here once we have made sure that every ribbon graph intersects Σ_2 in an admissible X_2 and that isotopies preserve this property. This is ensured by Lemma <ref> with M=M_12Σ_2∪M_23 and N=Σ_2. Symmetric monoidality was already observed in Theorem <ref> on objects and follows from _(M⊔ M', X⊔ X') ≃_(M,X)⊗_(M',X') on morphisms. When =Å, the usual skein module with empty boundary object is obtained by evaluating the skein bimodule functor on the empty object. When ⊊Å is a proper ideal, we again generalize the construction by letting the distinguished presheaf play the role of the empty object. Let M: Σ' →Σ be a cobordism as above. The modified skein module _(M) of M with coefficients in is the _(Σ)-_(Σ')-bimodule _(M) := __(Σ')(_Σ',(M)(_Σ)) where (M) denotes the essentially unique cocontinuous extension of (M):_(Σ) →_(Σ') to (M): _(Σ) →_(Σ'), see <cit.>. Let's unpack this definition some. If M = Σ× [0,1] is a cylinder, then (M) is the identity and _(M) coincides with the -skein algebra introduced before. In general the presheaf (M)(_Σ) : _(Σ')^op→ is given by the coend Y ↦∫^X ∈_(Σ)_(M;X,Y)⊗_(Σ×[0,1]; X, ∅) ≃_(M;∅,Y) where the last equivalence uses the fact that (<ref>) is an isomorphism. An element α of _(M) is therefore a natural transformation α : _Σ'⇒_(M;-,∅) determined by its coefficients α_Y : _Σ'(Y) := _(Σ'×[0,1]; Y , ∅) →_(M; ∅, Y) which are natural in Y ∈_(Σ'). The action of _(Σ') := __(Σ')(_Σ') is given by composition in _(Σ'). The action of an element β in _(Σ) is given by post-composition with _(M)(β). The empty skein in the -skein module of M makes sense if and only if the outgoing boundary Σ' M is surjective on connected component. This is closely related to the fact that the undecorated <cit.> theories are defined only for these cobordisms. Algebraically, it is an instance of the fact that the inclusion of the unit in a non-semisimple ribbon category is almost, but not completely, 3-dualizable <cit.>. Skein modules as we've defined them treat incoming and outgoing boundary components very differently. We consider a compact 3-manifold M as a cobordism Σ' →Σ in two canonical ways. Taking Σ' = ∅, Σ = ∂ M, _(M) is the admissible skein module, where we do not allow the empty skein. Hence the -skein algebra of ∂ M also acts on the admissible skein module of M introduced in <cit.>. When Σ' = ∅, then _(Σ') = and _Σ' =. By (<ref>) we have _(M)≃∫^X ∈_(Σ)_(Σ;X, ∅)⊗_(M;X) ≃_(M;∅). On the other hand, we can take Σ' = ∂ M and Σ = ∅. If each connected component of M has non-empty boundary, then any Å-colored skein in M gives an element of the -skein module and there are new elements coming from the bichrome graphs described above. § RELATION TO FACTORIZATION HOMOLOGY The main result of this section is Theorem <ref>, which establishes an identification of _(Σ) with the factorization homology over Σ of the disk algebra associated to . §.§ Factorization homology Factorization homology is defined in <cit.> for any _n-algebra with values in a symmetric monoidal (∞,1)-category. We will be interested in _2-algebras in . As is a (2,1)-category, it is enough to consider the homotopy (2,1)-categories of disks and surfaces. The (2,1)-category of disks is the full subcategory of with objects given by finite disjoint unions of the standard disk. 
A -algebra is a symmetric monoidal 2-functor out of . The factorization homology of a -algebra : → over a surface Σ is the homotopy 2-colimit ∫_Σ := 2– . Where : _/Σ→ is the composite functor _/Σfor→→ and _/Σ is the slice (2,1)-category. By <cit.>, <cit.> and using the universal property of colimits, this assignment extends uniquely to a symmetric monoidal 2-functor ∫_-:→. Let's unpack this definition, taking advantage of the fact that we're dealing with (2,1)-categories. The slice (2,1)-category _/Σ has: * Objects: disks over Σ, i.e. embeddings ι : d ↪Σ where d ∈. * 1-Morphisms: embeddings of disks over Σ, i.e. d [rr,"ρ"][dr, "ι"', ""name=iota, above right] d' [dl, "ι'"] Σ[from=iota,to=1-3,Rightarrow,"φ"' near start, shorten > = 15] . * 2-Morphisms from (ρ,φ) to (ρ',φ'): isotopies over Σ, i.e. d [rr, bend left,"ρ", ""name=rho, below][rr, bend right,"ρ'"', ""name=rhoprime, above] d' [from=rho,to=rhoprime,Rightarrow,"h"] such that (ι'h)∘φ=φ' as 2-morphisms in , i.e. up to higher isotopy. For any category X, let _X : _/Σ→ be the functor that assigns X to every object, the identity to every 1-morphism, and equality to every 2-morphism. By definition, the 2-colimit of the functor[By the colimit of a functor, we mean the colimit over the diagram defined by its image in the target category. e.g. there is an arrow in the diagram for each morphism in the source category.] is the data of a category ∫_Σ together with a universal cocone. In other words we have a strong natural transformation c:⇒_∫_Σ which induces a categorical equivalence for each X∈: _(∫_Σ, X)∼→(, _X). Here the left hand side is the groupoid of bimodule functors and natural isomorphisms, while the right hand side is the groupoid of strong natural transformations and modifications (also sometimes called transfors or 2-transfors), which we recall in Appendix <ref>. The equivalence (<ref>) sends a morphism G: ∫_Σ→ X to the composition c⟹_∫_Σ_G⟹_X , which is a cocone over the slice category with tip X, see Figure <ref>. For any 2-functor F : →, we have a canonical cocone c_F over the slice category _/Σ with tip F(Σ), i.e. a natural transformation c_F: (F|_∘for) ⇒_F(Σ). Its component on an object (ι:d→Σ) is the 1-morphism (c_F)_ι = F(ι): F(d)→ F(Σ) in and its component on a 1-morphism Φ=(ρ,φ):ι→ι' of _/Σ is (c_F)_Φ = F(d) [d, "F(ρ)"'] [r, "F(ι)"] F(Σ) [d, equal] F(d') [r, "F(ι')"'] [ur, Rightarrow, "F(φ)"] F(Σ) A symmetric monoidal 2-functor F:→ coincides with factorization homology of F|_ if and only if each cocone c_F: (F|_∘for) ⇒_F(Σ) induces an equivalence _(F(Σ), X)∼→(F|, _X) for every X in and Σ in . Factorization homology is proven to exist and be well-behaved in a ⊗-sifted cocomplete symmetric monoidal ∞-category. This is the case of the bicategory of presentable cocomplete linear categories, but we do not know this result for . Therefore a priori we should always write factorization homology with values in . It is a consequence of Theorem <ref> that it indeed lies in . §.§ Decomposition properties For the proof of Theorem <ref>, we will want to decompose any morphism in _(Σ) into a composition of morphisms happening over disks. We need a strong version of this decomposition which does not pass to isotopy classes because we also need a decomposition result for relations between ribbon graphs. By ribbon graph we always mean an admissible -colored ribbon graph, sometimes considered up to admissible skein relations. Let T be an -colored ribbon graph from (ι, W) to (ι', W') in Σ×[0,1], not considered up to isotopy. 
A decomposition of T is the data = (t_i, κ^i:D_iΣ, ρ_i:d_i D_i)_i=1,…,n of times 0=t_0<t_1<⋯<t_n=1 and embeddings of disks κ^i:D_iΣ and ρ_i:d_i D_i such that T∩(Σ×[t_i-1,t_i])⊆κ^i(D_i) × [t_i-1,t_i] and T is transverse to each Σ×{t_i-1} and intersects it at the centers of the disks κ^i∘ρ_i (d_i)⊆κ^i(D_i)∩κ^i-1(D_i-1) with correct framing. For compatibility with the prescribed source and target, we ask that κ^1∘ρ_1 = ι and add the convention that κ^n+1∘ρ_n+1 = ι'. To satisfy the admissibility condition on -colored ribbon graphs, we also ask that each T∩(Σ×{t_i-1}) is non-empty. See Figure <ref>. From such a decomposition we obtain in particular that each T∩(Σ×[t_i-1,t_i])⊆κ^i(D_i) × [t_i-1,t_i] ≃ D_i×[0,1] defines a morphism T_i: (ρ_i, W_i) → (ρ_i', W_i') in _(D_i), where ρ_i' is the unique embedding ρ_i':d_i+1 D_i such that κ^i∘ρ_i' = κ^i+1∘ρ_i+1 and that the two colors W_i,W_i-1' of T at the center of each κ^i∘ρ_i (d_i) agree. Each induces a decomposition T= κ^n_*T_n ∘⋯∘κ^1_*T_1 where T is the morphism in _(Σ) represented by T. Not every T admits a decomposition, but each skein T has a representative that can be decomposed as above. The main requirement is that T is in good position: A ribbon graph T is in good position if * the height function T⊆Σ× [0,1]→ [0,1] has isolated critical points, and * at all times t∈[0,1], there is an edge of T intersecting the level Σ×{t}. We begin with a lemma that allows us to put ribbon graphs into good position. Every ribbon graph in good position admits a decomposition. Every morphism T in _(Σ) has a representative in good position, and is therefore a composition of morphisms happening over disks. Let T⊂Σ× [0,1] be a ribbon graph in good position. We will build a decomposition of T. By definition T intersects every level Σ×{t} at a finite (non empty) collection of points, and this intersection is transverse for all but finitely many levels. For any t, let D_t be a small neighborhood of the points T∩(Σ×{t}) consisting of disks in Σ×{t}. By continuity of the embedding, there exists an interval U_t=(t-ε,t+ε) such that T∩ (Σ× U_t)⊆ D_t× U_t. We obtain a cover {U_t}_t∈ [0,1] of the interval. By compactness we can find a finite sub-cover (U_1,…,U_n). We restrict to a minimal subcover and order the intervals U_i by increasing infimum. Each U_i has associated disks D_i and inclusion κ^i:D_i→Σ. Say has source and target ι and ι'. We add to the list intervals U_1 = [0,ε) and U_n = (1-ε,1] for small enough ε such that T∩ (Σ× U_1) lies in ι(d)× U_1 and T∩ (Σ× U_n) lies in ι'(d')× U_n. We can pick any t_i ∈ U_i∩ U_i+1 outside the finitely many critical values of the height function (and set t_n=1). We can moreover choose any embedding ρ_i: d_i → D_i corresponding to the inclusion of a small neighborhood of the finitely many framed points T∩(Σ×{t_i}) which lies in both D_i and D_i+1. Note that = (t_i, κ^i, ρ_i)_i=1,…,n is a decomposition of T. Now consider any morphism T in _(Σ). Having isolated critical points is a generic condition on the height function and we can find an isotopy representative that satisfies it. By admissibility, its source and target are non-empty and it satisfies the second condition of Definition <ref> in a small neighborhood of 0 and 1. Choose any point p on any edge of and any path γ in generic position going from p to a point of height close to 1 and then to a point of height close to 0 in (Σ× [0,1])∖. Isotope by pulling a small neighborhood of p along the path γ. The resulting ribbon graph _γ is a representative of T in good position. 
We want a similar result stating that the relations between ribbon graphs are local. We will need a strong form of locality that changes only one of the T_i's in a decomposition at a time. Two ribbon graphs and ' are locally skein equivalent if they have a common decomposition = (t_i, κ^i:D_iΣ, ρ_i)_i=1,…, n and there is an index i_0 such that and ' agree strictly outside κ^i_0(D_i_0)× [t_i_0-1,t_i_0] and are related by isotopy and skein relations inside, i.e. T_i_0 = T_i_0' in _(D_i_0). The relations between morphisms in _(Σ) are generated locally. More precisely, two ribbon graphs and ' in good position represent the same morphism in _(Σ) if and only if they are related by a sequence of local skein relations as described above (possibly changing the decomposition). We show that two isotopic ribbon graphs are related by a sequence of local skein relations. Let φ = (φ_s:Σ×[0,1] →̃Σ×[0,1])_s∈[0,1] be an ambient isotopy of Σ×[0,1] with φ_0=𝕀 and φ_1()='. Denote _s := φ_s() for s∈ [0,1]. We first show that we can choose φ such that all ribbon graphs _s are in good position. For generic φ, each height function _s→ [0,1] has isolated critical points (we do not ask that these are non-degenerate.) Consider the operation ↔_γ of pulling a strand close to Σ×{0} and to Σ×{1} along a path γ described in the proof of Lemma <ref>. This operation is induced by an ambient isotopy which deforms through ribbon graphs in good position. Applying the ambient isotopy φ to _γ, we obtain a family of ribbon graphs φ_s(_γ) = (_s)_φ_s(γ) which are each obtained from the _s by pulling a strand along the path φ_s(γ). The _s have non-empty boundary points, so we can find a global ε such that the height function surjects onto [0,ε) and (1-ε,1]. The ambient isotopy φ is the identity on the boundary so there exists a small δ>0 such that φ(Σ×[0,δ)) stays in Σ×[0,ε) and similarly near 1. Choosing the path γ that's δ-close to Σ×{0} and Σ×{1}, we get that every φ_s(_γ) is in good position. Finally, φ_1(_γ) = '_φ_1(γ) is also isotopic to ' through ribbon graphs in good position because ' is. We have produced an isotopy ∼_γ∼'_φ_1(γ)∼' that only passes through ribbon graphs in good position. We can now suppose that φ is such that every _s is in good position. For any fixed s_0∈ [0,1] we can choose a decomposition (t_i, κ^i:D_iΣ, ρ_i:d_i D_i)_i=1,…,n of _s_0. Now the condition that (t_i, κ^i:D_iΣ, ρ_i:d_i D_i)_i=1,…,n is a decomposition of _s is open which means that * for small ε, (t_i, κ^i:D_iΣ, ρ_i:d_i D_i)_i=1,…,n is a decomposition of _s_0±ε, and * for small δ_1,…,δ_n-1, (t_i±δ_i, κ^i:D_iΣ, ρ_i:d_i D_i)_i=1,…,n is a decomposition of _s_0±ε Therefore we have a globally valid decomposition for the isotopy ψ_s_0 := (φ_s)_s∈ [s_0-ε,s_0+ε]. Consider the covering (Σ× (t_i-1-δ_i-1, t_i+δ_i))_i=1,…,n of Σ× [0,1]. Up to higher isotopy, ψ_s_0 can be decomposed into a composition of isotopies supported on each Σ× (t_i-1-δ_i-1, t_i+δ_i), <cit.>. Each of these isotopies is a local skein relation in the sense above, using a decomposition (t_i±δ_i, κ^i:D_iΣ, ρ_i:d_i D_i)_i=1,…,n with appropriate signs. Finally, by compactness of [0,1], the isotopy φ can be decomposed into a finite composition of such ψ_s_0's, and we have shown locality for isotopies. Finally, we can use ambient isotopies to confine any skein relation to a small ball in the thickened surface, hence to a local skein relation. §.§ Modified skein categories compute factorization homology Let ⊆Å be a tensor ideal in a ribbon category. 
As a corollary of Theorem <ref>, the restriction of _ to is a unital disk algebra in . [Passing to gives us the disk algebra in corresponding to under the equivalence between disk algebras and balanced braided categories.] We will use the notation =_|_ and will denote its extension to the slice category _/Σ by . The central claim is: There is an equivalence of 2-functors ∫_-≃_(-) between factorization homology of in and -skein categories. Let c : ⇒_(Σ) be the strong natural transformation obtained by functoriality of and the embeddings into Σ in the slice category, as described in Remark <ref>. We will show that for any linear category X the functor c_* : _((Σ), X) →(,_X). is an equivalence of categories. When X = (^op,) is a presheaf category, we have _((Σ),):=_((Σ), X) and the equivalence (<ref>) exhibits _(Σ) as the 2-colimit defining factorization homology. We split the proof into three claims. Claim: c_* is essentially surjective. We will show that each strong natural transformation α : ⇒_X is isomorphic to c_*F for some functor F: _(Σ)→ X. Remember that the data specifying α is a functor α_ι:(d)→ X for each embedding ι:d↪Σ and a natural isomorphism α_Φ:α_ι⇒ α_ι'∘(ρ) for each 1-morphism Φ= (ρ:d→ d', φ: ι⇒ι'∘ρ) in the slice category _/Σ. We will frequently use special 1-morphisms where ι=ι'∘ρ and φ is the identity isotopy, denoted Φ^ρ = (ρ, 𝕀_ι). For objects, we set F(ι, W) := α_ι(W) , where by abuse of notation W is identified with the object (𝕀_d,W) ∈(d). The main part of this proof is constructing F on morphisms and showing that it is well defined. Step 1: Definition: Let T:(ι,W)→(ι',W') be a morphism in (Σ). By Lemma <ref>, we have a decomposition T= κ^n_*T_n ∘⋯∘κ^1_*T_1 for a finite collection of embeddings κ^i:D_i↪Σ. This decomposition is not unique. Applying α_κ^i:(D_i)→ X on T_i, we get a morphism α_κ^i(T_i):α_κ^i(ρ_i,W_i)→α_κ^i(ρ_i',W_i'). We would like to compose these morphisms, but they are not composable on the nose. We have to conjugate each one by isomorphisms (α_Φ^ρ_i)_W_i: α_κ^iρ_i(W_i) →̃α_κ^i+1(ρ_i,W_i) and (α_Φ^ρ_i')_W_i+1: α_κ^iρ_i'(W_i') →̃α_κ^i(ρ_i',W_i'). The composable versions of the α_κ^i(T_i) are β_i := (α_Φ^ρ_i')_W_i'^-1∘α_κ^i(T_i) ∘ (α_Φ^ρ_i)_W_i: α_κ^iρ_i(W_i)→α_κ^iρ_i'(W_i') . We define F(T,):= β_n ∘⋯∘β_1 and next prove that this does not depend on the choice of the decomposition of T. 10pt Step 2: Independence of decomposition for a fixed isotopy representative: We will show that for an isotopy representative T of T in good position, the definition F(T) := F(T,) for any decomposition of T does not depend on . Again, such a decomposition exists by Lemma <ref>. We prove independence using a common refinement of two decompositions of T. Consider two choices = (t_i, κ^i: D_i↪Σ, ρ_i:d_i↪ D_i)_i≤ n and ' = (t_j', κ^j': D_j'↪Σ, ρ_j':d_j'↪ D_j')_j≤ n'. Up to using an intermediate choice with a small perturbation of the t_i's, we can suppose that the t_i and the t_j' are all distinct. Therefore their union (t̂_l)_l≤ n+n'-1 gives a vertically-finer decomposition of either. Each T∩(Σ×[t̂_l-1,t̂_l]) lies in both a D_i and a D_j' and therefore in their intersection. A small neighborhood of the framed center of the image of each ρ_i, ρ_i' also lies in their intersection. The intersection D_i ∩ D_i' may not be a disk, but we can re-decompose T there. We obtain a new decomposition of T which is both vertically and horizontally finer than and '. 
It is related to either by a sequence of the following moves: * Object refinement: for i>1, replace ρ_i:d_i D_i by a smaller ρ̂_i: d̂_i→ D_i such that ρ̂_i(d̂_i)⊆ρ_i(d_i) which gives the same framed points on the centers. * Horizontal refinement: suppose that there is a subdisk D̂_i κ D_i containing both ρ_i(d_i) and ρ_i+1(d_i+1) and such that T∩(Σ×[t_i, t_i+1]) lies in κ^i(D̂_i)×[t_i, t_i+1]. Replace D_i by D̂_i. * Vertical refinement: suppose that D_i+1=D_i at time t_i+1. Then forget (t_i+1, D_i+1, ρ_i+1). (The refinement is actually going in the other direction but this one is easier to write.) Let us show invariance of F(T) under each of these moves: * For object refinement, we can factor ρ̂_i as d̂_i κ d_i ρ_i D_i. Therefore by definition of a strong natural transformation (<ref>) we have α_Φ^ρ̂_i = α_Φ^ρ_i∘α_Φ^κ. The extra term in β_i cancels with its inverse in β_i-1 in the definition of F(T). * For horizontal refinement, we show that the two possible definitions of β_i agree. We drop indices i and restrict to Σ×[t_i, t_i+1] as the rest will not feature. We need to compare the top and bottom compositions below α_ικρ(W) [d, equal] [r, "(α_Φ^κρ)_W"] α_ι(κρ,W) [r, "α_ι(κ_*T_1)"] [d, <-, "(α_Φ^κ)_(ρ,W)"] α_ι(κρ',W') [r, "(α_Φ^κρ')_W^-1"] [d, <-, "(α_Φ^κ)_(ρ',W')"] α_ικρ'(W') [d, equal] α_ικρ(W) [r, "(α_Φ^ρ)_W"] α_ικ(ρ,W) [r, "α_ικ(T_1)"] α_ικ(ρ',W') [r, "(α_Φ^ρ')_W'^-1"] α_ικρ'(W') The first and last square commute by (<ref>) and the middle square commutes by naturality of α_Φ^κ. * For vertical refinement, we simply use that α_κ^i is a functor and preserves composition. We have shown that F(T) is well defined. Step 3: Invariance under isotopy and skein relations: Let and ' be two ribbon graphs in good position which represent the same skein T. We must show that F() = F('). We have shown Lemma <ref> that it is enough to consider local skein relations T∼T' where T and T' have a common decomposition and agree strictly except that one of the T_i's change by an equivalent skein in (D_i). The definition of F is invariant under such a move, as α_κ^i(T_i) is unchanged. Step 4: c_*F is isomorphic to α: We define a modification m:c_*F ⇛α. This is the data of a natural isomorphism F ∘(ι) ⇒α_ι between functors (D) → X for every object ι:DΣ of _/Σ. On objects of the form (𝕀_D, W), c_*F agrees with α by definition. On an object (ρ:d D,W), we have an isomorphism (α_Φ^ρ)_W:c_*F(ρ,W):= α_ιρ(W) →α_ι(ρ,W). We set m(ι)_(ρ,W) := (α_Φ^ρ)_W . Let us check that this defines a natural transformation m(ι). Let T: (ρ,W)→ (ρ',W') be a morphism in (D). Then (ι)(T) = ι_*T is already decomposed with n=1. Naturality is built in the definition (<ref>): F(ι_*T) := β_1 := (α_Φ^ρ')_W'^-1∘α_ι(T) ∘ (α_Φ^ρ)_W . Now, let us check that m is a modification. Let Φ = (ρ,φ):ι→ι' be a 1-morphism in _/Σ. We need to check (<ref>) that [column sep=50pt] (d) [d, "(ρ)"'] [r, "F∘(ι)"] X [d, equal] (d') [r, "F∘(ι')",""name=alpha,below] [d, equal] X [to = 1-2, from= 2-1, Leftrightarrow, "(c_*F)_Φ"] [d, equal] (d') [r, "α_ι'"', ""name=beta,above] X [from=beta,to=alpha,Leftarrow,"m_ι'"] ?= [column sep=50pt] (d) [d, equal] [r, "F∘(ι)",""name=alpha,below] X [d, equal] (d) [d, "(ρ)"'] [r, "α_ι", ""name=beta,above] X [d, equal] (d') [r, "α_ι'"] X [to = 2-2, from= 3-1, Leftrightarrow, "α_Φ"] [from=beta,to=alpha,Leftarrow,"m_ι", shorten=5] It is enough to check it on objects of the form (𝕀_d,W) which generate. Then we have to check that (α_Φ^ρ)_W ∘ F((φ)_W) ?= (α_Φ)_W for any (𝕀_d,W) in (d). 
This is not immediate: in the left hand side F((φ)_W) is computed using α on objects of _/Σ evaluated on morphisms in the skein categories of disks. The right hand side uses α on 1-morphisms of _/Σ. We need to show that these are related, and essentially this is true because morphisms in the skein category encode isotopies happening over disks. We first show that (<ref>) holds for the special case of an isotopy happening inside a bigger disk. Consider two embeddings ρ, ρ':d D and an isotopy ψ:ρ⇒ρ' in . Then given an embedding κ:DΣ, we have a 1-morphism Ψ = (ρ',κψ: κρ⇒κρ') from κρ:dΣ to κ:D→Σ in _/Σ. This 1-morphism really comes from an isotopy living over a disk, and indeed it is isomorphic to the 1-morphism Φ^ρ=(ρ, 𝕀_κρ) by the following 2-morphism in _/Σ: [row sep = 40pt] d [bend left,rr,"ρ",""name=rho,below] [dr, "κρ"', ""name=kaprho, above, pos = 0.8,""name=kaprhoup, above, pos = 0.5] D [dl, "κ", ""name=kap, above] Σ[from = kaprho, to = 1-3, Rightarrow, bend right = 10pt, "κψ"above= 1pt, pos = 0.25 , shorten > = 10pt][from = kaprhoup, to = 1-3, equal, dashed, bend left = 10pt, shorten > = 10pt] [from = 1-1, to = 1-3, bend right,"ρ'" pos = 0.15,""name=rhop,above] [from = rho, to = rhop, Rightarrow, "ψ" pos = 0.4] The compatibility (<ref>) of α with 2-morphisms gives precisely (α_Ψ)_W = α_κ((ψ)_W) ∘ (α_Φ^ρ)_W =: (α_Φ^ρ')_W ∘ F((ψ)_W) . To prove the general formula we need to decompose Φ into such Ψs and Φ^ρs. Let 1/2d denote the disjoint union of radius 1/2 disks in the disjoint union of radius 1 disks d. Denote ν:1/2 d d the inclusion and r=(2-s/2𝕀:d→ d)_s∈[0,1] the retraction of d on the image of ν. By continuity and compacity, there exists t_0=0<t_1<…<t_n=1 such that φ_s(1/2d) ⊆φ_t_i(d) for s∈ [t_i-1,t_i]. In particular φ|_[t_i-1,t_i]×1/2d is of the form (φ_t_i)ψ_i for some isotopy ψ_i between ν and another embedding ρ_i':1/2d d. Together with the retraction, this gives a 1-morphism Ψ_i = [column sep = 100pt, row sep = 40pt] d [d, "φ_t_i-1" pos = 0.3] [r, "1/2𝕀"] 1/2d [d, "φ_t_i-1|_1/2d" pos = 0.3] [r, "ν"] d [d, "φ_t_i" pos = 0.3] Σ[r, equal] Σ[r, equal] Σ[from = 2-1, to = 1-2, "φ_t_i-1r"' sloped, Rightarrow, shorten = 10pt] [from = 2-2, to = 1-3, "φ|_[t_i-1,t_i]×1/2d"' sloped, Rightarrow, shorten = 10pt] induced by an isotopy ψ_ir : 𝕀_d ⇒ρ_i'∘1/2𝕀_d in . The 1-morphism Φ is isomorphic, using the retraction r, to the composition Φ≃Φ^ρ∘Ψ_n∘⋯∘Ψ_1 Using that α preserves composition (<ref>) and applying (<ref>) on every term gives us a formula for the right hand side of (<ref>) which agrees with the formula for the left hand side given by the decomposition = (t_i, κ^i:=φ_t_i:dΣ, ρ_i:=ν: 1/2d d) of (φ)_W. Claim: c_* is faithful. We will show that c_* is injective on morphisms. This proof comes down to unraveling definitions until we can compare two collections of morphisms in X. Let η : F ⇒ G be a natural transformation between arbitrary functors F,G: (Σ) → X. It is entirely determined by its components: {η_(ι, W)∈_X(F(ι,W), G(ι,W)) | ι : d ↪Σ, W ∈(d) }. The modification c_*η : c_* F ⇛ c_*G is likewise determined by its components: {. (d) [r,bend left=60,"c_*G(ι)",""name=G,below] [r,bend right=60,"c_*F(ι)"',""name=F,above] [5mm] X [from=F,to=G,Rightarrow,"c_*η_ι" description] | ι : d ↪Σ}. Here we've used that (ι:d ↪Σ) = (d) while _X(ι) = X. Each c_*η_ι is a natural transformation and so also determined by components. These are indexed over objects W ∈(d). Hence (<ref>) can be re-written as {. (c_*η_ι)_W : c_*F(ι)(W) → c_*G(ι)(W) | ι : d ↪Σ, W ∈(d)}. 
Next, recall from the construction of c_* in <ref> that c_*F(ι)(W) = F((ι)(W)) =: F(ι,W) (same for G) while (c_*η_ι)_W = η_(ι,W). It follows that the collections in (<ref>) and (<ref>) have the exact same elements and indexing set. Therefore if c_* η = c_* η' as modifications, it would mean that η_(ι,W) = η'_(ι,W) as morphisms in X for all objects (ι,W) ∈(Σ). This can happen if and only if η = η' as natural transformations. Claim: c_* is full. We will show that c_* is surjective on morphisms. Let m : α⇛β be a modification between strong natural transformations α,β : ⇒_X. We are interested in fullness, so we assume α = c_*F and β = c_*G for functors F,G: (Σ) → X. As in (<ref>), m is determined by its constituent natural transformations m_ι for ι : d ↪Σ, which in turn are described by their component morphisms (m_ι)_W, with W ∈(d). We define a (presumed) natural transformation η : F ⇒ G by η_(ι,W) := (m_ι)_W. It remains to show that η is indeed natural. Let T : (ι: d ↪Σ, W) → (ι' : d' ↪Σ, W') be a morphism in (Σ). We will prove that the following diagram in X is commutative: F(ι, W) [r,"F(T)"] [d,"η_(ι,W)"'] F(ι', W') [d,"η_(ι',W')"] G(ι, W) [r,"G(T)"'][ru,equal,"?"] G(ι',W'). Let T = κ^n_*T_n ∘⋯∘κ^1_*T_1 be a decomposition over disks with the T_i as in (<ref>). Each component m_κ^i of the original modification is a natural transformation, so for each T_i: (ρ_i, W_i) → (ρ_i', W_i') in (d_i) we have a commutative square in the category X: F(κ^iρ_i, W_i) [r,"F(κ_*^iT_i)"] [d,"η_(κ^iρ_i,W_i)"'] F(κ^iρ_i', W_i') [d,"η_(κ^iρ_i',W_i')"] G(κ^iρ_i, W_i) [r,"G(κ_*^iT_i)"] G(κ^iρ_i',W_i'). Gluing these diagrams together we have: [column sep = 14pt] F(ι,W) [r,equal][d,"η_ι,W"] [-6mm] F(κ^1ρ_1,W_1) [r,"F(κ_*^1T_1)"][d, "η_κ^1ρ_1,W_1"] [3mm] F(κ^1ρ_1',W_1') [r,equal][d,"η_κ^1ρ_1',W_1'"] [-6mm] F(κ^2ρ_2,W_2) [r,"F(κ_*^2T_2)"][d, "η_κ^2ρ_2,W_2"] [3mm] ⋯[r,"F(κ_*^nT_n)"] [3mm] F(κ^nρ_n',W_n') [r,equal][d,"η_κ^nρ_n',W_n'"] [-6mm] F(ι',W') [d, "η_ι',W'"] G(ι,W) [r,equal] G(κ^1ρ_1,W_1) [r,"G(κ^1_* T_1)"] G(κ^1ρ_1',W_1') [r,equal] G(κ^2ρ_2,W_2) [r,"G(κ_*^2T_2)"] ⋯[r,"G(κ^n_*T_n)"] G(κ^nρ_n',W_n') [r,equal] G(ι',W') It follows that (<ref>) is a commutative square, hence η : α⇒β is a natural transformation. Since (c_*η_ι)_W := η_ι,W = (m_ι)_W, we conclude that c_* is full. When contains the unit, i.e. =Å, we obtain exactly <cit.>'s result. Note that our proof is quite different, as Cooke shows excision properties of skein categories and the characterization of factorization homology of <cit.>. The original motivation for our approach was that it would be easier to generalize than the excision arguments. We will want to use the theorem above to relate our topological constructions to the results of <cit.>. Let us first translate our result to their context. We can embed in the bicategory of cocomplete presentable linear categories and cocontinuous functors by free cocompletion ↦:= (^op,). This embedding is symmetric monoidal. Its essential image is the full sub-bicategory spanned by categories with enough compact-projectives. See <cit.> for more details. Let be an oriented 𝔼_2-algebra in . Suppose that is cp-rigid and that its balancing is a ribbon structure on its subcategory of dualizable objects Å. Then set :=^cp the subcategory of compact-projective objects, we have an equivalence of symmetric monoidal 2-functors ∫_-≃_(-) between factorization homology with coefficients in and free cocompletions (or presheaf categories) of -skein categories. 
By definition <cit.>, being cp-rigid means that it has enough compact-projectives and its compact-projectives are dualizable. The first condition implies that ≃. The second asks that ⊆Å. It is a tensor ideal by the same proof as <cit.>. Note that Å is a priori a rigid balanced category, and the extra condition we ask is that the balancing on the dual is the dual of the balancing. In the proof of the theorem above we actually show that (<ref>) is an equivalence for any linear category X, not only for presheaf categories. By the universal property of free cocompletions we get _ (_(Σ), X) ≃_(_(Σ),X) ≃(|_∘for,_X) which, using the universal property of free cocompletion again, exhibits _(-) as the 2-colimit defining factorization homology in . We just have to check that the -algebra |_ is equivalent the one induced by , namely that (𝔻) agree with as a balanced braided category. We already have an equivalence of categories ≃≃(𝔻). The braiding and balancing in (𝔻) between two objects of (𝔻) ≃ are defined, via the Reshetikhin–Turaev functor, by the braiding and balancing in hence agree with the braiding and balancing of there. Now generates either category under colimits, and natural transformations are determined by their value on such a generating subcategory. §.§ Computation of -skein algebras Showing that -skein categories agree with factorization homology gives us access to a new toolkit. The following statement follows directly from <cit.>. Let ,Å and be in Corollary <ref> and let Σ be a connected genus g surface with n≥ 1 punctures. There is an isomorphism of vector spaces _(Σ) ≃_(, ℒ^⊗ 2g+n-1) between the -skein algebra of Σ and the invariants in a tensor power of Lyubashenko's coend ℒ := ∫^X ∈ X ⊗ X^* ∈. This isomorphism can be upgraded to an algebra isomorphism by endowing ℒ^⊗ 2g+n-1 with a product twisted by appropriate braidings. The -skein algebra is exactly the algebra of invariants _(, A_Σ) of the moduli algebra A_Σ studied in <cit.>. Indeed A_Σ is defined to be the internal endomorphism algebra _∫_Σ(_Σ) whose defining universal property asks that _(, A_Σ) ≃_∫_Σ(_Σ, _Σ) . Here is the action induced by inserting a disk though the boundary of Σ. By Theorem <ref>, _∫_Σ(_Σ, _Σ)≃__(Σ)(_Σ, _Σ) =:_(Σ) . The moduli algebra A_Σ is shown to be isomorphic to ℒ^⊗ 2g+n-1 with a product twisted by appropriate braidings in <cit.>. The moduli algebra and its algebra of invariants have been well-studied in the literature. The best known case <cit.> is when Å = H–mod^fd for a non-semisimple finite-dimensional Hopf algebra H (e.g. small quantum groups at roots of unity) and the tensor ideal of projectives, so ≃ H–mod. For H the small quantum group associated with 𝔰𝔩_2 at a p-th root of unity and Σ = S^1× [0,1] the annulus, it is shown to be 3p-1 dimensional in <cit.> and <cit.> using an explicit basis. This explicit description can be used to show non-surjectivity of the canonical map _Å(Σ) →_(Σ). By arguments of Matthieu Faitg, its image is 2p-dimensional for the annulus. § STRONG NATURAL TRANSFORMATIONS AND MODIFICATIONS We describe the category (F,G) which appears in Definition <ref> of factorization homology. In what follows we assume that and are 2-categories and that F,G:→ are 2-functors. We suppress unitors and associators below. 
A strong natural transformation α: F⇒ G between two 2-functors F,G:→, 𝒞[bend left,rr,"G",""name=G,below] [bend right,"F"',rr,""name=F,above] 𝒟, [from=F,to=G,Rightarrow,"α"] is the data of a 1-morphism α_A: F(A) → G(A) in for each object A∈ and an invertible 2-morphism α_f in filling the square F(A_1) [d, "F(f)"'] [r, "α_A_1"] G(A_1) [d, "G(f)"] F(A_2) [r, "α_A_2"] G(A_2) [to = 1-2, from= 2-1, Leftrightarrow, "α_f"] for each 1-morphism f:A_1 → A_2 in . These should be natural in the following sense. We require that α_𝕀_A=𝕀_α_A for each object A and that for composable A_1f→ A_2 g→ A_3 we have an equality of 2-morphisms in : F(A_1) [d, "F(g∘ f)"'] [r, "α_A_1"] G(A_1) [d, "G(g∘ f)"] F(A_3) [r, "α_A_3"] G(A_3) [to = 1-2, from= 2-1, Leftrightarrow, "α_g∘ f"] = F(A_1) [d, "F(f)"'] [r, "α_A_1"] G(A_1) [d, "G(f)"] F(A_2) [d, "F(g)"'] [r, "α_A_2"] G(A_2) [d, "G(g)"][to = 1-2, from= 2-1, Leftrightarrow, "α_f"] F(A_3) [r, "α_A_3"] G(A_3) [to = 2-2, from= 3-1, Leftrightarrow, "α_g"] Finally, for every 2-morphism h: f ⇒ g in we must have the following equality: F(A_1) [d, "F(f)"'] [r, "α_A_1"] G(A_1) [d, "G(f)"'] [r, equal] G(A_1) [d, "G(g)"] F(A_2) [r, "α_A_2"'] G(A_2) [to = 1-2, from= 2-1,shift left,Leftrightarrow,,"α_f"] [r, equal] G(A_2) [to = 1-3, from= 2-2, Rightarrow, "G(h)"] = F(A_1) [d, "F(f)"'] [r, equal] F(A_1) [d, "F(g)"'] [r, "α_A_1"] G(A_1) [d, "G(g)"] F(A_2) [r, equal] F(A_2) [r, "α_A_2"'] G(A_2) [to = 1-2, from= 2-1, shift left, Rightarrow, "F(h)"] [to = 1-3, from= 2-2, Leftrightarrow, "α_g"] A modification m : α⇛β between two strong natural transformations α,β : F ⇒ G , written 𝒞[bend left=60,rr,"G",""name=Gright,below,xshift=2ex,""name=Gleft,below,xshift=-2ex] [bend right=60,"F"',rr,""name=Fright,above,xshift=2ex,""name=Fleft,above,xshift=-2ex] m⇛ 𝒟, [from=Fleft,to=Gleft,Rightarrow,"α", xshift = -2] [from=Fright,to=Gright,Rightarrow,"β"', xshift = 2] is the data of a 2-morphism m_A : α_A ⇒β_A for every object A ∈𝒞. : F(A) [bend left=60,r,"α_A",""name=alpha,below] [bend right=60,"β_A"',r,""name=beta,above] [2em] G(A). [from=beta,to=alpha,Leftarrow,"m_A"] They must be compatible with the 2-cell components of α and β, namely for every 1-morphism f : A_1 → A_2 we have an equality of 2-morphisms in : F(A_1) [d, "F(f)"'] [r, "α_A_1"] G(A_1) [d, "G(f)"] F(A_2) [r, "α_A_2",""name=alpha,below] [d, equal] G(A_2) [to = 1-2, from= 2-1, Leftrightarrow, "α_f"] [d, equal] F(A_2) [r, "β_A_2"', ""name=beta,above] G(A_2) [to = 1-2, from= 2-1, Leftrightarrow, "α_f"] [from=beta,to=alpha,Leftarrow,"m_A_2"] = F(A_1) [d, equal] [r, "α_A_1",""name=alpha,below] G(A_1) [d, equal] F(A_1) [d, "F(f)"'] [r, "β_A_1", ""name=beta,above] G(A_1) [d, "G(f)"] F(A_2) [r, "β_A_2"] G(A_2) [to = 2-2, from= 3-1, Leftrightarrow, "β_f"] [from=beta,to=alpha,Leftarrow,"m_A_1", shorten=5] Given 2-functors F,G:→ we write (F,G) for the category of strong natural transformations from F to G and modifications between these. Note that in this paper we work with (2,1)-categories. This implies in particular that every modification is invertible, and that the category (F,G) is a groupoid. alpha
http://arxiv.org/abs/2406.08816v1
20240613051721
ToSA: Token Selective Attention for Efficient Vision Transformers
[ "Manish Kumar Singh", "Rajeev Yasarla", "Hong Cai", "Mingu Lee", "Fatih Porikli" ]
cs.CV
[ "cs.CV" ]
ToSA: Token Selective Attention for Efficient Vision Transformers ============================================================================ *Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. § ABSTRACT In this paper, we propose a novel token selective attention approach, , which can identify tokens that need to be attended as well as those that can skip a transformer layer. More specifically, a token selector parses the current attention maps and predicts the attention maps for the next layer, which are then used to select the important tokens that should participate in the attention operation. The remaining tokens simply bypass the next layer and are concatenated with the attended ones to re-form a complete set of tokens. In this way, we reduce the quadratic computation and memory costs as fewer tokens participate in self-attention while maintaining the features for all the image patches throughout the network, which allows it to be used for dense prediction tasks. Our experiments show that by applying , we can significantly reduce computation costs while maintaining accuracy on the ImageNet classification benchmark. Furthermore, we evaluate on the dense prediction task of monocular depth estimation on NYU Depth V2, and show that we can achieve similar depth prediction accuracy using a considerably lighter backbone with . § INTRODUCTION Vision transformers (ViTs) <cit.> have been at the core of many of the latest advances in computer vision, with self-attention playing a critical role in generating key visual features. However, the self-attention operation incurs quadratic computation and memory costs w.r.t. the input size. This makes it expensive and challenging to run vision transformers on high-resolution images and on resource-constrained devices. Researchers have looked into alternative ways to apply attention to make vision transformers more efficient. For instance, Swin <cit.> applies self-attention within local windows of the image. Others propose to first apply a few stages of convolutional layers and only apply self-attention on significantly downsampled versions of the input image to reduce computation costs (e.g., <cit.>). Some other papers design alternative ways to generate attention, e.g., removing softmax <cit.>, using ReLU for normalization <cit.>, applying attention on the feature channel dimension <cit.>, additive attention <cit.>, etc. However, many of the latest advances in computer vision still predominantly rely on vanilla ViTs, thanks to several favorable properties: their design is straightforward and easy to implement, there exist powerful pretraining methods and pretrained checkpoints for them (e.g., CLIP <cit.>, DINOV2 <cit.>), they scale better with massive data <cit.>, and they may have more consistent performance across problem domains given the generic design of standard self-attention. As such, researchers have looked into reducing the computation costs of ViTs while preserving the standard self-attention. A common approach is to reduce the tokens as they go through the network <cit.>. This effectively reduces the computation and memory usage which scale w.r.t. the number of attended tokens. However, these methods only work for classification tasks. As some of the tokens are discarded or merged during inference, these networks cannot be used for dense prediction tasks, which require distinct features for all the image pixels/patches.
In this paper, we propose a novel token selective attention approach, , which can make any vision transformer more efficient by selecting only subsets of tokens for self-attention. More specifically, is applied to two consecutive transformer layers where the latter will become token selective. Given the multi-head attention maps from the first layer, our token selector predicts the attention maps for the next layer and produces important scores for the tokens accordingly. The top important tokens will go into self-attention of the next layer, which we call a transformer layer; the transformer layer replaces the original second standard transformer layer. The rest of the tokens simply bypass the layer and are re-joined with the attended tokens. In this way, we reduce the quadratic computation and memory costs as fewer tokens are attended, maintain the favorable standard self-attention operation, and retain the full set of tokens during inference so the network can be used as an encoder for dense prediction tasks. Fig. <ref> provides a high-level illustration of . In summary, our main contributions are as follows: * We present , a novel approach that improves the efficiency of a multi-layer vision transformer by reducing the number of tokens participating in the attention computation in certain layers. Although not all tokens are attended in , they are retained throughout the layers. This allows the model to be used for dense prediction tasks. * In order to select a subset of tokens to be fed into self-attention, we devise a token selector that predicts important scores of the tokens based on the previous transformer layer. The scores are used to identify tokens that need to be attended at the next layer and the rest will simply skip the next layer. * We evaluate on the standard ImageNet classification benchmark <cit.>, showing that significantly reduces the computation cost while maintaining accuracy. In addition, we apply to a vision transformer backbone and use it as an encoder for a monocular depth estimation task. We show that our more efficient backbone maintains the depth prediction accuracy. § PROPOSED APPROACH: In this section, we present our proposed token selective attention method, . selects only a subset of tokens to participate in the self-attention of a transformer layer, which considerably reduces computation and memory costs, while maintaining the full set of image features throughout the network, making it feasible to be used as an encoder for dense prediction tasks. §.§ Self-Attention with Selected Tokens Standard Self-Attention: Consider an input X_i ∈ℝ^L × D, where L is the number of tokens (i.e., features of image patches) and D is the feature dimension; the batch dimension is omitted here for conciseness. In a standard vision transformer layer i, X_i first goes through linear layers to generate the query Q_i^h, key K_i^h, and value V_i^h, respectively, for each attention head, h∈{1,...,H}, as follows: Q_i^h = W_Q,i^hX_i, K_i^h = W_K,i^hX_i, V_i^h = W_V,i^hX_i, where W_Q,i^h, W_K,i^h, W_V,i^h are linear transformation matrices. Then, an attention map is computed based on the query and key, which is then multiplied with the value: A_i^h = softmax(Q_i^h(K_i^h)^T/√(D)), X_i+1^h = A_i^h · V_i^h. The outputs from all attention heads are concatenated and fed into a linear layer, F_i, to general the final output of this transformer layer: X_i+1 = F_i(W_O,i·Concat(X_i+1^1, ..., X_i+1^H)), where W_O,i is a linear projection matrix. 
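To make the preceding equations concrete, here is a minimal PyTorch-style sketch of the standard multi-head self-attention step described above (per-head Q/K/V projections, softmax attention, head concatenation, output projection). It is an illustrative sketch rather than the authors' code: all module and variable names are ours, the batch dimension is included, and the logits are scaled by the per-head dimension as is common in practice (the equations above write the scaling as √D with D the full feature dimension).

```python
import torch
import torch.nn as nn


class StandardSelfAttention(nn.Module):
    """Vanilla multi-head self-attention, following the equations above."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        # W_Q, W_K, W_V for all heads fused into single linear maps.
        self.w_q = nn.Linear(dim, dim, bias=False)
        self.w_k = nn.Linear(dim, dim, bias=False)
        self.w_v = nn.Linear(dim, dim, bias=False)
        # W_O: projection applied to the concatenated head outputs.
        self.w_o = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor):
        # x: (batch, L, D) with L tokens of dimension D.
        b, L, D = x.shape

        def split_heads(t):
            return t.view(b, L, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = (split_heads(m(x)) for m in (self.w_q, self.w_k, self.w_v))
        # Un-normalized maps Q K^T (per head), then softmax-normalized attention.
        logits = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        attn = logits.softmax(dim=-1)
        out = attn @ v                                  # per-head outputs A.V
        out = out.transpose(1, 2).reshape(b, L, D)      # concatenate heads
        # Also return the un-normalized maps, since ToSA's token selector
        # (next subsection) takes QK^T from a standard layer as its input.
        return self.w_o(out), logits
```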
Token Selective Attention (): In the standard self-attention, all tokens participate in the attention computation, resulting in O(N^2) computation and memory costs. One way to reduce computation costs is to reduce the number of tokens taking part in the self-attention, i.e., reducing N. In order to do this, we need to identify the subset of important tokens that should go into self-attention, and the remaining set that should skip the attention. We propose a novel token selector, which operates between a standard transformer layer and a selective attention transformer layer (transformer layer). It consumes the un-normalized attention maps (i.e., QK^T) from the previous standard layer and predicts the attention maps for the respective heads of the next layer. These predictions are used to identify important tokens. More specifically, we perform a column sum over the predicted attention map of each head, which generates an importance score for each token. Based on the importance scores, we select the top-K tokens to perform self-attention in the next layer, which we refer to as the attention tokens X_i+1^a,h, and K is determined based on a prescribed attention ratio r. For instance, r=80% indicates that the top 80% of tokens will participate in the self-attention in the next layer. The remaining tokens, which we call skip tokens X_i+1^s,h, simply bypass the next transformer layer. After the attention tokens go through the transformer layer, where standard self-attention is applied to this subset, the attended tokens and skipped tokens are combined to re-form the complete set of tokens, for each head, i.e., X_i+2^h = Concat(X_i+2^a,h, X_i+1^s,h), where X_i+2^a,h is the self-attention output from X_i+1^a,h and the concatenation is performed along the token dimension. Finally, the outputs from all the heads are concatenated and processed by a linear layer to produce the final output from this layer. Fig. <ref> illustrates this process for a pair of standard layer and layer. §.§ Token Selector: Architecture and Training A high-level overview of the token selector architecture is given in Fig. <ref>. The token selector takes as input the multi-head un-normalized attention maps (QK^T) from the previous standard transformer layer. It consists of two 1D convolutional layers with an intermediate ReLU activation, followed by a log-softmax layer that predicts the full L× L multi-head attention maps, Â_i+1^1, ..., Â_i+1^H, for the next layer; we choose log-softmax for the output layer as it produces normalized attention maps and is more numerically favorable in training. We train the token selector's attention prediction part based on ground-truth attention maps computed by the pretrained model. More specifically, given a pre-trained backbone (e.g., DeiT <cit.> trained on ImageNet classification <cit.>), we insert a token selector between two consecutive standard transformer layers i and i+1. During training, the token selector consumes the QK^T maps from the first transformer layer and its predicted attention maps are compared against the actual attention maps computed by the second transformer layer of the pretrained model, which provides supervision to train the token selector, i.e., ℒ_i,i+1 = ∑_h=1^H ℒ_KLD(Â_i+1^h,A_i+1^h), where ℒ_KLD denotes the KL-Divergence loss. In this process, the pre-trained transformer layers are frozen and only the token selector is trained.
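The snippet below is a rough sketch of how the selector and the top-K split could be wired up, following the description above: two 1-D convolutions with an intermediate ReLU and a log-softmax output predict the next-layer attention maps, a column sum of the predicted maps scores each token, and the tokens are split into attention and skip sets by a prescribed ratio r. The hidden channel width, the way the L×L maps are fed to the 1-D convolutions, and the pooling of per-head scores into a single selection (the paper keeps the split per head) are all our assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class TokenSelector(nn.Module):
    """Predicts next-layer attention maps from the current QK^T logits and
    scores each token by column-summing the predicted maps."""

    def __init__(self, num_heads: int, hidden: int = 64):
        super().__init__()
        # Two 1-D convolutions with an intermediate ReLU; the hidden width
        # and kernel size are placeholders (unspecified in the paper).
        self.conv1 = nn.Conv1d(num_heads, hidden, kernel_size=1)
        self.conv2 = nn.Conv1d(hidden, num_heads, kernel_size=1)

    def forward(self, logits: torch.Tensor):
        # logits: (batch, H, L, L) un-normalized maps from the previous layer.
        b, H, L, _ = logits.shape
        x = logits.reshape(b, H, L * L)                  # heads as channels
        x = self.conv2(torch.relu(self.conv1(x)))
        pred = torch.log_softmax(x.reshape(b, H, L, L), dim=-1)
        # Column sum of the (exponentiated) predicted maps = importance score
        # per token; here averaged over heads for a single shared selection.
        scores = pred.exp().sum(dim=-2).mean(dim=1)      # (batch, L)
        return pred, scores


def split_tokens(x: torch.Tensor, scores: torch.Tensor, ratio: float = 0.8):
    """Split tokens into top-K attention tokens and the remaining skip tokens."""
    b, L, D = x.shape
    k = max(1, int(ratio * L))
    order = scores.argsort(dim=-1, descending=True)
    take = lambda idx: x.gather(1, idx.unsqueeze(-1).expand(-1, -1, D))
    return take(order[:, :k]), take(order[:, k:])


# Attention tokens go through the next (token selective) layer while the skip
# tokens bypass it and are concatenated back, mirroring X_{i+2} = Concat(...):
#   x_attn, x_skip = split_tokens(x, scores, ratio=0.8)
#   x_next = torch.cat([tosa_layer(x_attn), x_skip], dim=1)  # tosa_layer: hypothetical
```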
Note that we only use the same data used to pre-train the vision transformer model to train our token selectors, and do not use extra training data. §.§ Full Vision Transformer Model with Given a pretrained vision transformer model, we can apply to any pair of layers where the second layer will be replaced by a layer and we train a token selector between them. Consider a 12-layer standard vision transformer as an example. We can apply to consecutive pairs of layers, and replace the 2nd, 4th, 6th, 8th, and 10th standard transformer layers with transformer layers; see an illustration in Fig. <ref>. Once the token selectors are trained for the pairs of layers where will be applied, we freeze the token selectors, modify the second layers in the pairs to be token selective, and finetune the full model on the same training set where it was originally pre-trained (e.g., ImageNet). § EXPERIMENTS We evaluate on the ImageNet classification benchmark. We further utilize it for monocular depth estimation and demonstrate its effectiveness. Note that in this short paper, we present representative results to showcase the efficacy of our proposed approach. We plan to include more comprehensive results in the full paper. §.§ Experimental Setup Datasets and Evaluation: We perform standard image classification evaluation on ImageNet <cit.>. We further use classification networks as encoders for monocular depth estimation on NYU Depth v2 <cit.> and KITTI <cit.>, where we use standard depth accuracy metrics such as Absolute Relative Error (Abs Rel) and and δ < 1.25; see <cit.> for mathematical definitions of these metrics. Networks and Implementation: In this work we use DeiT-Tiny <cit.> as the base model as it is one of the standard vision transformers and the computation resource required for training is more accessible, e.g., we use 2 Nvidia A100 GPUs. For ImageNet evaluation, we train the token selector and finetune DeiT-Tiny with both on ImageNet training data. Since DeiT-Tiny has 12 layers, we apply to the 2nd, 4th, 6th, 8th, and 10th layers, using 80% tokens for self-attention at each layer. For depth estimation, we use NeWCRFs <cit.> as the base model and replace the encoder to be either DeiT-Tiny or DeiT-Tiny with . The original token merging method reduces tokens by merging and cannot be used for dense prediction tasks which will require distinct features for all the image patches. <cit.> uses token merging to accelerate stable diffusion, but it requires additional steps to un-merge the tokens. Moreover, while un-merging works for image generation, it may not work for other dense tasks such as depth estimation, since similar-looking patches may have very different depth values. Computation reduction by VTC-LFC also includes channel pruning. §.§ Image Classification Table <ref> summarizes the evaluation results on ImageNet-1K classification benchmark. By applying , we significantly reduce the computation of DeiT-Tiny by nearly 25% while maintaining the classification accuracy. Since the full set of tokens are preserved, our network can be readily used as an encoder for dense prediction tasks. On the other hand, while existing works like ToMe <cit.> and a-STAR <cit.> can also significantly reduce computation, they rely on reducing the number of tokens, i.e., only a small number of tokens are kept after going through the network. This works for classification but makes it challenging to use such models as encoders for dense tasks. Furthermore, these existing methods result in accuracy drops. Fig. 
<ref> visualizes the token selections at sample layers (second, sixth, and tenth) of the network. It can be seen that most patches of the main object (cat) are selected for self-attention and the network selects different subsets of background patches at different layers. For instance, at these layers, the network selects background patches from the right, left, and top parts of the image, respectively, while always selecting most of the cat patches. §.§ Monocular Depth Estimation allows the modified network to be used as an encoder for dense prediction. In Table <ref>, we see that our DeiT-Tiny w/ network provides similar depth estimation performance on both NYU Depth v2 and KITTI benchmarks, although it uses considerably fewer tokens for self-attention in several layers, as compared to using the original DeiT-Tiny as an encoder. § CONCLUSION In this paper, we proposed , a novel token selective attention approach to make vision transformers more efficient. Based on predicted attention maps, a subset of important tokens are selected for self-attention while the rest bypass the transformer layer. Unlike existing works, maintains the full set of tokens, making the network readily usable for dense prediction. Our evaluation results indicate that can significantly reduce computation costs while maintaining accuracy on both classification and dense prediction tasks.
http://arxiv.org/abs/2406.09184v1
20240613144701
Existence of solitary waves in particle lattices with power-law forces
[ "Benjamin Ingimarson", "Robert L. Pego" ]
nlin.PS
[ "nlin.PS", "math.AP", "math.CA", "Primary 37K40, 70F45, Secondary 37K60, 70H09, 35R11" ]
Department of Mathematical Sciences Carnegie Mellon University Pittsburgh, PA 15213. Existence of solitary waves in particle lattices with power-law forces Robert L. Pego[Email address: rpego@cmu.edu] June 9, 2024 ====================================================================== § ABSTRACT We prove the existence of small solitary waves for one-dimensional lattices of particles that each repel every other particle with a force that decays as a power of distance. For force exponents α+1 with 4/3<α<3, we employ fixed-point arguments to find near-sonic solitary waves having scaled velocity profiles close to non-degenerate solitary-wave profiles of fractional KdV or generalized Benjamin-Ono equations. These equations were recently found to approximately govern unidirectional long-wave motions in these lattices. Keywords: Solitons, Hamiltonian lattices, particle chains, forces of infinite range, traveling waves Mathematics Subject Classification: Primary 37K40, 70F45; Secondary 37K60, 70H09, 35R11 § INTRODUCTION In this work we prove an existence theorem for solitary waves of small amplitude in an infinite lattice of particles which all interact with each other through long-range power-law forces. The equations of evolution that govern the particle positions x_j (required to increase with j) are ẍ_j = -∑_m=1^∞( (x_j+m-x_j)^--1 -(x_j-x_j-m)^--1) , j∈. When 1<α<3, we showed in a previous work <cit.> that the unidirectional propagation of long-wave solutions of (<ref>) is formally governed by the nonlocal dispersive PDE _t u + u_x u + H|D|^α u = 0, where H is the Hilbert transform and the dispersion term f=H|D|^α u has Fourier transform f̂(k)=(-i k)|k|^αû(k). Subsequently, Wright <cit.> has rigorously proved that long-wave solutions of (<ref>) are close to solutions of (<ref>) over a suitable long time-scale provided α_*<α<3, where α_*≈ 1.48. It is our aim in the present paper to prove that the system (<ref>) admits exact solitary wave solutions for speeds slightly exceeding the `sound speed' c_α, which is the maximum speed of linear waves and is given by c_α = √(α(α+1)ζ_α) , where ζ_s=∑_n=1^∞ n^-s denotes the Riemann zeta function. We will find such waves by approximation to solitary waves of (<ref>), which were first proved to exist by Weinstein <cit.> and Benjamin et al. <cit.>. The waves that we approximate need to have a non-degeneracy property proved for a class of solutions including ground states by Frank and Lenzmann <cit.>. In the case α=2, the system (<ref>) is an infinite Calogero-Moser system. For this case, in <cit.> we also established explicit formulas providing solitary waves having any supersonic speed c>c_2=π. The proof exploited some of the well-known completely integrable structure of finite Calogero-Sutherland systems to find periodic waves. Our present study builds instead on the formulation and methods devised by Herrmann and Mikikits-Leitner in <cit.> in order to find solitary waves that approximate KdV solitons for particle lattices with forces of any finite range. The work <cit.> in turn improved and simplified the method earlier employed by Friesecke and Pego in <cit.> to obtain such a result for Fermi-Pasta-Ulam-Tsingou (FPUT) lattices, which are lattices with nearest-neighbor forces. Vainchtein <cit.> has recently reviewed the literature concerning solitary waves in particle lattices of various kinds. We seek solitary waves having the following form: x_j(t) = j- ^ν U(x) , x = (j-c t), c^2=c_α^2+^μ. 
Consistent with the formal long-wave scaling found in <cit.> we take μ=α-1, ν=α-2. For waves of the form in (<ref>), the particle velocity ẋ_j(t)=c^μ U'(x). As we discuss in Section <ref> below, the scaled velocity profile W=U' needs to satisfy a nonlocal, nonlinear eigenvalue problem, which formally reduces in the limit →0 to a nonlocal quadratic equation, namely W + κ_3 |D|^μ W = 12κ_2 W^2 , where κ_3 and κ_2 are positive constants as found in <cit.>; see (<ref>) and (<ref>) below. Solutions of (<ref>) provide solitary waves of (<ref>) after appropriate scaling. A profile W satisfying (<ref>) is called non-degenerate if the linearized operator L_+ = I + κ_3|D|^μ - κ_2 W , acting in L^2(), has one-dimensional kernel spanned by the derivative W'. As stated by Frank and Lenzmann <cit.>, for (<ref>) to admit any solution having finite energy (i.e., in H^μ/2()∩ L^3()), it is necessary that 43<α<3 , due to Pohozaev identities. (See <cit.> for the key to the nontrivial proof of these identities.) Consequently our results for 1<α<3 will be restricted to the smaller range in (<ref>). For all α in this smaller range, however, ground-state solutions (positive, even, energy-minimizers) exist and are proved in <cit.> to be non-degenerate. Moreover, any solution of (<ref>) must be positive, as discussed in Section <ref> below. To find profiles of solitary waves of (<ref>), similar to <cit.> and <cit.> we formulate a fixed-point equation and regard it as a perturbation of a corresponding fixed-point equation for solutions of (<ref>). We analyze the fixed-point equations, however, in the space of even functions in H^1(), rather than in L^2() as was done in <cit.>. This has the natural advantage of working in a Banach algebra of functions, and we obtain further simplification by initially seeking less precise control over the size of the correction. In principle, spectral analysis of the linearization of the fixed-point equation could have become more complicated in H^1() instead of L^2(). But we were able to substantially simplify spectral analysis in H^1() by extracting from <cit.> a key compactness argument and casting it into an abstract form; see Lemma <ref> below. The plan of this paper is as follows. In Section <ref> we develop preliminaries. We derive the fixed-point equations governing solitary wave profiles for (<ref>) and its formal limit (<ref>) in Section <ref> and precisely state the main theorem. In Section <ref> we prove the existence of solitary wave profiles for (<ref>). For 4/3<α<3, given any non-degenerate even solution W_0∈ H^1() of (<ref>), when c-c_α is positive and small we find a (locally unique) scaled profile W_=U' that is even, positive, and close to W_0 in H^1, providing a solitary wave for (<ref>) as in (<ref>). We carry out a fixed-point analysis based on a quantitative fixed-point lemma from <cit.>. Control over the deviation W_-W_0_H^1 comes by adapting the rigorous residual estimates of Wright <cit.>. When α=2 and is sufficiently small, the waves we find here agree with the ones provided by the implicit formulas in <cit.>; see Subsection <ref>. We establish positivity and smoothness of the profiles W_ in Section <ref>. There we also show that the unscaled velocity profiles, given by v_c(z)=c^μ W_( z), are analytic as functions of wave speed. We study a Hamiltonian energy for the solitary waves of (<ref>) in Section <ref>. For α=2 we find explicit formulas by using results from <cit.>. 
The sign of d/dc agrees with the sign of α-3/2 when the latter is non-zero, for sufficiently small depending on α. In a variety of lattice wave and other Hamiltonian wave stability problems, a change in the sign of d/dc has been associated with transitions to instability; e.g., see <cit.>. Whether this may be the case for systems such as (<ref>) remains an open problem. The value α=3/2 is L^2-critical for (<ref>). In this regard it is curious that Wright's result in <cit.>, showing that solutions of (<ref>) approximate long-wave solutions of (<ref>) over long times, is valid for all α in a neighborhood of 3/2. In the interest of brevity, we do not address the range α≥3 in the present paper. In that range naturally one expects a KdV limit, but also one should be able to treat a much more general family of interparticle forces. In particular, the case of alternating signs studied formally in <cit.> seems particularly challenging and deserves a separate study. § PRELIMINARIES The Maclaurin series for Z(r):=α(1-r)^-α-1 takes the form Z(r) = ∑_k=0^∞α_k r^k , α_k = α(α+1)⋯(α+k)/k! . The coefficients α_k are defined differently than in <cit.> for present convenience. In the standard Sobolev space H^s=H^s(), s≥0, we use the inner product given in terms of the Fourier transform f̂(k)=1/2π∫_ f(x)e^-ikx dx by f,g_H^s = ∫_ (1+|k|^2)^s f̂(k)ĝ(k) dk . We recall that for each s>1/2, there is a constant C_H^s≥1 such that fg_H^s≤ C_H^sf_H^sg_H^s for all f,g∈ H^s. We take the inner product in L^2=L^2() identical to that for s=0 above. We let denote the subspace of even elements of L^2, elements f for which f(-x)=f(x) for a.e. x∈ (or equivalently f̂(-k)=f̂(k) for a.e. k∈), and we let ^s=H^s∩. The space (H^s) is the space of bounded linear operators on H^s, equipped with the operator norm. Following <cit.>, we will make heavy use of the symmetric averaging operators _η defined for η>0 on H^s for s≥0 by _η f(x) = 1/η∫_-η/2^η/2 f(x+z) dz, which satisfy f(x+12η) - f(x-12η) = η_η (_x f)(x) = η_x (_η f)(x) , _η f(k) = (12 η k) f̂(k) . Here (z)=(sin z)/z. From the Fourier representation it is clear that the operators _η map H^s into H^s+1 continuously. Moreover since (1/2η k) lies in [-1,1] and converges to 1 as η→ 0 for any k, it is clear that _η is nonexpansive on H^s and converges to the identity strongly (but not in operator norm). That is, for any f∈ H^s we have _η f_H^s≤f_H^s , and _η f-f^2_H^s = ∫_ℝ(1+ k^2)^s | (12η k) - 1|^2 |f̂(k)|^2 dk → 0 as η→0. Because |1- z |≤1/6 z^2 for all z it also follows _η f-f_H^s≤η^2/24f_H^s+2 for all f∈ H^s+2. Note further that _η f is even if and only if f is even. Moreover, if f is even and unimodal (i.e., even, and decreasing on (0,∞)) then _η f is also, since for f smooth we have _x(_η f)≤0 by (<ref>). § EQUATIONS FOR SOLITARY-WAVE PROFILES In this section, we follow the approach of Herrmann and Mikikits-Leitner in <cit.> to formulate a fixed point equation whose solution provides velocity profiles of solitary waves for (<ref>). §.§ Equations for profiles on lattices Due to the ansatz (<ref>), by (<ref>) and since μ=α-1=ν+1 we can write x_j+m-x_j = m - ^ν(U(x+ m)-U(x)) = m(1-^μ_mW(x+12m)), x_j-x_j-m = m - ^ν(U(x)-U(x- m)) = m(1-^μ_mW(x-12m)), where W = U'. By consequence, α(x_j+m-x_j)^-α-1 = Z(^μ_mW(x+12m)) m^-α-1, α(x_j-x_j-m)^-α-1 = Z(^μ_mW(x-12m)) m^-α-1, and after taking the difference and using formula (<ref>) again, we find that for system (<ref>) to be satisfied it is necessary and sufficient that c^2^α _x W(x) = ∑_m=1^∞m/m^α+1_x_m Z(^μ_mW) . 
We seek (weak) solutions of this equation in H^1. By requiring that ^μ C_H^1W_H^1<1 , we ensure that the MacLaurin series for Z(^μ_mW)-Z(0) converges in H^1, avoiding the singularity of Z(r) at r=1. Since Z(0)=α, we thus find it necessary and sufficient that W should satisfy the nonlocal nonlinear eigenvalue problem c^2 W = ∑_m=1^∞^-μ/m^α_m (Z(^μ_mW)-α) . As in <cit.>, we recast this equation by collecting linear terms on the left-hand side and separating the quadratic terms. Recall that c^2=^μ+α_1∑_m≥1m^-α from (<ref>) and (<ref>), and define Z_3(r) = α(1-r)^-α-1-α-α_1 r -α_2r^2, _ W = W + α_1∑_m=1^∞^-μ/m^α(W - _m^2 W), _(W) = α_2∑_m=1^∞1/m^α_m(_mW)^2, _(W) = ∑_m=1^∞^-2μ/m^α_m Z_3(^μ_mW). After substitution and further dividing by ^μ we find (<ref>) equivalent to _ W = _(W) + _(W). §.§ Formal limit equations The operator _ is a Fourier multiplier, with _ W(k) = b_(k)Ŵ(k) where the symbol b_ is given by b_(k) = 1+ α_1 ∑_m=1^∞1-^2(1/2 km)/(m)^α . We have 1≤ b_(k)≤^-μα_1ζ_α for all k, so _ is bounded on H^s for any fixed >0, with nonexpansive inverse _. For 1<α<3, similar to what was noted in <cit.>, the sum in (<ref>) approximates a convergent integral. Indeed, as h→0^+, S_α(h):= α_1 h ∑_m=1^∞1-^2(1/2 mh)/(mh)^α→κ_3 , where, with notation consistent with <cit.>, κ_3 := α_1∫_0^∞1-^2(z/2)/z^α dz . Since b_(k)=b_(|k|)=1+ |k|^α-1S_α( |k|), we have that for each fixed k∈, b_(k) → b_0(k) := 1+κ_3 |k|^α-1 as →0. We let _0 denote the Fourier multiplier with symbol b_0(k), writing _0 W = W + κ_3|D|^α-1W . We remark that due to the formula for the integral in <cit.>, we have κ_3= π, α=2, -2sin(1/2πα)Γ(1-α), α∈(1,2)∪(2,3). For the quadratic term in (<ref>), we find that _m(_mf)^2 → f^2 in H^s for any f∈ H^s with s>1/2, by using the strong convergence property (<ref>) and the fact that H^s is a Banach algebra. Hence by dominated convergence, _(f) - _0(f)_H^s→ 0 as →0 , where _0(f) := 12 κ_2 f^2, κ_2 = 2α_2ζ_α . We will establish a rigorous bound on the higher-order term _(W) later. For now, we note that it is formally O(^μ) since Z_3(r)=O(r^3). Thus we expect that as →0, (<ref>) should approximate equation (<ref>), which we can recast in the form _0 W = _0(W) . This equation determines the profile of solitary waves of speed c̃=1/κ_1 of the nonlocal dispersive equation κ_1_t u + κ_2 u_x u +κ_3 H|D|^α u = 0 . According to <cit.> and the rigorous results of Wright <cit.>, this equation, with κ_1=2c_α, is the correctly scaled formal limit of (<ref>) consistent with the long-wave ansatz x_j=j+^ν v((j-c_α t),^α t) , u = -_x v. §.§ Fixed-point formulation and main result Similar to <cit.>, our approach to find solutions of (<ref>) is to fix a known even solution W_0 to (<ref>), meaning an even solution of the fixed-point equation W_0 = (W_0):= _0_0(W_0) , and solve the fixed-point corresponding to (<ref>), which is W = _(W):= _ (_(W)+_(W)) , through a perturbation analysis. We will suppose that W_0∈^1 is a given solution of (<ref>) that is non-degenerate. Recall this means that the linearized operator _+=_0-D_0(W_0) , acting in L^2, has one-dimensional kernel spanned by the odd function W_0'. By a bootstrapping argument, it follows that W_0 = _0 (1/2κ_2 W_0^2) belongs to H^s_ even for all s>0, hence is smooth. Moreover, W_0 is positive, since W_0^2 is positive and the Green's function for the operator _0 is positive (see Section <ref> below). Our main results are stated precisely as follows. Assume 4/3<α<3 and W_0∈ H^1 is an even solution of (<ref>) that is non-degenerate. 
Then there exist positive constants _0, δ, and C such that the following hold, for each ∈(0,_0): (i) _ has a unique fixed point W_∈^1 satisfying W_-W_0_H^1≤δ. (ii) W_-W_0_H^1≤ C^γ, where γ = α-1 α∈(1,2], 3-α α∈ (2,3). (iii) W_ is everywhere positive. (iv) W is smooth, with W∈ H^∞. Furthermore, the map c↦ v_c∈^1, from wave speed c to the unscaled velocity profile v_c given by v_c(z) := c^μ W_( z), c^2 = c_α^2+^μ, is analytic. Note that the unscaled velocity profile function v_c determines the particle velocities according to ẋ_j(t) = v_c(j-ct), cf. (<ref>). Note as well that given f∈^1, the dilation map ↦ f(·) may not be analytic, or even differentiable; thus we do not discuss the regularity of the map ↦ W_. § FIXED-POINT ANALYSIS In order to prove the existence of wave profiles as fixed points in equation (<ref>), we make use of the quantitative version of the standard inverse function theorem stated as Lemma A.1 in <cit.> and proved there. Restated for clarity, it takes the following form, in which · denotes the norm in E or the operator norm on (E) as appropriate. Let F and G be C^1 maps from a ball B in a Banach space E to E. Suppose u_0=F(u_0) and that L=I-DF(u_0) is invertible with operator norm L≤ C_0<∞. Assume that positive constants C_1, C_2, θ and δ satisfy C_0(C_1+ C_2)≤θ<1 , F(u_0)-G(u_0)≤δ(1-θ)/C_0 , and that whenever u-u_0≤δ, u is in the ball B and DF(u)-DF(u_0) ≤ C_1 , DF(u)-DG(u) ≤ C_2 . Then u=G(u) for some unique u∈ B satisfying u-u_0≤δ, and moreover u-u_0≤ C_0(1-θ)F(u_0)-G(u_0) . We will apply this lemma to the functions F= and G=_ on E=^1, for >0 sufficiently small. The functions and _ will be shown to be analytic on a suitable ball in H^1. The operator _0 = I - D(W_0) = I - _0 D_0(W_0) will be shown to be Fredholm, and is invertible because W_0 is non-degenerate. Establishing (<ref>) will be easy. To obtain the residual estimate (<ref>) we will use rigorous residual bounds established by Wright <cit.>. Our proof of (<ref>) involves a contradiction argument based on a key compactness property. This is essentially a distillation of Herrmann & Mikikits-Leitner's proof in <cit.> of invertibility in for an operator analogous to I-D_(W_0), uniformly for all small enough >0. In the present context, uniform invertibility in ^1 follows from conditions (<ref>)–(<ref>) together with a Neumann series expansion. §.§ Analyticity and symmetry We first establish the analyticity of various maps on H^1, referring to <cit.> for the basic theory of analytic maps on Banach spaces. Note the maps _0 and _ are continuous quadratic maps on H^1, hence are analytic. Any monomial map f ↦ f^k is analytic, and compositions and uniform limits of analytic functions are analytic. Regarding _ we have the following. Let ,ρ∈(0,1), and for R>0 let B̃_R = {f∈ H^1: C_H^1f_H^1≤ R} be the closed ball of radius R/C_H^1 in H^1. Then _B̃_R→ H^1 is analytic provided ^μ R≤ρ<1, and the following bounds hold for all f∈B̃_R: _(f)_H^1≤^μζ_α Z_3(ρ) (R/ρ)^3, D_(f)_(H^1)≤^μζ_α Z_3'(ρ) (R/ρ)^2. Recall the series expansion for Z_3(r)=∑_k=3^∞α_k r^k converges for |r|<1. Since f^k_H^1≤ (C_H^1f_H^1)^k for all k, it follows that the Nemytskii operator f↦ Z_3∘ f is analytic on the ball B̃_ρ provided ρ<1, with Z_3∘ f_H^1≤ Z_3(ρ) for all f∈B̃_ρ. Thus, for any R>0 and each m≥0, the map W↦_mZ_3(^μ_mW) is analytic on the ball B̃_R provided ^μ R≤ρ, and _mZ_3(^μ_mW) _H^1≤ Z_3(^μ R) ≤ Z_3(ρ)(^μR/ρ)^3 . 
It follows that the series expansion for _(W) in (<ref>) then converges uniformly in H^1 on B̃_R under the same condition, with the stated bound on the H^1 norm. It is then straightforward to show in a similar way that that for all W∈B̃_R and V∈ H^1, since Z_3'(^μ R)≤ Z_3'(ρ)(^μ R/ρ)^2, D_(W)V = ^-2μ∑_m≥1 m^-α_m (Z_3'(^μ_mW)(^μ_m V)) , and D_(W)V_H^1≤^μζ_α Z_3'(ρ)(R/ρ)^2V_H^1 . This finishes the proof. Regarding symmetry, we note that since the symbols of the operators _η, _ and _0 are real, even, and bounded, these operators map even functions in H^s to even functions in H^s. For s>1/2 the monomial maps f↦ f^k also have the same property. From this and the lemma above we infer the following. The map in (<ref>) is an analytic map from ^1 into itself. For any ∈(0,1) and R>0 such that ^μ R<1, the map _ in (<ref>) is analytic from B̃_R∩^1 into ^1. §.§ Residual estimates According to the definitions in (<ref>) and (<ref>) we can write (W_0)-_(W_0) = _ - _ , where _ = _0_0(W_0) - __(W_0) , _ = __(W_0) . We have __H^1≤_(W_0)_H^1 since _ is non-expansive on H^1, so we find the following by simply applying Lemma <ref> and recalling μ=α-1. For all >0 sufficiently small we have __H^1≤ C^α-1. For the term _ we claim the following. For all >0 sufficiently small we have __H^1≤ C α∈ (1,2], C ^3-α α∈(2,3). The proof will be provided presently. But the last two results together immediately imply the following residual estimate. For all sufficiently small >0 we have (W_0)-_(W_0) _H^1≤ C^α-1 α∈ (1,2], C ^3-α α∈(2,3). To prove Proposition <ref> we adapt Wright's method of estimating residuals in the long-wave approximation of (<ref>) in <cit.>. We begin with an estimate on the difference of quadratic functions. There exists C>0 independent of such that _(W_0)-_0(W_0)_H^1≤ C^α-1 . First observe that _ (W_0) - _0(W_0) _H^1≤∑_m≥ 1α_2/m^α_m (_m W_0 )^2 - W_0^2_H^1 . We claim that for some constant C independent of and m, (_m (_m W_0 )^2 - W_0^2_H^1≤ C m^2^2. Indeed, by the triangle inequality, _m (_m W_0 )^2 - W_0^2_H^1≤ (_m-I) (_m W_0 )^2_H^1 + (_m W_0 )^2 - W_0^2_H^1 , from which one can infer (<ref>) by using (<ref>) and the H^3 regularity of W_0. Using the estimate (<ref>) in (<ref>) for small m, we find ∑_m=1^⌊ 1/⌋ m^-α_m (_m W_0)^2 - W_0^2_H^1 ≤∑_m=1^⌊ 1/⌋ m^-α C(m)^2 ≤C^α-1/3-α . In the last line, we used a simple integral bound ∑_m=1^⌊ 1/⌋ m^2-α≤1/3-α^α -3 as in <cit.>. For large m, we simply bound the norms in (<ref>) by a constant, and get through a similar integral bound ∑_m > ⌊ 1/⌋α_2/m^α_m (_m W_0)^2 - W_0^2_H^1≤C'^α-1/α -1 , where C' is another constant independent of and m. Next we deal with estimates on differences of symbols and operators. For all >0 sufficiently small we have (i) For all k∈, |b_(k) - b_0(k)| ≤ C|k| α∈ (1,2], C|k|^3-α^3-α α∈(2,3). (ii) For each s≥0 and all f∈ H^s+3, (_-_0)f_H^s≤ C f_H^s+1 α∈ (1,2], C ^3-αf_H^s+3-α α∈(2,3). First we note the estimates S_α(h) = h^1-α ∑_m≥ 11-^2(mh/2)/m^α≤ h^1-αζ_α , | b_^-1(k) - b_0^-1(k)| = |b_0(k)-b_(k)|/b_0(k)b_(k) = |k|^α-1|S_α( k) - κ_3 |/b_0(k)b_(k) ≤|S_α( k) - κ_3|/κ_3 b_(k) , since by its definition in (<ref>), b_0(k)=1+κ_3 |k|^α-1. Now we invoke Lemma 3 of <cit.>, which directly implies that for all h>0, |S_α(h)-κ_3| ≤ Ch α∈ (1,2], Ch^3-α α∈(2,3). Part (i) follows using this in (<ref>). Plancherel's identity yields part (ii). From (<ref>) and the triangle inequality, we get __H^1≤_ (W_0) - _0(W_0) _H^1 + (_ - _0 ) _0(W_0) _H^1 , which are estimated respectively by Lemmas <ref> and <ref>, using smoothness of W_0. 
Considering each case α∈ (1,2] and (2,3) gives the desired result. §.§ Derivative estimates Here our goal is to prove derivative estimates which will entail the conditions (<ref>) and (<ref>) in Lemma <ref>. In fact, we seek to prove the following. (i) Given any C_1>0, if 0<δ≤ C_1/κ_2C_H^1 then D(W)-D(W_0)_(H^1)≤κ_2 C_H^1W-W_0_H^1≤ C_1 for all W∈ H^1 with W-W_0_H^1≤δ. (ii) For any W∈ H^1, the operator D(W) is compact on H^1. Given any C_2>0 there exist positive constants δ and _0 such that whenever ∈(0,_0) and W-W_0_H^1≤δ we have D(W)-D_(W)_(H^1)≤ C_2 . Define H^1→(H^1) by (W)f=_0(Wf) . Then D = κ_2, and Proposition <ref> follows immediately from the following lemma. (i) For any W_1,W_2∈ H^1 we have (W_1)-(W_2)_(H^1)≤ C_H^1W_1-W_2_H^1 . (ii) For any W∈ H^1, the operator (W) is compact on H^1. For all V∈ H^1 we have (W_1)V - (W_2)V = _0 ((W_1-W_2) V) . Since _0 is nonexpansive on H^1 the estimate in (i) follows. For part (ii), assume at first that W∈ C^∞_c(). Then the operators (W) and (W') are compact on L^2 by the compactness criteria in <cit.>, since the functions b_0, W and W' are continuous and vanish at ∞. It follows easily that (W) is compact on H^1. For a general W∈ H^1, choose a sequence of functions W_n∈ C^∞_c() approximating W in H^1. Then part (i) implies (W) is approximated in (H^1) by the compact operators (W_n), hence is itself compact. Recall κ_2=2α_2ζ_α and D(W)V = _0 (D_0(W)V) = κ_2(W)V = κ_2 _0(WV) , D_(W)V = _( D_(W)V + D_(W)V) , where D_(W)V = 2α_2 ∑_m≥1m^-α_m( (_mW) (_m V)) . Based on the multiplicative inequality for H^1 and the non-expansivity of _0, _, and _m, the proof of the following lemma is easy and is omitted. For all W∈ H^1 we have _ (D_(W)-D_(W_0))_(H^1) ≤κ_2 C_H^1W-W_0_H^1. By this result and the bounds on D_ in Lemma <ref>, to prove Proposition <ref> it suffices to prove that _0 D_0(W_0) - _ D_(W_0)_(H^1)→ 0 as →0. Key to our approach is the following result on operator norm convergence of Fourier multipliers. It will be proved in the subsection to follow, by use of Plancherel's identity and a proof that b_(k)→ b_0(k) uniformly in k. As →0 we have _ - _0_(H^1)→ 0. Taking this for granted at present, since D_(W_0) is uniformly bounded we infer that to prove Proposition <ref> it suffices to replace _ by _0 in (<ref>), i.e., to prove _0 D_0(W_0) - _0 D_(W_0) _(H^1)→ 0 as →0. To proceed we define operators _η and _0 by _η f = _0( (_ηW_0) f) , _0 f= _0(W_0 f). Evidently by Lemma <ref> we have that _η - _0 _(H^1)≤ C_H^1(_η-I)W_0_H^1→ 0 as η→0. And we may write _0 D_(W_0)V = 2α_2∑_m≥1 m^-α_m_m_m V . Since _η is self-adjoint and _η→ I strongly as η→0, we can use the fact that _0 is compact and the abstract Lemma <ref> below to conclude that _η_η_η - _0_(H^1)→ 0 as η→0. Then (<ref>) follows by dominated convergence from the fact that _0 D_0(W_0) - _0 D_(W_0) = 2α_2 ∑_m≥1 m^-α (_0-_m_m_m). Modulo the proofs of Proposition <ref> and Lemma <ref> to come, this completes the proof of Proposition <ref>. §.§ Lemmas on compactness and Fourier multipliers §.§.§ Compactness and operator convergence Let be a Banach space. Let S,T∈(), and assume T is compact. Let (S_n)_n and (T_n)_n be sequences in (), and assume T_n-T→0 as n→∞. Then: (i) If S_n → S strongly, then S_nT_n-ST→0. (ii) If the adjoints S_n^* → S^* strongly, then T_nS_n-TS→0. To prove (i), suppose the claimed convergence fails. Then there must exist a constant c>0 and a sequence (x_n)_n in such that x_n=1 and c≤(S_nT_n-ST)x_n for all n. 
However, since T is compact we may pass to a subsequence (denoted the same) such that Tx_n→ y for some y∈. Then T_nx_n→ y also, while (S_nT_n-ST)x_n ≤S_n(T_nx_n-y)+(S_n-S)y+S(y-Tx_n) . But the hypotheses ensure this tends to 0, since S_n must be uniformly bounded. This contradiction proves (i). For (ii) we note that the compactness of T on implies the compactness of its adjoint T^* on ^*, and that T_nS_n-TS = S_n^*T_n^*-S^*T^*. Then applying part (i) to the adjoints yields part (ii). §.§.§ Convergence of Fourier multipliers Due to Plancherel's identity it is evident that _ - _0_(H^1)≤ω_b():= sup_k∈ |b_(k) - b_0(k)| . Then Proposition <ref> is implied by the following. As →0 we have ω_b()→0. We prepare for the proof with some lower bounds on b_(k). Fix h_0>3√(ζ_α+2/ζ_α). Then there exist positive constants ν_1, ν_2 such that b_(k) ≥ 1+ ν_1 |k|^α -1, |k| ≤ h_0 / , ν_2 ^1-α, |k| > h_0 / . 1. Suppose h := |k| ≤ h_0. Recall from (<ref>) that S_α(h) →κ_3 as h → 0^+. Since S_α is continuous and positive, it attains a positive minimum on [0,h_0]. That is, there exists ν_1 > 0 such that S_α(h) ≥ν_1 for 0 ≤ h ≤ h_0 , and hence b_(k) ≥ 1 + ν_1 |k|^α -1 for |k|≤ h_0. 2. Now suppose h=|k| > h_0. Then ∑_m=1^∞1- ^2(1/2 mh)/m^α = ζ_α - ∑_m=1^∞4sin^2 ( 1/2 mh)/h^2m^α+2≥ζ_α - 4/h^2ζ_α+2 . But since h > h_0, we get S_α(h) ≥α_1 ζ_α(1 - 4/9). Then b_(k) = 1+|k|^α-1S_α(h) ≥ 1 + ν_2^1-α , where ν_2=5/9α_1ζ_α h_0^α-1. The lemma follows. Since b_ and b_0 are even, it suffices to confine attention to k>0. Recall from (<ref>) that | b_^-1(k) - b_0^-1(k)| ≤|S_α( k) - κ_3|/κ_3 b_(k) . Let δ > 0. We proceed in three steps. 1. Choose h_δ > 0 such that whenever 0 < h≤ h_δ, |S_α(h) - κ_3 | < C_0^-1δ . Assuming 0< k≤ h_δ, since b_(k)≥1 we find from (<ref>) that | b_^-1(k) - b_0^-1(k)| ≤ C_0 |S_α( k) - κ_3| < δ . 2. Next, assume h_δ≤ k ≤ h_0, where h_0 was introduced in the previous lemma. By (<ref>) we get |S_α( k) - κ_3 | < h_δ^1-αζ_α+κ_3 =: C_1 . Then from the previous lemma, it follows C_0 |S_α ( k) - κ_3|/b_(k) ≤C_0C_1/1+ ν_1 k^α-1≤ C_0C_1 h_δ^1-α^α -1 . 3. Lastly, assume h_0 ≤ k < ∞. With the second bound from the previous lemma, we get C_0 |S_α ( k) - κ_3|/b_(k)≤C_0C_1/ν_2^α-1 . Using the inequalities above, we see there exists _0>0 (depending on δ) such that for all ∈ (0,_0) and all k ∈ (0,∞), |b_^-1(k) - b_0^-1(k)| < δ. This finishes the proof of the lemma. §.§ Existence proof We are now in a position to prove the part of Theorem <ref> concerning the existence and local uniqueness of solitary wave profiles, by invoking Lemma <ref> to obtain fixed points of (<ref>) for small >0. Let E=^1 and suppose W_0∈ E is a non-degenerate solution of (<ref>). Then u_0=W_0 is a fixed point of F= in E. The operator _0=I-D(W_0) on E is Fredholm due to Proposition <ref>(ii) and has trivial kernel in E, hence is invertible. Let C_0=_0_(E) and choose positive constants θ, C_1 and C_2 such that (<ref>) holds, i.e., C_0(C_1+C_2)<θ<1. Let R>C_H^1W_0_H^1 and let B={f∈ E: C_H^1f_H^1≤ R}. By applying Corollary <ref> and Propositions <ref>, <ref> and <ref>, we can find positive constants δ and _0 sufficiently small, such that whenever 0<<_0, then: (i) and _ are analytic on B, (ii) the residual bound (<ref>) holds, and (iii) whenever u-u_0_E≤δ we have u∈ B and estimates (<ref>) and (<ref>) hold. Then with G=_, Lemma <ref> applies and we conclude that for every ∈(0,_0), _ has a unique fixed point W=W_∈^1 satisfying W-W_0_H^1≤δ. 
Moreover there is a constant C independent of such that W_-W_0_H^1≤ C(W_0)-_(W_0)_H^1≤ C^α-1 α∈ (1,2], C ^3-α α∈(2,3). the last bound being due to Corollary <ref>. §.§ The Calogero-Moser case In the case α=2 that corresponds to an infinite Calogero-Moser lattice, we recall from <cit.> that traveling waves in the form x_j(t) = j-(j-ct) exist for any c>c_α=π, where the function =(z) takes values in (-1/2,1/2) and is determined for all z∈ by the implicit equation (c^2 - π^2)(z- φ) = πtanπφ . We seek to relate the velocity profile v_c(z)=c'(z) to the fixed point W_ provided by Theorem <ref> with W_0 taken to be the ground state solution of (<ref>) (known to be non-degenerate by <cit.>). Here, equation (<ref>) takes the form W + π |D| W = 1/2 (2π W)^2 , since κ_2=4π^2 and κ_3=π when α=2. Using that f(z)=i/(z+iπ) satisfies f'=if^2, one can check that a solution of (<ref>) is given by f(x)/π. By the classical uniqueness result of Amick and Toland <cit.>, this is the only solution of (<ref>) in ^1. Therefore, W_0(x) = 1/x^2+π^2 . If α =2 and > 0 is sufficiently small, then the fixed point W_ from Theorem <ref> with c^2 = π^2 + precisely satisfies '(z) = W_( z), where satisfies (<ref>). Let c^2 = π^2 +. Define ψ_ by '(z)=ψ_( z) where φ_ satisfies (<ref>) with z = j - ct. Since determines a solitary wave for (<ref>) by <cit.>, and by the discussion in Section <ref> above, ψ_ must satisfy the fixed point equation (<ref>). From Theorem <ref>, W_ is the unique fixed point of (<ref>) satisfying W-W_0_H^1≤δ. Thus to show ψ_=W_ it remains to show ψ_-W_0_H^1≤δ for small enough >0. Now, by differentiation of (<ref>), one derives that ψ_(x) = 1/(x-)^2+π^2+ . Then it straightforward to check that ψ_ converges to W_0 in H^1 as → 0 due to the boundedness of ψ_' and ψ_”. By the local uniqueness in Theorem <ref>, the proof is complete. § POSITIVITY AND REGULARITY In this section we establish the positivity and regularity properties of the velocity profiles that were stated in Theorem <ref>. §.§ Positivity First we remark on reasons why any solution W_0∈ H^1 of (<ref>) is positive. As we have pointed out, the Green's function for _0=I+κ_3 |D|^μ is positive. This follows by scaling from <cit.>. Alternatively, it can be proved by invoking Kato's formula <cit.> to show that for any λ>0 and s∈(0,1), (λ + |D|^2s) = sinπ s/π∫_0^∞t^s/λ^2+ 2λ t^scos(π s)+t^2s (t I-Δ) dt , and using the positivity of the Green's function for tI-Δ, which in dimension one is e^-√(t)|x|/2√(t). Curiously, we can get a third proof by taking the limit →0 in the next lemma, which we will use to study (<ref>). Let f∈^1. If f is positive (resp. unimodal) then _ f is positive (resp. unimodal). The proof is essentially similar to one provided in <cit.> for the corresponding operator in the case of finite-range interactions. From (<ref>) we may write b_(k) = 1 + α_1ζ_α^-μ(1-j_(k)), where j_(k) = ζ_α∑_m≥1 m^-α^2(12km). Then j_ is even, takes values in [0,1], and is the symbol of the Fourier multiplier _ = ζ_α∑_m≥1 m^-α_m^2 . The operator norm __(H^1)≤1, hence by Neumann series expansion, _ = ^μ/α_1ζ_α∑_n=0^∞_^n/(1+^μ/α_1ζ_α)^n+1 , and the series converges in operator norm. Suppose f∈^1 and f is positive (resp. unimodal). Since the same is true for _mf, for all f, we infer that _ f is positive (resp. unimodal). By induction, the same is true for _^n f, for all n≥1. It follows that _ f is positive (resp. unimodal) as well. Let f∈^1 with C_H^1f_H^1<1. Then _(f) is positive. Moreover, if f is unimodal, then _(f) is unimodal. 
From the definitions (<ref>)–(<ref>), we find ^2μ(_(f)+_(f)) = ∑_m=1^∞1/m^α_m Z_2(^μ_m f) , where Z_2(r) = α(1-r)^-α-1 -α-α_1 r. Because Z_2 is strictly convex with Z_2(0)=Z_2'(0)=0 we have Z_2(r)>0 for 0<|r|<1. Because _m preserves positivity, by Lemma <ref> it follows _(f) is positive. A similar argument applies to the unimodality statement. The positivity of the fixed points W_ of _ proved to exist in Section <ref> follows immediately from Lemma <ref>. Remarks on unimodality. Regarding the question of whether W_ is unimodal if W_0 is, we can only reiterate what was said on this subject by Herrmann and Mikikits-Leitner <cit.>. Unimodality would follow, if, starting from W_0, one could show that W_ arose as a fixed-point limit of a suitable variant of the (unstable) iteration scheme W ↦_(W) = _(_(W)+_(W)), Perhaps for this one could use Petviashvili iteration <cit.>, say, or compactness arguments similar to those Herrmann used in <cit.> for nearest-neighbor forces. Also see <cit.>. The analysis involved is outside the scope of the present paper, however. §.§ Regularity of velocity We will prove that the fixed points W_ are in H^∞ by a bootstrap argument based on equation (<ref>). We provide details since the terms in the infinite series depend on m (though weakly). Throughout the proof we keep ∈(0,_0) fixed and write W=W_ and a_m = ^μ_mW. By the choice of _0 in the existence proof we have that C_H^1a_m_H^1≤^μ R≤ρ where ρ<1. We note that for every m≥ 1 and every k≥1, we have Z^(k)∘ a_m-Z^(k)(0)∈ H^1 with Z^(k)∘ a_m-Z^(k)(0)_H^1≤ Z^(k)(ρ) - Z^(k)(0) , due to the fact that the Maclaurin series for Z(r) has positive coefficients and unit radius of convergence. We will prove by induction that for every integer n≥0, W∈ H^n+1 and ^μ c^2 W^(n) = ∑_m=1^∞ m^-α_m(Z_1∘ a_m)^(n) , where Z_1(r)=Z(r)-α, with the series converging in H^1. This holds for n=0, since W∈ H^1 and (<ref>) holds in H^1. Now fix n∈ and suppose W∈ H^n+1 with (<ref>) holding in H^1. Then W∈ C^n, (Z_1∘ a_m)^(n)=(Z∘ a_m)^(n), and by the Faà di Bruno formula, (Z∘ a_m)^(n) = ∑_k∈Λ_nnk (Z^(|k|)∘ a_m)·∏_j=1^n (a_m^(j)/j!)^k_j , where Λ_n = {k=(k_1,…,k_n)∈^n: ∑_j=1^n j k_j=n}, nk = n!/k_1!⋯ k_n!, and |k|= k_1+…+k_n. From (<ref>) and (<ref>), it follows easily that (Z∘ a_m)^(n) is bounded in H^1 uniformly in m, by writing Z^(|k|)∘ a_m = Z^(|k|)∘ a_m - Z^(|k|)(0) + Z^(|k|)(0) , and using the the Banach algebra property of H^1 together with the fact that a_m^(j)|_H^1≤^μ W^(j)_H^1 for all m. The map _m is bounded from H^1 into H^2 with bound independent of m. (The bound depends on but it does not matter here.) We infer therefore that the series (<ref>) converges in H^2. Hence W∈ H^n+2 and (<ref>) holds in H^1 with n replaced by n+1. This completes the induction step, and finishes the proof that W∈ H^∞. §.§ Regularity in wave speed In this subsection we prove the part of Theorem <ref> stating that the unscaled velocity profile is analytic as a function of wave speed. Similar to what was done in <cit.>, we look at a fixed scaling, and apply the analytic implicit function theorem in complexified Banach spaces, as provided by Berger <cit.>. Let W_ be the wave profiles provided by the existence proof in Subsection <ref> for 0<<_0. Fixing some such , define W_,β(x) := η^μW_η (η x) , β = η^μ , whenever 0<η<_0. This function is related to the unscaled velocity profiles described in (<ref>) by v_c(z) = c^μ W_,β( z) with c^2 = c_α^2 + β^μ . 
Thus, to study the regularity of v_c as a function of c, it suffices to fix and study W_,β as a function of β in an interval around β=1. Define _,β = β I + α_1 ∑_m ≥ 1^-μ/m^α (I - _m^2) . (1) For 0<η <_0, W_,β satisfies the traveling wave equation _,β V = _ (V) + _(V) . (2) Moreover, there exists an interval (β_-,β_+), which contains 1 and depends upon , on which the map β↦ W_,β∈^1 is analytic. Using the scaling formulas in the following lemma, we get that W_,β solves the traveling wave equation (<ref>) after setting =η and multiplying (_ W_)(η x) = (_(W_) + _ (W_))(η x) by η^2μ. We have η^μ(_mW_)(η x) = (_m W_,β)(x) , η^kμ_m[ (_m W_)]^k(η x) = _m [ (_m W_,β)]^k(x) , _m Z_3 (^μ(_mW_))(η x) = _m Z_3 (^μ(_m W_,β))(x) . Through the change of variables z=η y, we get η^μ (_mW_ )(η x) = 1/m∫_-m/2^m/2η^μ W_(η x + z) dz = 1/m∫_-m/2^m/2 W_,β(x+y) dy = (_mW_,β)(x) . Similarly, for all k≥1, η^kμ_m[(_m W_)]^k(η x) = 1/m∫_-m/2^m/2[η^μ (_m W_) (η x + z)]^k dz = 1/m∫_-m/2^m /2 [ (_m W_,β)(x+y)]^k dy = _m[(_m W_,β)]^k(x) . Finally, _m Z_3 (^μ (_m W_))(η x) = ∑_k ≥ 3α_k _m [ ^μη^μ (_m W_ )]^k(η x) = ∑_k ≥ 3α_k _m [^μ(_m W_,β)]^k(x) = _m Z_3(^μ(_mW_,β))(x) . The mapping β↦_,β^-1∈(H^1) is analytic on (0,∞). First, by linearity of the inverse Fourier transform, it suffices to show that β↦ b_,β^-1∈ L^∞ is analytic. Let β_0 ∈ (0,∞). Naming f(k) = |k|^α -1 S_α(|k|), we get b_,β^-1(k) = 1/β + f(k) = 1/β_0 + f(k)1/β - β_0/β_0 + f(k) + 1 = ∑_n≥ 0(-1)^n/(β_0 + f(k))^n+1 (β - β_0)^n , granted that |β - β_0| ≤β_0, since f(k)≥0. Hence, the mapping is analytic. Let > 0. The operator _:(0,∞) × (B̃_R∩^1) →^1 defined by _ (β, V) = _,β^-1(_(V) + _(V)) is analytic, jointly in β and V. We omit the proof of this lemma, as it is straightforward to justify local convergence of power series expansions given the results of Lemmas <ref> and <ref>. For small enough , it follows from estimates in Theorem <ref> that I - D_(W_) is invertible. This is the partial derivative of the function f(β,V):=V-(β,V) with respect to V, at the point (1,W_) where f vanishes. Using the joint analyticity to develop a power series expansion at the point (1,W_), we can extend f to be analytic in a ball around (1,W_) in the complexification of the real Hilbert space ×^1. The Frechèt derivative D_Vf at this point is the natural extension of the real operator I-D_(W_) and remains invertible. We can deduce then from the analytic implicit function theorem (see <cit.>) that for some interval (β_-,β_+) containing 1, there exists an analytic mapping β↦W̃_,β taking values in ^1 (complexified) such that W̃_,β is a solution of (<ref>). But by local uniqueness, we deduce that W̃_,β = W_,β. § HAMILTONIAN ENERGY AND WAVE SPEED Let us study the behavior of the Hamiltonian as a function of wave speed. We have not yet written a Hamiltonian for system (<ref>), due to complications over convergence of the double sums that appear. To proceed we describe a potential function related to the force function Z(r)=α(1-r)^-α-1=∑_k=0^∞α_k r^k, defined so that _2(0)=0 and _2'(r)=Z(r)-α, whence _2(r) = (1-r)^-α - 1 -α r = ∑_k=2^∞α_k-1 r^k/k . The lattice Hamiltonian, kinetic, and potential energies are regarded as functions of the particle positions x_j and momenta p_j=ẋ_j and are given by = + , = ∑_j∈12 p_j^2 , = ∑_j∈∑_m=1^∞ m^-α_2(r_j+m,j) , where the quantities r_k,j, representing normalized relative compressions, are defined via 1- r_k,j = x_k-x_j/k-j . In particular, note (x_j+m-x_j)^-α = m^-α(1-r_j+m,j)^-α . 
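For a finite chain with the interaction range truncated at some m_max, these definitions can be evaluated directly. The following minimal NumPy sketch (the truncation, the finite chain, and all names are our illustrative assumptions, not part of the analysis) computes the kinetic, potential, and total energies from positions and momenta; the unperturbed lattice x_j = j, p_j = 0 gives zero energy since the potential vanishes to second order at r = 0.

import numpy as np

def phi2(r, alpha):
    # Potential Phi_2(r) = (1 - r)**(-alpha) - 1 - alpha*r, with Phi_2(0) = Phi_2'(0) = 0.
    return (1.0 - r) ** (-alpha) - 1.0 - alpha * r

def lattice_hamiltonian(x, p, alpha, m_max=200):
    # x, p: positions and momenta of a finite chain with increasing x;
    # the infinite sum over m is truncated at m_max (illustrative simplification).
    kinetic = 0.5 * np.sum(p ** 2)
    potential = 0.0
    for m in range(1, min(m_max, len(x) - 1) + 1):
        r = 1.0 - (x[m:] - x[:-m]) / m        # normalized relative compressions r_{j+m, j}
        potential += np.sum(phi2(r, alpha)) / m ** alpha
    return kinetic + potential

# Sanity check: the unperturbed lattice has zero energy.
x0 = np.arange(200, dtype=float)
print(lattice_hamiltonian(x0, np.zeros_like(x0), alpha=2.0))   # -> 0.0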
It is straightforward to check that the canonical Hamiltonian equations for yield (<ref>) and that is finite and constant in time for solitary wave solutions. The main result in this section is the following result which links the value of the Hamiltonian to an approximation of the squared L^2 norm of the unscaled velocity profile ẋ_j(t) = c^μ W_( z), z =j-ct. For the Hamiltonian evaluated along the family of solitary waves given by Theorem <ref>, we have = ^2μ-1(∫_ c_α^2 W_0(x)^2 dx + O(^γ)) , d/d = (2μ-1)^2μ-2(∫_ c_α^2 W_0(x)^2 dx + o(1)) . Thus for α3/2, d/dc agrees with (α-3/2) for small enough >0. For the Calogero-Moser case α=2, when the solitary waves are determined through (<ref>) by <cit.>, we can be more explicit. In the case α=2, for the solitary waves determined by (<ref>), for every wave speed c>π we have = 12(c^2-π^2) . To study the Hamiltonian on solitary waves, we write the waves provided by Theorem <ref> in the form x_j(t) = j- q(j-ct), ẋ_j(t) = -p(j-ct) , temporarily suppressing dependence on wave speed (and with apologies for the sign reversals but noting _t q = p = -cq'). With z=j-ct we then have that r_j+m,j = δ^+_m q(z) := q(z+m)-q(z)/m . As in <cit.>, we average the Hamiltonian over a time interval [0,1/c] to reduce it to an integral over . Because dz=-c dt, we find the expressions = c ∫_0^1/c dt = c ∫_-∞^∞( 12 p(z)^2 + ∑_m=1^∞ m^-α_2( δ^+_mq(z) ) ) dt = ∫_-∞^∞( 12 p(z)^2 + ∑_m=1^∞ m^-α_2( δ^+_mq(z) ) ) dz . Although the lattice system (<ref>) does not admit a continuous spatial symmetry, we note that traveling wave profiles are nevertheless formally critical points of an “energy-momentum” functional +c, where is the Noether functional associated with the (Lagranian) translation invariance of (<ref>) and is given by = ∫_-∞^∞ p(z)_z q(z) dz. Indeed, setting to zero the variations of c+ with respect to p and q yields 0 = c_z q(z)+p(z) , 0 = -c_z p + ∑_m=1^∞ m^-α-1( Z(δ^+_m q(z))- Z(δ^+_m q(z-m)) ) , which are the correct equations for solitary wave profiles. In other words, on solitary wave profiles we have δ=-cδ, a fact which will simplify a monotonicity calculation below. (This functional differs from the physical momentum ∑_j p_j generated by the translational symmetry x_j↦ x_j+h, however.) With these relations established, let us now insert the scaled form of solitary wave profiles provided by our main theorem. We indicate by subscript the dependence of the profile tuple u_c=(q_c,p_c) upon wave speed c. In particular, our ansatz (<ref>) and the relation U'=W_ yields q(z) = ^ν U(x) , p(z) = -c_zq(z)= -c^μ W_(x) , with x=(j-ct)= z. Then, we have the relation δ^+_m q(z) = ^μ_mW_(x+12m) , and, using the facts that _2(r)= O(r^2) and W_∈ H^1, we obtain (u_c) = ^2μ-1∫_( 1/2 c^2 W_(x)^2 + ∑_m=1^∞ m^-α^-2μ_2(^μ_mW_(x) ) ) dx , (u_c) = -^2μ-1∫_ c W_(x)^2 dx . Write _3(r)=∑_k=3^∞α_k-1r^k/k, so that _2(r) = 1/2α_1 r^2 + _3(r), and define _2,(W) = α_1/2∫_∑_m=1^∞ m^-α (_mW(x))^2 dx , _3, = ∫_∑_m=1^∞ m^-α^-2μ_3(^μ_mW_(x) ) dx . In terms of these expressions we have (u_c) = ^2μ-1( ∫_1/2c^2 W_^2 dx + _2,(W_) + _3,) . As → 0 we have (u_c) = ^2μ-1( ∫_ c_α^2 W_0^2 dx + O(^γ) ) . 1. By Theorem <ref> we have that W_-W_0_H^1=O(^γ), hence |∫_ W_^2 dx - ∫_ W_0^2 dx |≤ C W_-W_0_L^2≤ C ^γ. 2. Noting that | ∫_ (A_mW_)^2 dx - ∫_ (A_mW_0)^2 dx | ≤ C_m(W_-W_0)_L^2≤ C^γ, straightforward estimates imply |_2,(W_)-_2,(W_0)| ≤ C^γ. 
Furthermore, by using (<ref>) and the regularity of W_0 we get |∫_ (A_mW_0)^2 dx -∫_ W_0^2 dx | ≤ C_mW_0-W_0_L^2≤ C (m)^2 , so by splitting the sum in (<ref>) just as in the proof of Lemma <ref>, we find |_2,(W_0) - 12α_1ζ_α∫_ W_0^2 dx| ≤ C^μ . 3. By arguments nearly identical to those that establish the estimates for _3 in Lemma <ref>, we find that |_3,|≤ C^μ. 4. Recalling that c_α^2 = α_1ζ_α and c^2=c_α^2+^μ and μ≥γ, the proof is finished by using the estimates in steps 1-3 to estimate the terms in (<ref>). This proposition establishes (<ref>), and it remains to discuss the monotonicity of solitary-wave energy as a function of wave speed. Define W_0,β(x) = η^μW_0(η x) where η = β^1/μ . Through scaling, we find W_0,β to be a solution of the limiting equation _0,βV = _0(V), _0,β = β I + κ_3|D|^μ , which reduces to (<ref>) when β=1. We have ∫_ℝ 2W_0 ∂/∂β W_0,β|_β = 1 = 2μ - 1/μ∫_ℝ W_0^2 , and that as → 0, ∂/∂β (W_,β - W_0,β)|_β = 1_H^1→ 0 . 1. We have ∫_ℝ W_0,β^2(x) dx = β^2∫_ℝ W_0(β^1/μx) dx = β^2-1/μ∫_ℝ W_0^2(z) dz . Hence at β =1, d/d β∫_ℝ W_0,β^2 dx = ∫_ℝ 2 W_0 ∂/∂β W_0,β dx = 2μ -1/μ∫_ℝ W_0^2 dx . 2. From differentiating the traveling wave equations, (<ref>) for W_,β and (<ref>) for W_0,β, against β, we get V_:= ∂/∂β W_,β|_β = 1 = - (I - D_(W_))^-1(_^-1W_) , V_0:=∂/∂β W_0,β|_β =1 = - (I - D(W_0))^-1(_0^-1W_0) . The convergence V_→ V_0 in H^1 is obtained through the operator norm convergence _^-1→_0^-1 and the estimates in Proposition <ref>. One should note that we lack a rate of convergence due to our result for _^-1→_0^-1. We have that as → 0, d/dc(u_c) = 2c_α^3^μ-12μ - 1/μ∫_ℝ W_0^2 dx + o(^μ-1) . Using the definition (<ref>) of W_,β and with the scaling c^2= c_α^2+β^μ, from (<ref>) we get (u_c) = -c ∫_ℝ (^μW_,β( z))^2 dz = -c ^2μ-1∫_W_,β^2(x) dx . Then, fixing and differentiating in β at β=1, since dβ/dc = 2c^-μ we find d/dc(u_c) = -^2μ-1∫_ℝ W_,β^2(z) dx - c ^2μ-1∫_ℝ 2W_,β∂/∂β W_,β dx dβ/dc|_β =1 = O(^2μ-1) -2c^2 ^μ-1∫_ℝ 2W_ V_ dx . Recalling that δ = -c δ, we have d/dc(u_c) = 2c^3^μ-1∫_ℝ 2 W_ V_ dx + O(^2μ -1) . Expanding c^2=c_α^2+^μ and using the previous lemma gives d/dc (u_c) = 2c_α^3^μ-1∫_ℝ 2 W_0 V_0 dx + o(^μ-1) = 2c_α^3^μ-12μ-1/μ∫_ℝ W_0^2 dx + o(^μ-1) . This completes the proof. Now, through multiplying (<ref>) by dc/d = μ^μ-1/2c = μ^μ-1/2c_α +O(^2μ-1) , we deduce (<ref>). This completes the proof of Theorem <ref>. We conclude by calculating the Hamilonian in the case of the Calogero-Moser lattice when α=2. We have (u_c)→0 as c→π^+ by Proposition <ref>, so the claimed formula (u_c)=1/2(c^2-π^2) follows by integration from the formula d/dc = -cd/dc = c , which holds due to the following computation. When α=2, for all c>π we have (u_c) = π -c. For α=2, the solitary waves satisfy x_j(t)=j-(j-ct) with (z) satisfying (<ref>). Hence q(z)=(z), so from (<ref>) and (<ref>) it follows (u_c) = -c∫_(d/dz)^2 dz = -c∫_-1/2^1/2d/dz d , since →±1/2 as z→±∞. Differentiating (<ref>), we see (c^2-π^2) = d/dz(c^2 +π^2 tan^2π) , whence (u_c) = -c∫_-1/2^1/2(1 - π^2^2π/c^2+π^2tan^2π) d . Using the substitution c y = πtanπ one finds the claimed result. § ACKNOWLEDGEMENTS This material is based upon work supported by the National Science Foundation under Grant No. DMS 2106534. Thanks go to Doug Wright for very helpful discussions. siam
http://arxiv.org/abs/2406.08285v1
20240612145040
A New Class Biorthogonal Spline Wavelet for Image Edge Detection
[ "Dujuan Zhou", "Zizhao Yuan" ]
cs.CV
[ "cs.CV" ]
A New Class Biorthogonal Spline Wavelet for Image Edge Detection Dujuan Zhou1, Zizhao Yuan1 1These authors contributed equally to this work. June 17, 2024 ================================================================================ § ABSTRACT Spline wavelets have shown favorable characteristics for localizing in both time and frequency. In this paper, we propose a new biorthogonal cubic special spline wavelet (BCSSW), based on the Cohen–Daubechies–Feauveau wavelet construction method and the cubic special spline algorithm. BCSSW has better properties in compact support, symmetry, and frequency domain characteristics. However, current mainstream detection operators usually ignore the uncertain representation of regional pixels and global structures. To solve these problems, we propose a structural uncertainty-aware and multi-structure operator fusion detection algorithm (EDBSW) based on a new BCSSW spline wavelet. By constructing a spline wavelet that efficiently handles edge effects, we utilize structural uncertainty-aware modulus maxima to detect highly uncertain edge samples. The proposed wavelet detection operator utilizes the multi-structure morphological operator and fusion reconstruction strategy to effectively address anti-noise processing and edge information of different frequencies. Numerous experiments have demonstrated its excellent performance in reducing noise and capturing edge structure details. Keywords: Edge detection, structural uncertainty perception, modulus maxima, anti-noise morphology, biorthogonal cubic special spline wavelet. § INTRODUCTION Edge is a key feature of an image that is important for many vision tasks, including object detection, image segmentation, object recognition, tracking, and 3D reconstruction. The purpose of edge detection is to extract clear and continuous contour information from the target scene, providing simplified scene and object-of-interest information for advanced visual tasks. The detection effect is mainly affected by image noise, brightness, and contrast, which can result in false or broken edges. Therefore, it is crucial to design an effective detection operator that can mitigate these issues. Researchers have investigated various detection operators, focusing primarily on denoising, edge sharpening, continuity, and detail preservation. Edge detection methods are typically classified as traditional operator detection, wavelet transform-based detection, and neural operator (NO) detection. In the following section, we first review the current research on detection operators. Previously proposed conventional operators primarily rely on variations in luminance and gradient to identify edges. The Canny <cit.> operator combines the gradient and non-maximum suppression methods to obtain refined edges. The Prewitt <cit.>, Sobel <cit.>, Roberts <cit.>, and Laplacian operators <cit.> use convolution kernels to filter the image and identify edges in the direction of the gradient. However, these operators are sensitive to noise and have low detection accuracy, making it difficult to perform adaptive edge detection. Tian et al. <cit.> combined the weighted kernel paradigm minimization and Sobel operator to improve the noise robustness capability of the Sobel operator and achieve better detection results. Mittal et al.
<cit.> proposed the Canny optimization algorithm, which simulates triple thresholding and efficiently carries out edge detection while improving edge continuity. Isar et al. <cit.> proposed optimizing the Canny operator by introducing a two-stage denoising system in the wavelet transform domain to improve its robustness to noise. Extracting edge features that are fully adapted to complex backgrounds remains a significant challenge due to limitations in image feature representation and differences in time-frequency domain analysis. Wavelet transform has gained significant attention from researchers due to its powerful multi-scale decomposition capability and spatial domain processing. The wavelet transform is commonly used in various visual tasks, such as image enhancement <cit.>, denoising <cit.>. For edge detection, wavelet-based detection operators <cit.> typically combine wavelet transform and modulus maxima. Inspired by Wavelet modulus operators, Gu et al. <cit.> proposed an improved wavelet mode-maximization algorithm that enhances edge contours by fusing light intensity and polarized light. You et al. <cit.> further optimized the algorithm by incorporating OTSU thresholding to detect edges in complex background images. Morphology-based edge detection is effective in reducing noise, particularly edge noise, due to its rational structural element design. Shui et al. <cit.> proposed an edge detector that is resistant to impulse noise, based on anisotropic morphological directional derivatives, to achieve competitive edge results in both noiseless and Gaussian noise scenarios. Yin et al. <cit.> proposed a multi-scale, multi-directional approach using structural elements and a Mahalanobis distance-weighted detection operator to extract smoother, more detailed, and noise-resistant edges. In recent years, there has been significant progress in machine learning (ML) methods. Dollar et al. <cit.> proposed a structured learning method for random decision forests with the aim of building fast and accurate edge detectors. Hallman et al. <cit.> proposed directed random forests based on simplified feature representations for linear boundary detection and edge sharpening of image blocks. Neural operators are often based on data-driven approaches, where they learn edge feature representations of labels to recognize edges more accurately. However, their performance heavily relies on well-labeled data <cit.> and they still suffer from coarser edges, poor noise robustness, and difficulty in retaining detailed information about object structure. More importantly, few researchers have studied the impact of different wavelets on the performance and effectiveness of edge detection algorithms. Therefore, in this work, we mainly propose a new, effective spline wavelet detection operator. This detection operator mainly includes structural uncertainty-aware modulus maxima and multi-structural anti-noise morphology. Experimental results show that the proposed spline wavelet detection operator has better anti-noise performance than other wavelets and operators, and can detect more complete and continuous edge information. Overall, our contributions are as follows: * We propose a new and effective BCSSW for edge detection based on the CDF wavelet construction method and the cubic special spline algorithm, which provides better image smoothing and noise suppression, resulting in a high-quality low and high-frequency prior for detection compared to other wavelets. 
* We propose a novel BCSSW edge detector (EDBSW) based on the fusion of structural uncertainty perception modulus maxima and anti-noise operators. Inspired by uncertainty-aware detectors, we introduce low-frequency structural uncertainty perception into a wavelet transform-based detection operator for the first time. * To our knowledge, we are the first to compare the effectiveness of different wavelets in edge detection with the proposed BCSSW quantitatively and qualitatively. We evaluate the proposed detection operator by substituting the filter parameters of various wavelets in the operator. The experimental results demonstrate that the proposed spline wavelet detection algorithm can excel in MSE and PSNR metrics compared to Haar and other wavelets. § METHODOLOGY In this section, we present the proposed BCSSW spline wavelet and edge detection algorithms, which contain (a) derivation of the new spline wavelet low- and high-pass filter coefficients, (b) modulus maxima detection algorithms based on structural uncertainty perception, (c) design and implementation of the multi-structural anti-noise operator, (d) fusion strategies for morphological refinement and reconstruction. Overall, the diagram of the proposed new spline wavelet edge detection algorithm is shown in Fig. <ref>. The input image I is upsampled and decomposed using the spline wavelet to obtain its low-pass and high-pass components. The first branch detects the low-frequency component cA by applying a multi-structure anti-noise morphological operator to obtain the low-frequency feature map E_d. The high-pass component H_c and low-pass component L_c are then filtered sequentially using wavelet-based modulus maxima and an adaptive threshold to obtain E_h and E_l in the second branch. These components are then weighted and fused to obtain the mask E_m. The third branch applies the high-frequency components cH, cV, and cD to the structural uncertainty-aware modulus maxima to obtain structurally aware edge feature representations CH^', CV^', and CD^'. After the adaptive threshold filter, the high-frequency edge feature component M_f undergoes wavelet reconstruction and downsampling to produce E_r. E_r is then subjected to morphological refinement to obtain G_i. Eventually, G_i and E_m are fused by morphological reconstruction and context-weighted fusion of the original E_m to obtain the final detailed edge image, E_u. The following section describes the specific algorithmic implementations and related parameter derivations of each branch in detail. §.§ A New Biorthogonal Cubic Special Spline Wavelet (BCSSW) §.§.§ Cubic Special Spline Algorithm In our work, BCSSW is based on the cubic special spline algorithm proposed by <cit.>. They proposed a novel spline algorithm and provided various representations of cubic splines with different compact supports. In this paper, we selected one of the cubic splines with the smallest compact support, as shown in (<ref>), and on the basis of this spline, we derived a new class of spline wavelet algorithms following the CDF method of wavelet construction. S(t) = 451/3β_3(t)-256/3(β_3(t-1/16)+β_3(t+1/16))+ 64/3(β_3(t-1/8)+β_3(t+1/8)), where β_3(t) is the cubic B-spline: β_3(t)=∑_i=0^4(-1)^i/3 !([ 4; i ])(t+2-i)^3 ·ϖ(t+2-i), t ∈ R, where ϖ(t) is the unit step function ϖ(t)= 0, t<0, 1, t ≥ 0. S(t) arises from a linear combination of normalized and shifted B-splines of the same order.
Consequently, S(t) can inherit nearly all the favourable properties of β_3(t), including analyticity, central symmetry, local support, and high-order smoothness. Moreover, S(t) can directly interpolate the provided data without the need to solve coefficient equations, a capability that the B-spline lacks. The Fourier transform expression of S(t) is: S(ω)=(451/3-512/3 cos(ω/16)+64/3 cos(ω/8))(sin(ω/2)/(ω/2))^4. The spline S(t) and the Fourier transform S(ω) are separately plotted in Fig. <ref>. §.§.§ Constructing Biorthogonal Cubic Special Spline Wavelet (BCSSW) <cit.> and <cit.> have proved that the B-spline β_m(t) is the scaling function of the corresponding multi-resolution analysis. S(t) is formed by the linear combination of translations and dilations of the B-spline β_3(t). Therefore, we can naturally deduce the following conclusion. The subspaces V_j^3 are generated by binary dilation and integer translation of S(t), as follows: V_j^3=span{2^j/2 S(2^jt-k), k ∈ Z}, j ∈ Z, where {V_j^3}_j ∈𝐙 forms a general multi-resolution analysis (GMRA) in L^2(𝐑), called spline multi-resolution analysis, and S(t) is the corresponding scaling function. According to the theory of wavelet construction, S(t), as a scaling function, can be used to construct a new wavelet ψ(t). Let S^*(t) be the dual scaling function of S(t) and ψ^*(t) be the dual wavelet of ψ(t); then their corresponding low-pass filters are: H(ω)=1/2∑_n=N_1^N_2 h_n e^-i n ω, H^*(ω)=1/2∑_n=L_1^L_2 h_n^* e^-i n ω, and the high-pass filters are: G(ω)=1/2∑_k=1-L_2^1-L_1 g_k e^-i k ω, G^*(ω)=1/2∑_k=1-N_2^1-N_1 g_k^* e^-i k ω, where N_1, N_2, L_1, L_2 are all integers, N_2-N_1+1 and L_2-L_1+1 are the lengths of H(ω) and H^*(ω), respectively, and g_k=(-1)^k h_1-k, g^*_k=(-1)^k h^*_1-k. All coefficients are real. We also construct a new class of compactly supported wavelets based on the CDF method. Wavelets with compact support exist as long as the two-scale sequence of the related scaling function is finite. In this paper, we set H(ω) and H^*(ω) to be of odd length with support sets symmetric about 0. The vanishing moment orders of H(ω) and H^*(ω) are N and N^*, respectively, and they can be represented as: H(ω)=cos^2N(ω/2) Q(cos(ω)), H^*(ω)=cos^2N^*(ω/2) Q^*(cos(ω)), where Q(cos(ω)) and Q^*(cos(ω)) are polynomials in cos(ω). Let P(sin^2(ω/2))=Q(cos(ω)) Q^*(cos(ω)); then, with y=sin^2(ω/2), we have: P(y)= ∑_n=0^L-1([ L-1+n; n ]) y^n, where L=N+N^*. From the time-domain expression of the two-scale equation corresponding to S(t), the low-pass filter associated with S(t) in the frequency domain can be obtained as follows: H(ω)=S(2ω)/S(ω) =(451-512 cos(ω/8)+64 cos(ω/4))/(451-512 cos(ω/16)+64 cos(ω/8)) · cos^4(ω/2). From (<ref>), (<ref>), and (<ref>), we can obtain N=2, and Q(cos(ω))=(451-512 cos(ω/8)+64 cos(ω/4))/(451-512 cos(ω/16)+64 cos(ω/8)), Q^*(cos(ω))=P(sin^2(ω/2))/Q(cos(ω)). When N=2 and L takes different values, for example L=4,5,6,7, we obtain multiple corresponding values of N^*. Substituting these values into Equations (<ref>), (<ref>), and (<ref>) and taking the inverse Fourier transform, we obtain multiple groups h_n, h^*_n of low-pass filter coefficients of the new biorthogonal spline wavelet. Considering the symmetry of the coefficients, we only give n=0,1,2,3,.... In practical applications, the corresponding odd coefficients can be selected symmetrically for image processing. We can then calculate the corresponding g_n, g^*_n, the high-pass filter coefficients of ψ(t) and ψ^*(t).
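To make the filter relations above concrete, the high-pass coefficients can be generated directly from the low-pass ones via g_k=(-1)^k h_1-k, and the frequency responses follow from their definitions. The short Python/NumPy sketch below illustrates this; the numerical coefficient values are placeholders for illustration only, not the h_n, h^*_n derived in this paper.

import numpy as np

# Placeholder symmetric, odd-length low-pass coefficients indexed by n; these are
# illustrative values only, NOT the filters derived in this paper.
h = {-2: 0.05, -1: 0.25, 0: 0.40, 1: 0.25, 2: 0.05}      # low-pass h_n of H(omega)
h_star = {-1: 0.25, 0: 0.50, 1: 0.25}                     # dual low-pass h*_n of H*(omega)

def highpass_from_lowpass(low):
    # g_k = (-1)^k h_{1-k}: each low-pass tap h_n contributes the tap g_{1-n}
    return {1 - n: (-1) ** (1 - n) * c for n, c in low.items()}

g = highpass_from_lowpass(h)            # g_k from h_n
g_star = highpass_from_lowpass(h_star)  # g*_k from h*_n

def filter_response(coeffs, omega):
    # H(omega) = 1/2 * sum_n h_n * exp(-i n omega), evaluated on a frequency grid
    return 0.5 * sum(c * np.exp(-1j * n * omega) for n, c in coeffs.items())

omega = np.linspace(-np.pi, np.pi, 513)
H, G = filter_response(h, omega), filter_response(g, omega)
print(abs(H[256]), abs(G[256]))  # low-pass response is large at omega = 0, high-pass vanishes there

With a properly biorthogonal pair (h_n, h^*_n), the four filters obtained in this way form the analysis/synthesis bank used for decomposition and reconstruction, as described next.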
The filter bank in the frequency domain is {H(ω), G(ω), H^*(ω), G^*(ω)}, the decomposition and reconstruction processes use two different sets of filters, respectively. It was decomposed with {h^*_n} and {g^*_n}, the reconstruction uses a different pair of filters {h_n} and {g_n}. Because of this, we make {h^*_n} and {g^*_n} wavelet decomposition filters, and {h_n} and {g_n} wavelet synthesis filters. §.§ Modulus maxima based on structural uncertainty-aware perception Essentially, structural uncertainty perception is a global feature filter for structural uncertainty. Specifically, the algorithm consists of three parts: structural uncertainty feature selection, wavelet-based modulus maxima, and adaptive threshold filter. The pseudo-code for the algorithm is given in Algorithm <ref>. §.§.§ Structural uncertainty-aware feature selector The algorithm analyzes the structural statistics of the low-frequency cA, including the mean μ_D and standard deviation δ_D. The acceptable deviation range is set to 0.05, which corresponds to a permissible detection error of 5%. Each standard deviation of each high-frequency component modulus δ_D(h), δ_D(v), δ_D(d) is calculated. The low-frequency component distribution N_a ∼𝒩(μ_D, δ_D), is defined as the candidate edge information. The deviation of the standard deviation is then calculated to determine if it satisfies the error range. This is equivalent to converting the detection deviation of the edge to a global threshold setting, which determines the final selected region of the true detection edge. §.§.§ Wavelet-based modulus maxima For the wavelet-based modulus maxima algorithm, we first calculate the modal value and gradient direction (angle) of the detected wavelet component region, which can be defined as follows: C_x=∂ C_1(x, y)/∂ x C_y=∂ C_1(x, y)/∂ y where C_1(x,y) is the wavelet components of the first level of decomposition. C_x and C_y are the gradients of the different wavelet components in the horizontal and vertical directions. Then determine the neighborhood coordinates of the pixel based on the angle. For the low-frequency modulus of the second branch M_u(x, y) can be expressed as follows: M_u(x, y)=√(|C_x|^2+|C_y|^2) where | C_x| and | C_y| are the modulus components corresponding to the x and y directions. Similarly, the computation of the modal values of the high-frequency components M_c(x, y) is specified as (<ref>): M_c(x, y)=√(|C_H|^2+|C_V|^2+|C_D|^2) where | C_H(x,y)|, | C_V(x,y)|, | C_D(x,y)| denote the modulus of the horizontal, vertical and diagonal components respectively. The direction of the separate wavelet components A_u can be shown in (<ref>): A_u=arctan(C_y/C_x) Similarly, the direction of the high-frequency components A_s is shown in Equation (6): A_s=arctan(C_H/C_V) Specifically, we choose π/4 and 3π/4 to determine whether there is an approximate edge gradient direction and obtain the corresponding neighbor coordinates. Comparing the modulus of the two neighboring points in the gradient direction, a pixel is considered locally maximal if it is the neighborhood maximum in the gradient direction, while the other edge pixels are 0. The modulus maxima M'(x, y) can be expressed as follows: M'(x, y) = M(x, y), if M(x, y) > M(n_x1, n_y1) and M(x, y) > M(n_x2, n_y2) 0, otherwise where M(n_x 2, n_y 2) denotes the modulus of M(x, y-1), M(x, y+1), and M(n_x 1, n_y 1) denotes the modulus of M(x-1, y-1), M(x+1, y+1). 
This ensures that only local maxima along the gradient direction are retained to effectively sparse the edges, and non-zero values are used as the final matrix of edge coefficients. §.§.§ Adaptive threshold filter To better adapt to differences in edge strengths across gradient directions, we apply an adaptive threshold filter to the edge strengths using the average of the maximum modal value D_max and the minimum modal value D_min as the threshold value T. This produces the final edge image (M_f), with the threshold value T determined by (<ref>): T=(D_max+D_min) / 2 §.§ Morphology detection based on the multi-structure anti-noise operator For the design of multi-structural anti-noise morphological operators, we combine structural elements in three various directions to comprehensively consider texture information in each direction of the detected object. We denote g(x) as the input gray-scale image and the structural elements are denoted as λ(x). Assuming that β belongs to ℛ, a 3 × 3 matrix, considering the uniformity of the response intensity in different directions of the edges and the orientation of the wavelet components to efficiently detect the edges in all directions, we design three sets of structural elements with the bases respectively in (<ref>): λ_1=μ[ 0.5 1 0.5; 1 2 1; 0.5 1 0.5 ], λ_2=μ[ 0 0.5 0; 0.5 0.5 0.5; 0 0.5 0 ], λ_3=μ[ 0.5 0 0.5; 0 0.5 0; 0.5 0 0.5 ], λ_h= [ -1 -1 -1; -1 8 -1; -1 -1 -1 ] where the weight μ amplifies the intensity to adjust the region brightness. The value of μ is 2 and λ_h is the Laplacian operator. Dilation increases the size of the foreground by calculating the maximum value within the region of structural elements. Whereas erosion aims to refine the edges of the foreground, morphological dilation, and erosion can be shown as follows: (gΘλ)(x)=min _y ∈β(g(y)-λ(y-x)) (g⊕λ)(x)=max _y ∈β(g(y)+λ(x-y)) where (x-y) ∈ ℛ, β is the domain of the structural elements, and ℛ is the domain of the gray-scale image. Opening operation can preserve the shape structure and size information of the low-frequency components, combined with the structural elements of the design to remove small objects and noise, which is represented as follows: (g ∘λ)=(gΘλ) ⊕λ Finally, the multi-structure anti-noise operator is expressed as follows: E_d=[(g ⊕λ_1Θλ_2) ∘λ_3]-[(g ⊕λ_1Θλ_2) Θλ_3] where E_d denotes the edge image obtained by the multi-structure anti-noise morphological operator. This operator suppresses edge noise effectively, smooths the boundaries of larger objects, and retains important structural information by utilizing the difference in secondary corrosion. §.§ Fusion strategies for morphological refinement and reconstruction The morphological refinement and reconstruction fusion enhances the edges, making them easier to distinguish from the background. This process ensures that the final edge map has more continuous and coherent edges, resulting in a more accurate depiction of the area to be detected. The first level of decomposition preserves clear texture in the image, while the second and third branch reconstruction captures the image's structure more accurately. The first branch fusion is accomplished using the direct weighted fusion method. We utilized the fused edges of the wavelet decomposition of the first branch as a mask to align with the wavelet reconstructed edges, which improved the detection accuracy. 
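Before turning to the refinement step, note that the anti-noise operator of Eq. (<ref>) maps directly onto standard gray-scale morphology routines. The following minimal sketch (Python with scipy.ndimage, using the weight μ=2 as in the text) is an illustrative rendering of E_d only, not of the full detection pipeline.

import numpy as np
from scipy import ndimage

mu = 2.0
lam1 = mu * np.array([[0.5, 1.0, 0.5], [1.0, 2.0, 1.0], [0.5, 1.0, 0.5]])
lam2 = mu * np.array([[0.0, 0.5, 0.0], [0.5, 0.5, 0.5], [0.0, 0.5, 0.0]])
lam3 = mu * np.array([[0.5, 0.0, 0.5], [0.0, 0.5, 0.0], [0.5, 0.0, 0.5]])

def multi_structure_edge(g):
    # t = (g dilated by lam1) eroded by lam2
    t = ndimage.grey_erosion(ndimage.grey_dilation(g, structure=lam1), structure=lam2)
    # E_d = opening(t, lam3) - erosion(t, lam3), cf. the expression for E_d above
    return ndimage.grey_opening(t, structure=lam3) - ndimage.grey_erosion(t, structure=lam3)

# Example usage on a low-frequency component cA (float gray-scale array):
# E_d = multi_structure_edge(cA.astype(float))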
The edge refinement output F_d can be expressed as follows: F_d=(g⊕λ_h)-(gΘλ_h) G_i obtained by morphological reconstruction is: G_i=min(E(g_i-1), E_m) where E_m denotes mask, E(·) indicates erosion, G_i denotes edge after morphological remodeling, and g_i-1 denotes markers for morphological refinement. Finally, we perform a context-weighted fusion of the mask E_m and the reconstructed edge G_i to obtain the final edge detection image E_u, which can be expressed as follows: E_u=α G_i+(1-α) E_m where α is 0.7. Similarly, the mask E_m is obtained by weighted fusion of E_h and E_l via Equation (16) as well. § EXPERIMENTS In order to demonstrate the effectiveness of the proposed spline wavelet and edge detection algorithm framework, we selected images from BSDS500 and MVTec datasets with labeled datasets for edge detection. We compared and analyzed the results with traditional detection operators such as Canny, Sobel, Prewitt, and the traditional wavelet transform modulus maxima (WTMM) while deploying different wavelets on the proposed algorithm, including Haar, DB2, coif1, rbio3.5, and sym4. §.§ Evaluation Metrics We consider MSE, PSNR, SSIM, and Entropy as metrics to evaluate the effectiveness of the proposed algorithm in detecting and handling noise with different wavelets. The images are converted to gray-scale before input to evaluate detection effectiveness better. §.§ Qualitative evaluation For evaluation, we select images from BSDS500 that represent various real-life environments. The results of edge detection are shown in Fig. <ref>. Traditional operators tend to detect a significant number of false, noisy, and discontinuous edges, which are easily affected by noise and less robust. On the other hand, the Canny operator faces difficulties in adapting to edge reconstruction and differentiating the main edges. For Sobel, Prewitt, and WTMM operators, Figure (c) clearly shows that there is a problem with complete structural representation, and the results of edge detection do not reflect the reconstruction effect of human visual perception well. The detected edges are unable to distinguish the main objects well, and the edges are rough with imprecise and unnatural edge textures as seen in Figure (a), (b), and (c). Our method is more textured and less sensitive to noise. It presents edge continuity and smoother lines and removes secondary edges from the background. By refining the local edge details, it is evident that the edges reconstructed by the BCSSW are finer and contain more natural details for the main object structures. To demonstrate the generalization ability of the proposed operator, we also perform tests on the MVTec dataset, which include industrial gray-scale maps and images with varying light intensities. As depicted in Fig. <ref>, the conventional operator exhibited high sensitivity to low-light and blurred industrial images, as well as noisy backgrounds. The Canny operator is not effective in detecting partial edges and produces a significant amount of noise. While Sobel, Prewitt, and WTMM reduce some of the extraneous noise, there are still many spurious edges in the interior of the part. The structural information is not accurately detected, and there are inhomogeneous chutes and broken edges. In contrast, the proposed method effectively smooths the background, suppresses extraneous noise, and extracts continuous and smooth edges. To further demonstrate the robustness of the detection operators to the light-generated noise, Fig. 
<ref> shows the results for industrial images under different lighting scenarios, where the Canny operator again exhibits light-sensitive behavior. Although the other detection operators show a certain robustness to background noise, the edges of the detected object's structure are poorly preserved, with discontinuities and varying degrees of background noise texture. The proposed algorithm, on the other hand, maximally suppresses the background noise generated by light intensity variations while extracting smooth and meaningful edge information. This further illustrates the generalization and robustness of the proposed algorithm in various detection scenarios. §.§ Quantitative Evaluation Because human-labeled edges provide only a subjective description of structure, we choose MSE, PSNR, and Entropy to quantitatively analyze the edge detection results and assess both the real edges and the perceived structural representations. A smaller MSE (or larger PSNR) indicates that the edge image contains less noise and the detection result retains more effective edge content. We aim for a reasonable entropy that reflects the intensity distribution and detailed representation of edges. Fig. <ref> shows the line-chart analysis of the different wavelet edge detection results. Table <ref> shows that BCSSW outperforms other existing wavelets in MSE and PSNR values and is significantly more robust to noise than the traditional detection operators. The entropy is highly competitive, and the plots in (a), (b), and (e) indicate that the extracted edge strength is smoother and more effective in capturing edge information compared to traditional operators. Table <ref> shows that the SSIM values of the wavelet-detected images are higher than those of other existing wavelets, indicating that BCSSW effectively preserves the true edge structure and can smooth the image while maintaining noise robustness. In addition, Table <ref> presents a comparison of entropy results for different detection operators on the MVTec dataset. The entropy values of Sobel, Prewitt, and WTMM for images (h), (i), and (j) are close to 0, indicating an unbalanced distribution of edge intensities that is insufficient to represent complex structural edges, whereas the proposed algorithm obtains the highest values, demonstrating its ability to generalize across different light intensities and noise environments. §.§ Ablation Study In this section, we conduct an ablation study of the structural uncertainty-aware feature selector and the fusion of multi-structural operators to demonstrate the effectiveness of the proposed algorithm. The details are given below: - Effectiveness of the structural uncertainty-aware feature selector (w/o III) - Effectiveness of multi-structure anti-noise operator design (w/o I) - Effectiveness of operator fusion (w/o I, II) Table <ref> shows that w/o I, II has the least decrease in metrics, while the SSIM values are the highest. In contrast, w/o III has significantly lower SSIM values, reflecting the ability of the structural uncertainty feature selection to better filter out meaningful structural edge representations.
The absence of separate multi-structural anti-noise operators results in the lowest MSE and PSNR rankings, demonstrating the contribution of multi-structural operator fusion to improving anti-noise capability and suppressing edge structure to a certain degree. Fig. <ref> shows that the detected edges experience edge detail loss and breakage when the multi-structural anti-noise operator and fusion are missing, while in the lack of structural uncertainty-aware feature selection, the edge results present too much detailed texture and even irrelevant structural features. Figure (d) reasonably maintains both the main structural edge information as well as complete and continuous edges. This well illustrates the importance of resultant uncertainty feature selection and multi-structure operator fusion for edge extraction, further confirming the conclusion of the quantitative analysis. § CONCLUSION In this study, we propose a new BCSSW edge detector (EDBSW) based on structural uncertainty perception and anti-noise operator fusion. We leverage the BCSSW to enhance edge detection by perceiving structural uncertainty. This approach guides the detection of modulus maxima, allowing for the extraction of edges with consistent information. By fusing anti-noise morphology operators, we effectively suppress noisy boundaries. The strategy is further refined through a reconstruction process, which enables the extraction of edges that are both resistant to noise and rich in detail. The experimental results further demonstrate the superiority of the proposed detection algorithm in noise robustness compared to other operators and different wavelets and verify the effectiveness in retaining the structural information of edges. Also, the proposed detection algorithm passes the validation of the ablation part. However, our detected edges still have low efficiency and limited generalization. Our future work will focus on constructing a generalized and concise spline wavelet and neural detection operators. 1 IEEEtran ref-journal2 Canny, J., ” A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell., pp. 679–698, 1986. ref-journal3 Prewitt, J.M., ”Object enhancement and extraction,” Picture Process. and Psychopictorics, vol. 10, pp. 15–19, 1970. ref-thesis1 Sobel, L., Camera Models and Machine Perception. Stanford University, Stanford, CA, USA, Electrical Engineering Department, 1970. ref-thesis2 Roberts, L.G., Machine Perception of Three-Dimensional Solids. Cambridge, MA, Optical and Electro-optical Information Processing, MIT Press, 1965. ref-journal18 Marr, D. and Hildreth, E., ”Theory of edge detection,” Proc. of the Royal Society of London. Series B, Biol. Sci., vol. 207, pp. 187–217, 1980. ref-proceeding Gao, W., Zhang, X., Yang, L., and Liu, H., ” An improved Sobel edge detection,” in Proc. Int. Conf. Comput. Sci. Inf. Technol. Process. (ICCSIT), Jul. 2010, pp. 67–71. ref-journal4 Tian, R., Sun, G., Liu, X., and Zheng, B., ”Sobel edge detection based on weighted nuclear norm minimization image denoising,” Electronics, vol. 10, pp. 655, 2021. ref-journal5 M. Mittal et al., ”An Efficient Edge Detection Approach to Provide Better Edge Connectivity for Image Analysis,” IEEE Access, vol. 7, pp. 33240–33255, 2019. ref-journal6 Isar, A., Nafornita, C., and Magu, G, ”Hyperanalytic wavelet-based robust edge detection,” Remote. Sens., vol. 13, pp. 2888, 2021. ref-proceeding2 Jamadandi, A. 
and Mudenagudi, U., ”Exemplar-based underwater image enhancement augmented by wavelet corrected transforms,” in Proc. CVPR workshops, Jun. 2019, pp. 11–17. ref-journal7 Pyka, K., ”Wavelet-based local contrast enhancement for satellite, aerial and close range images,” Remote. Sens., vol. 9, pp. 25, 2017. ref-journal8 Ding, W. and Li, Z., ”Research on adaptive modulus maxima selection of wavelet modulus maxima denoising,” J. Eng., vol. 2019, pp. 175–180, 2019. ref-journal15 Wang, X., ”Moving window-based double haar wavelet transform for image processing,” IEEE Trans. Image Process., vol. 15, pp. 2771–2779, 2006. ref-journal19 Wang, F.Y., Chen, M., and Fei, Q.S., ”The improved Method for Image edge detection based on wavelet Transform with Modulus Maxima,” Adv. Mat. Res., vol. 850, pp. 897–900, 2014. ref-journal20 Zhou, X., Wang, Y., Zhu, Q., Mao, J., Xiao, C., Lu, X., and Zhang, H., ”A surface defect detection framework for glass bottle bottom using visual attention model and wavelet transform,” IEEE Trans. Iudustr. Inform., vol. 16, pp. 2189–2201, 2019. ref-journal21 Fu, Z., Song, S., Wang, X., Li, J., and Tai, H.M., ”Imaging the topology of grounding grids based on wavelet edge detection,” IEEE Trans. Magn., vol. 54, pp. 1–8, 2018. ref-proceeding6 Cui, B. and Jiang, H., ”An image edge detection method based on haar wavelet transform,” in Proc. Int. Conf. Artif. Intell. Comput. Eng. (ICAICE), Nov. 2020, pp. 250–254. ref-journal9 Gu, Y, et al., ”An improved wavelet modulus algorithm based on fusion of light intensity and degree of polarization,” Appl. Sci., vol. 12, pp. 3558, 2022. ref-journal10 You, N., Han, L., Liu, Y., Zhu, D., Zuo, X., and Song, W., ”Research on Wavelet Transform Modulus Maxima and OTSU in Edge Detection,” Appl. Sci., vol. 13, pp. 4454, 2023. ref-journal11 Shui, P.L. and Wang, F.P., ”Anti-impulse-noise edge detection via anisotropic morphological directional derivatives.,” IEEE Trans. Image Process., vol. 26, pp. 4962–4977, 2017. ref-proceeding3 Yin, Z., Liu, Z., and Huang, M., ” An improved morphological edge detection algorithm,” in Proc. Int. Conf. Geol. Mapping Remote. Sens. (ICGMRS), Apr. 2022, pp. 144–149. ref-journal12 Dollár, P. and Zitnick, C.L., ” Fast edge detection using structured forests,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, pp. 1558–1570, 2014. ref-proceeding4 Hallman, S. and Fowlkes, C.C., ”Oriented edge forests for boundary detection,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1732–1740, 2015. ref-proceeding5 Zhou, C., Huang, Y., Pu, M.; Guan, Q., Huang, L., and Ling, H., ”The treasure beneath multiple annotations: An uncertainty-aware edge detector,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 15507–15517, 2023. ref-journal13 Chen, J. and Cai, Z., ” A new class of explicit interpolatory splines and related measurement estimation,” IEEE Trans. Signal Process., vol. 68, pp. 2799–2813, 2020. ref-book1 Chui, C.K., An introduction to wavelets. Cambridge, USA, Academic press, 1992. ref-journal14 Graps, A., ” A new class of explicit interpolatory splines and related measurement estimation,” IEEE Comput. Sci. Eng., vol. 2, pp. 50–61, 1995.
http://arxiv.org/abs/2406.09185v1
20240613144757
Thoracic Surgery Video Analysis for Surgical Phase Recognition
[ "Syed Abdul Mateen", "Niharika Malvia", "Syed Abdul Khader", "Danny Wang", "Deepti Srinivasan", "Chi-Fu Jeffrey Yang", "Lana Schumacher", "Sandeep Manjanna" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT This paper presents an approach for surgical phase recognition using video data, aiming to provide a comprehensive understanding of surgical procedures for automated workflow analysis. The advent of robotic surgery, digitized operating rooms, and the generation of vast amounts of data have opened doors for the application of machine learning and computer vision in the analysis of surgical videos. Among these advancements, Surgical Phase Recognition (SPR) stands out as an emerging technology that has the potential to recognize and assess the ongoing surgical scenario, summarize the surgery, evaluate surgical skills, offer surgical decision support, and facilitate medical training. In this paper, we analyse and evaluate both frame-based and video clipping-based phase recognition on a thoracic surgery dataset consisting of 11 classes of phases. Specifically, we utilize ImageNet ViT for image-based classification and VideoMAE as the baseline model for video-based classification. We show that Masked Video Distillation (MVD) exhibits superior performance, achieving a top-1 accuracy of 72.9%, compared to 52.31% achieved by ImageNet ViT. These findings underscore the efficacy of video-based classifiers over their image-based counterparts in surgical phase recognition tasks. § INTRODUCTION Robot-assisted surgery has become increasingly adopted in recent years and has expanded the scope of treatment options for patients <cit.>. The advent of robotic surgery, digitized operating rooms, and the generation of vast amounts of data have opened doors for the application of machine learning and computer vision in the analysis of surgical videos. Within this field of surgical data science, automating data analysis is crucial to simplify complexity and maximize data utility, enabling new opportunities. For example, automating intraoperative assistance and cognitive guidance for surgeons in real time offers potential benefits, as does providing automated and enhanced feedback for trainees. This is especially useful for fields like thoracic surgery, where several reported cases of intraoperative catastrophes still occur and require conversion to an open thoracotomy <cit.>. Moreover, autonomously analyzing surgical data has the potential to optimize entire surgical workflows. Among these advancements, Surgical Phase Recognition (SPR) stands out as an emerging technology that has the potential to recognize and assess the ongoing surgical scenario, summarize the surgery, evaluate surgical skills, offer surgical decision support, and facilitate medical training. In this paper, we analyse and evaluate both frame-based and video clipping-based phase recognition on a thoracic surgery dataset consisting of 11 classes of phases. Specifically, we apply ImageNet ViT for image-based classification, and VideoMAE and Masked Video Distillation (MVD) for video-based classification. We show that video-based SPR performs significantly better than image-based SPR, achieving a top-1 accuracy of 72.9%, compared to 52.31% achieved by ImageNet ViT.
§ METHODOLOGY We employ three models for SPR: ImageNet-pretrained Vision Transformer (ViT) <cit.>, Video Masked Autoencoder (VideoMAE) <cit.>, and Masked Video Distillation (MVD) <cit.>. While ImageNet ViT is an image-based model, the latter two are video-based models specifically designed for video understanding tasks. For VideoMAE and MVD, we utilized a ViT-L backbone model that was pretrained on the Kinetics-400 dataset <cit.>. This dataset is a large-scale video dataset commonly used for pretraining video models. While both models employ masked feature modeling and utilize an encoder-decoder transformer architecture, MVD introduces an additional feature. This feature involves the use of transfer learning to transfer knowledge from pretrained image and video teacher models to a designated student model, referred to as the student encoder. By leveraging the knowledge learned by these models, we aim to enhance the performance of our models on the surgical phase classification task. To adapt the pretrained models to our specific task, we fine-tuned VideoMAE and MVD for 100 epochs by monitoring the Top-1 Accuracy metric to compare the performance of checkpoints between epochs. This facilitated the selection of the best-performing model for each architecture. Fig. <ref> illustrates an overview of the architecture used. To evaluate the generalization capability of our models, we split the 17 patient dataset into two subsets: a training+validation set with 13 cases and a test set with 4 cases. Importantly, the surgical cases included in the test set were completely unseen during the training process, ensuring an unbiased evaluation of the models' performance on new data. To further enhance the robustness of our models, we implemented an overlapping split strategy for the 13 cases in the training+validation set. Comparing the performance of ImageNet ViT, VideoMAE, and MVD on this challenging dataset, we aim to identify the most effective approach for SPR. The combination of pretrained models, fine-tuning, and a resiliently designed train-test split enables us to thoroughly evaluate the capabilities of these models and their potential for real-world application in surgical video analysis. § EXPERIMENTS §.§ Dataset The Surgical Phase dataset used in this work consists of 17 videos averaging 2.18 hours each and 11 classes of surgical phases, sourced from diverse patients at Massachusetts General Hospital(MGH). Fig. <ref> shows the frequency distribution of the class labels. To prepare the data for our experiments, we converted the video dataset into smaller clips of 10 seconds from a single surgical phase. We employ the sliding window method with a stride of 10 seconds to generate these clips, thereby guaranteeing that no data is repeated or overlapped. §.§ Metrics To evaluate the performance of our models, we employed two widely used metrics: Top-1 Accuracy and Top-5 Accuracy. Top-1 Accuracy measures the percentage of video clips for which the model correctly predicts the surgical phase as its top prediction. On the other hand, Top-5 Accuracy considers a prediction as correct if the ground truth surgical phase is among the model's top 5 predictions. These metrics provide a comprehensive assessment of the models' ability to accurately classify surgical phases in video clips. §.§ Results Table <ref> provides the Top-1 and Top-5 Accuracy for all the models that we experimented. 
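For reference, the two metrics can be computed from per-clip class scores as in the minimal sketch below (Python/NumPy; the variable names and the random example data are illustrative only and not taken from the evaluation code used in this work).

import numpy as np

def top_k_accuracy(scores, labels, k):
    # scores: (num_clips, num_classes); labels: (num_clips,) ground-truth phase indices
    top_k = np.argsort(-scores, axis=1)[:, :k]          # indices of the k highest-scoring classes
    hits = (top_k == labels[:, None]).any(axis=1)       # is the gold phase among them?
    return hits.mean()

rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 11))                     # 11 surgical phase classes
labels = rng.integers(0, 11, size=200)
print(top_k_accuracy(scores, labels, 1), top_k_accuracy(scores, labels, 5))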
MVD achieves a Top-1 Accuracy of 72.930%, surpassing ImageNet ViT by a significant margin of 20.62 percentage points and VideoMAE by 4.32 percentage points. Similarly, MVD attains a Top-5 Accuracy of 94.144%, demonstrating its superiority over ImageNet ViT and VideoMAE by 5.684 and 2.074 percentage points, respectively. The superior performance of MVD can be attributed to its ability to effectively capture temporal dependencies and learn discriminative features from video data. By leveraging the power of masked video modeling and distillation techniques, MVD achieved the highest Top-1 and Top-5 Accuracy, which demonstrates its effectiveness in accurately identifying surgical phases. § CONCLUSIONS AND FUTURE WORK In this work, we presented an analysis of thoracic surgery video data to achieve Surgical Phase Recognition (SPR) using a fine-tuned MVD model. MVD outperformed other models, making it a promising approach for SPR. In the near future, we plan to generalize this model to other surgical procedures and use the predicted phases to summarize a surgery.
http://arxiv.org/abs/2406.09273v1
20240613161110
QCD constraints on isospin-dense matter and the nuclear equation of state
[ "Ryan Abbott", "William Detmold", "Marc Illa", "Assumpta Parreño", "Robert J. Perry", "Fernando Romero-López", "Phiala E. Shanahan", "Michael L. Wagman" ]
hep-lat
[ "hep-lat", "nucl-th" ]
InQubator for Quantum Simulation (IQuS), Department of Physics, University of Washington, Seattle, WA 98195, USA Departament de Física Quàntica i Astrofísica and Institut de Ciències del Cosmos, Universitat de Barcelona, Martí i Franquès 1, E08028, Spain Departament de Física Quàntica i Astrofísica and Institut de Ciències del Cosmos, Universitat de Barcelona, Martí i Franquès 1, E08028, Spain Fermi National Accelerator Laboratory, Batavia, IL 60510, USA NPLQCD collaboration MIT-CTP/5729 § ABSTRACT Understanding the behavior of dense hadronic matter is a central goal in nuclear physics as it governs the nature and dynamics of astrophysical objects such as supernovae and neutron stars. Because of the non-perturbative nature of quantum chromodynamics (QCD), little is known rigorously about hadronic matter in these extreme conditions. Here, lattice QCD calculations are used to compute thermodynamic quantities and the equation of state of QCD over a wide range of isospin chemical potentials. Agreement is seen with chiral perturbation theory predictions when the chemical potential is small. Comparison to perturbative QCD calculations at large chemical potential allows for an estimate of the gap in the superconducting phase, and this quantity is seen to agree with perturbative determinations. Since the partition function for an isospin chemical potential, μ_I, bounds the partition function for a baryon chemical potential μ_B=3/2μ_I, these calculations also provide rigorous non-perturbative QCD bounds on the nuclear equation of state over a wide range of baryon densities for the first time. QCD constraints on isospin-dense matter and the nuclear equation of state Michael L. Wagman June 17, 2024 ========================================================================= The determination of the internal structure of neutron stars presents a long-standing and important challenge for nuclear theory. Since neutron stars were first predicted in the 1930s and observed 30 years later, many models for the structure of their interiors have been proposed, including various phases of nuclear matter, mesonic condensates and hyperonic matter, and deconfined quark cores <cit.>. As observational data and terrestrial probes of the relevant nuclear densities are not sufficiently constraining, most of these possibilities for the neutron star equation of state (EoS) remain viable. From a theoretical perspective, it is expected that neutron star interiors can be described by the Standard Model of particle physics, however in a regime where the strong interactions are non-perturbative. The numerical technique of lattice quantum chromodynamics (LQCD) is applicable at such large couplings, but is beset by a notorious sign problem at nonzero baryon chemical potential <cit.>, prohibiting its direct application. Consequently, theoretical approaches to the nuclear EoS are based on models and interpolations between phenomenological constraints from nuclear structure and perturbative QCD (pQCD) calculations that are valid at asymptotically large chemical potentials (see for example, Refs. <cit.>). In light of this, any rigorous information that can impact such analyses is of paramount importance. In this work, the first non-perturbative QCD constraint on the nuclear EoS with complete quantification of systematic uncertainties is presented. These calculations build upon the proof-of-principle, single lattice spacing results of Ref. 
<cit.> with improved methodology, increased statistical precision, an extrapolation to the continuum limit, and an interpolation to the physical quark masses, enabling a systematically-controlled result to be achieved for the first time. The pressure and other thermodynamic properties of low-temperature isospin-dense matter are determined over a wide range of densities and chemical potentials, spanning all scales from hadronic to perturbatively-coupled. At small values of the isospin chemical potential, μ_I, the results are found to agree with chiral perturbation theory (χPT) <cit.> at next-to-leading order (NLO) <cit.>. At large μ_I, the results are seen to agree with pQCD with pairing contributions <cit.>. The comparison of next-to-next-to-leading-order (NNLO) pQCD predictions <cit.> for the pressure with the continuum limit of the LQCD calculations provides a determination of the superconducting gap as a function of μ_I. This is seen to agree with the leading-order perturbative calculation of the pairing gap <cit.>, but is more precise. Additionally, a robust conclusion from this study is that the speed of sound in isospin-dense matter significantly exceeds the conformal limit of c_s^2/c^2≤1/3 over a wide range of μ_I. A Bayesian model mixing approach that combines χPT, LQCD and pQCD information provides a determination of the zero-temperature EoS for isospin-dense QCD matter valid at all values of the isospin chemical potential for the first time. From simple path-integral relations <cit.>, the determination of the pressure in isospin-dense matter provides a non-perturbative, model-independent bound on the pressure of isospin-symmetric QCD matter at nonzero baryon chemical potential and hence on the nuclear EoS. The current results therefore provide a systematically-controlled QCD bound at all densities, and the impact on neutron star phenomenology is briefly discussed. Thermodynamic relations: Thermodynamic quantities are accessed in this work by building an approximation to the grand canonical partition function, valid at low-temperature. The grand canonical partition function is defined at a temperature, T=1/β, and isospin chemical potential, μ_I, by Z(β, μ_I) = ∑_s e^-β (E_s - μ_I I_z(s)), where the sum is over all states, s, and E_s and I_z(s) correspond to the energy and z-component of isospin of a given state, respectively. Since states of different I_z but the same I are approximately degenerate, contributions from states with I_z< I are suppressed by O(e^-βμ_I) relative to those with I_z=I and can therefore be neglected. Additionally, only the ground state for each isospin contributes at low temperature. The summation can therefore be approximated in terms of these I_z=I ground states, which can be labeled by their isospin charge n=I=I_z, and truncated at some n_ max giving Z(β→∞, μ_I) ≃∑_n=0^n_ max e^-β (E_n - μ_I n). E_0=0 is chosen, and this approximation is valid for values of μ_I such that the truncation at n_ max does not affect the result significantly. For an observable 𝒪 that only depends on the energy and particle number of the system, the thermodynamic expectation value of 𝒪 can be computed as ⟨𝒪(E,n)|_⟩β,μ_I = 1/Z(β, μ_I)∑_n 𝒪(E_n,n) e^-β (E_n - μ_I n) . The energy density can be computed using the quantity E_n/V, while the number density can be computed from the expectation value of n/V, where V is the volume of the system. 
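A numerically stable implementation of this truncated sum and of Eq. (<ref>) is straightforward; the sketch below (Python/NumPy) uses placeholder energies E_n rather than the lattice determinations, with β and μ_I expressed in the same units as E_n.

import numpy as np
from scipy.special import logsumexp

def thermal_expectation(obs, energies, beta, mu_I):
    # <O>_{beta,mu_I} = sum_n O(E_n, n) e^{-beta (E_n - mu_I n)} / Z, with E_0 = 0
    n = np.arange(len(energies))
    log_w = -beta * (energies - mu_I * n)
    weights = np.exp(log_w - logsumexp(log_w))   # normalized Boltzmann weights, no overflow
    return np.sum(obs(energies, n) * weights)

# Placeholder spectrum: E_0 = 0 and slowly growing increments (illustrative only, not lattice data).
E = np.concatenate(([0.0], np.cumsum(0.14 + 0.001 * np.arange(63))))
beta, mu_I, V = 50.0, 0.15, 1.0
number_density = thermal_expectation(lambda E_n, n: n / V, E, beta, mu_I)    # <n>/V
energy_density = thermal_expectation(lambda E_n, n: E_n / V, E, beta, mu_I)  # <E>/V

Truncation at n_max is reliable as long as the weights for the largest charges included are negligible, which is the same condition on μ_I stated above.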
Derivatives of observables can also be computed using ∂/∂μ_I⟨𝒪|_⟩β,μ_I = β( ⟨n 𝒪|_⟩β,μ_I - ⟨n|_⟩β,μ_I⟨𝒪|_⟩β,μ_I), which can be obtained directly by differentiating Eq. (<ref>). This leads to the following expressions for the pressure, P(β,μ_I)=∫_0^μ_I⟨ n⟩_β,μ/Vd μ, and speed of sound defined by the isentropic derivative of the pressure with respect to the energy density, ϵ, 1/c_s^2 = ∂ϵ/∂ P = 1/⟨n|_⟩β,μ_I∂/∂μ_I⟨E|_⟩β,μ_I. Previous work <cit.> studied isospin-dense matter through a canonical partition function approach by using the thermodynamic relation μ_I = dE_n/dn to determine the isospin chemical potential from the extracted energies. Other studies have added the isospin chemical potential directly to the QCD action <cit.>. The primary advantage of the method used here in comparison to the canonical approach is that μ_I enters as an input to the calculation of thermodynamic quantities rather than being derived from the isospin charge of the LQCD data and therefore is not subject to statistical and systematic uncertainties. Color-superconducting gap: At large isospin chemical potential, asymptotic freedom guarantees the validity of pQCD and the resulting prediction of a color-singlet superconducting state at zero temperature <cit.>. In this state, Cooper-pairs of quark–anti-quark fields condense, leading to a superconducting gap with order parameter ⟨d_aγ_5u_b⟩ = δ_abΔ, where a and b are color indices. The gap Δ can be computed perturbatively <cit.>, with the next-to-leading order result given by Δ=b̃μ_I exp(-π^2+4/16) exp(-3 π^2/2 g), where b̃=512 π^4 g^-5 and g = √(4 πα_s(μ_I)) is the strong coupling at the scale μ_I. Notably, the prefactor of 1/g in the exponent of Eq. (<ref>) is smaller than the analogous coefficient in the baryon-density case by a factor of 1/√(2) <cit.>, leading to an exponential enhancement of the gap and its effects in isospin-dense QCD. If pQCD is reliable for a given μ_I and μ_B=3μ_I/2, then the isospin-dense gap bounds the baryonic gap, in which there is significant phenomenological interest <cit.>. The nontrivial background in the presence of the gap induces a change in the pressure of the system. This change can be computed perturbatively, as has been done at NLO in Ref. <cit.> with the result δ P ≡ P(Δ)-P(Δ=0) =N_c/2 π^2μ_I^2 Δ^2(1+g/6). This difference allows for an indirect extraction of the gap by comparing the lattice QCD pressure with the pressure derived in pQCD without pairing. LQCD calculations: Following the methods and analysis techniques developed in Ref. <cit.>, the energies of systems of isospin charge n∈{1,…,6144} are determined from two-point correlation functions C_n(t) = ⟨(∑_ xπ^-( x, 0))^n ∏_i=1^n π^+( y_i, t) ⟩, calculated on four ensembles of gauge field configurations whose parameters are shown in Table <ref>. Here, π^-( x,t)=π^+( x,t)^† = -d( x,t) γ_5 u( x,t) is an interpolating operator built from u and d quark fields that creates states with the quantum numbers of the π^-. The correlation functions are computed using the symmetric polynomial method of Ref. <cit.> from sparsened <cit.> quark propagators computed using a grid of 512 source locations on a single timeslice of each configuration. 
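Once the energies E_n, and hence the expectation values above, are in hand, the pressure integral and the speed of sound reduce to elementary numerics. A minimal sketch with placeholder grids (Python/NumPy; the grid of chemical potentials is assumed to start at μ_I = 0) is the following.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def pressure(mu_grid, number_density_grid):
    # P(mu_I) = integral from 0 to mu_I of <n>/V dmu, evaluated by the trapezoidal rule
    return cumulative_trapezoid(number_density_grid, mu_grid, initial=0.0)

def speed_of_sound_squared(mu_grid, number_density_grid, energy_grid):
    # 1/c_s^2 = (1/<n>) d<E>/dmu_I  =>  c_s^2 = <n> / (d<E>/dmu_I)
    return number_density_grid / np.gradient(energy_grid, mu_grid)

# mu_grid, number_density_grid, and energy_grid would be filled with the expectation
# values computed ensemble by ensemble, as described below.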
The relevant ground-state energies, E_n, are determined from an analysis of the t-dependence of the effective energy functions a E_eff^(n)(t) = logC_n(t)/C_n(t - 1) = ϑ_n(t) - ϑ_n(t-1) + σ_n^2(t)/2 - σ_n^2(t-1)/2 , where a is the lattice spacing and ϑ_n(t) and σ_n(t) are the mean and standard deviation of log C_n(t) evaluated over bootstrap resamplings, respectively <cit.> (N_b=2000 bootstrap resamplings are used on each ensemble to assess the statistical uncertainties and address correlations). As in Ref. <cit.>, on each bootstrap the energy is given by the value of the effective mass on a randomly chosen time inside the effective mass plateau region. Given the energies determined on each ensemble for systems of isospin charge n∈{1,…,6144}, Eqs. (<ref>) and (<ref>) are used to determine the pressure, the energy density, and speed of sound. The action used in these calculations is perturbatively improved, so discretization effects are O(a^2, g^2 a). The mass dependence of quantities evaluated over the range of quark masses used in the calculations is expected to be described linearly in m_ud∼ m_π^2. Each quantity X∈{P/P_ SB,ϵ/ϵ_ SB,c_s^2} (where P_ SB = ϵ_SB / 3 = μ_I^4 / 32 π^2 is the pressure of a Stefan-Boltzmann gas) is fit with forms including arbitrary μ_I dependence and terms O(a^2, a^2μ_I, a^2μ_I^2,(m_π^2-m_π^2)), where m_π = 139 MeV, with coefficients independent of μ_I. The systematic uncertainty from the extrapolation is assessed by combining fits with all possible subsets of the terms above through model averaging <cit.>. The systematic uncertainty and the statistical uncertainty are combined under the bootstrap procedure. The calculated pressure is shown in Fig. <ref> for each lattice ensemble as well as for the continuum-limit, physical-quark-mass interpolation. The LQCD results for the different volumes, lattice spacings and quark masses used in the calculations are seen to agree with each other within uncertainties and also with the physical mass, continuum limit extraction. The LQCD pressure agrees with NLO χPT at small values of the chemical potential and can be compared with NNLO pQCD <cit.> at large values of the chemical potential. The mild tension seen between LQCD and pQCD for μ_I∈[1500,2250] MeV potentially indicates the presence of a superconducting gap <cit.> (it could alternatively be an indication of the breakdown of pQCD, although NLO and NNLO pQCD results are in agreement over this range). The corresponding speed of sound is seen to exceed the conformal limit of c_s^2/c^2=1/3 over a wide range of the isospin chemical potential (see Fig. <ref> below). While this behavior was seen in Refs. <cit.> (and also in two-color QCD <cit.>), the present results confirm that this is not a lattice artifact and that such behavior is possible in strongly interacting QCD matter. This suggests that the assumption that the speed of sound remains below this value in baryonic matter is questionable. Given the LQCD calculation of the pressure, a determination of the superconducting gap can be made by subtracting the pQCD calculation of the pressure in the absence of the gap. In the range of chemical potentials where pQCD is a controlled expansion, this determines the gap, accurate to the same order as the perturbative subtraction. Figure <ref> shows the extracted gap found using the NNLO pQCD pressure subtraction as well as a comparison to the pQCD gap in Eq. (<ref>) evaluated at scales Λ̅=μ_I×{0.5,1.0,2.0} as a guide to uncertainty. 
As can be seen, the gap extracted from the LQCD calculations is in agreement with the perturbative gap for μ_I∈[1500,3250] MeV but is considerably more precisely determined than the uncertainty from perturbative scale variation. Since there is agreement with the perturbative estimate, the gap is also most likely larger than the corresponding gap for baryonic matter. Equation of state for isospin-dense mater: The continuum-limit lattice QCD calculations presented above span isospin chemical potentials from just above the pion mass to values where pQCD appears to converge. Consequently, by combining the LQCD results with χPT and pQCD, the zero temperature EoS of isospin-dense matter can be described for all μ_I with uncertainties quantified using Bayesian inference. The functional dependence of each overlapping theoretical constraint on μ_I is modelled by a correlated Gaussian distribution. The ensemble of constraints is combined via a Gaussian Process (GP), following similar work for the nuclear EoS <cit.>. Theoretical uncertainties of χPT are estimated from the difference between the NLO and LO results, and uncertainties in pQCD are assessed from scale variation over Λ̅∈μ_I×[0.5,2.0]. Figure <ref> shows the GP-model results for the speed of sound in comparison to the three theoretical inputs. With a complete quantification of the isospin-dense equation of state, phenomenological implications such as the existence of pion stars <cit.> and the isospin effects that distinguishes pure neutron matter from symmetric nuclear matter <cit.> can be further investigated. Constraining the nuclear equation of state: The partition function of two-flavor QCD with an isospin chemical potential, μ_I, can be written in terms of the path integral Z_I(β,μ_I) = ∫_β[d A]det𝒟(-μ_I/2)det𝒟(μ_I/2) e^-S_G = ∫_β[d A]|det𝒟(μ_I/2)|^2 e^-S_G, where A is the gluon field, 𝒟(μ) ≡ D/+m-μ_qγ_0 is the Dirac operator with chemical potential μ_q, S_G is the gauge action, and ∫_β[dA] indicates integration over gauge fields with period β in the temporal direction. As first shown in Refs. <cit.>, this partition function bounds the partition function of two-flavor QCD with equal chemical potentials for u and d quarks Z_B(β,μ_B)=∫_β[d A] Re[det𝒟(μ_B/N_c)]^2 e^-S_G as Z_B(β,μ_B) ≤ Z_I(β,μ_I=2 μ_B/N_c). By the monotonicity of the logarithm, the above inequality directly translates into an inequality between the pressures of the two media as a function of the energy density. Consequently, the isospin-dense EoS bounds the nuclear EoS for symmetric nuclear matter. At large values of the quark chemical potentials, where pQCD is valid, this bound becomes tight as differences between the partition functions enter only at O(α_s^k) for k≥3 <cit.>. This bound was explored in Ref. <cit.> based on the previous lattice QCD results <cit.> at a single lattice-spacing and unphysical quark masses. Here, Fig. <ref> presents updated bounds based on the continuum limit lattice QCD results at the physical quark masses, χPT, and perturbative QCD through the GP-model. While the bounds from isospin-dense matter do not significantly constrain phenomenological nuclear equations of state within the uncertainties that are typically presented <cit.>, the bounds are independent of modeling uncertainties that enter the nuclear EoS in the regions that are unconstrained by nuclear structure or pQCD calculations. 
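In practice, the bound is read off by evaluating the isospin-dense pressure at the mapped chemical potential. A minimal sketch (Python/NumPy, with placeholder grids standing in for the GP-model output described above) is:

import numpy as np

def nuclear_pressure_bound(mu_B, mu_I_grid, pressure_I_grid, N_c=3):
    # Z_B(beta, mu_B) <= Z_I(beta, mu_I = 2 mu_B / N_c) implies, at fixed T and V,
    # P_B(mu_B) <= P_I(2 mu_B / N_c); interpolate the isospin-dense pressure there.
    return np.interp(2.0 * mu_B / N_c, mu_I_grid, pressure_I_grid)

# mu_I_grid and pressure_I_grid would come from the combined chiPT + LQCD + pQCD
# determination of the isospin-dense equation of state (placeholder arrays here).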
Summary: In this letter, a determination of the equation of state of isospin-dense matter for the complete range of isospin chemical potential at zero temperature is presented for the first time. To achieve this, continuum limit LQCD calculations are combined with pQCD calculations and χPT through a model-mixing approach in overlapping regions of isospin chemical potential. Comparison to pQCD enables a determination of the superconducting gap, and QCD inequalities translate the isospin-dense EoS into rigorous bounds on the nuclear EoS relevant for astrophysical environments. We are grateful to Yuki Fujimoto and Sanjay Reddy for discussions. The ensembles used in this work were generated through the combined efforts of the JLab, William and Mary, Los Alamos, and MIT groups. We particularly thank Balint Joó for assistance with the generation of the gauge configurations and quark propagators used in this work. The calculations were performed using an allocation from the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program using the resources of the Oak Ridge Leadership Computing Facility located in the Oak Ridge National Laboratory, which is supported by the Office of Science of the Department of Energy under Contract DE-AC05-00OR22725. This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. We acknowledge USQCD computing allocations and PRACE for awarding us access to Marconi100 at CINECA, Italy. This work is supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/) and by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under grant Contract Number DE-SC0011090. RA, WD and PES are also supported by the U.S. Department of Energy SciDAC5 award DE-SC0023116. FRL acknowledges financial support from the Mauricio and Carlota Botton Fellowship. MI is partially supported by the Quantum Science Center (QSC), a National Quantum Information Science Research Center of the U.S. Department of Energy. PES is also supported by the U.S. DOE Early Career Award DE-SC0021006. AP and RP acknowledge support from Grant CEX2019-000918-M and the project PID2020-118758GB-I00, financed by the Spanish MCIN/ AEI/10.13039/501100011033/, and from the EU STRONG-2020 project under the program H2020-INFRAIA-2018-1 grant agreement no. 824093. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This work made use of Chroma <cit.>, QDPJIT <cit.>, QUDA <cit.>, JAX <cit.>, NumPy <cit.>, SciPy <cit.>, and matplotlib <cit.>. utphys
http://arxiv.org/abs/2406.08587v1
20240612184728
CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery
[ "Xiaoshuai Song", "Muxi Diao", "Guanting Dong", "Zhengyang Wang", "Yujia Fu", "Runqi Qiao", "Zhexu Wang", "Dayuan Fu", "Huangxuan Wu", "Bin Liang", "Weihao Zeng", "Yejie Wang", "Zhuoma GongQue", "Jianing Yu", "Qiuna Tan", "Weiran Xu" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
§ ABSTRACT Computer Science (CS) stands as a testament to the intricacies of human intelligence, profoundly advancing the development of artificial intelligence and modern society. However, the current community of large language models (LLMs) overly focuses on benchmarks for analyzing specific foundational skills (e.g., mathematics and code generation), neglecting an all-round evaluation of the computer science field. To bridge this gap, we introduce CS-Bench, the first bilingual (Chinese-English) benchmark dedicated to evaluating the performance of LLMs in computer science. CS-Bench comprises approximately 5K meticulously curated test samples, covering 26 subfields across 4 key areas of computer science, encompassing various task forms and divisions of knowledge and reasoning. Utilizing CS-Bench, we conduct a comprehensive evaluation of over 30 mainstream LLMs, revealing the relationship between CS performance and model scales. We also quantitatively analyze the reasons for failures in existing LLMs and highlight directions for improvements, including knowledge supplementation and CS-specific reasoning. Further cross-capability experiments show a high correlation between LLMs' capabilities in computer science and their abilities in mathematics and coding. Moreover, expert LLMs specialized in mathematics and coding also demonstrate strong performances in several CS subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM applications in the CS field and paving new avenues in assessing LLMs' diverse reasoning capabilities. The CS-Bench data and evaluation code are available at <https://github.com/csbench/csbench>. § INTRODUCTION Serving as the cornerstone of the modern information revolution, the significance of computer science (CS) extends from the early days of electronic computers to today's advancements in artificial intelligence (AI) <cit.>. As a new milestone in AI, large language models (LLMs) <cit.> represented by ChatGPT <cit.> and GPT-4 <cit.> are not limited to the natural language processing (NLP) community, showing vast potential in fields including education, industry, and science <cit.>. However, enabling LLMs to effectively utilize computer science knowledge and serve humanity more efficiently is one of the key challenges on the path to the future intelligent era <cit.>. Understanding the performance of LLMs in computer science is fundamental to the research and application of LLMs within the field. Despite studies like MMLU and C-Eval <cit.> covering a wide range of fields including CS, their broad scope implies that CS is merely a component within the multiple categories of science and engineering, overlooking the importance of thoroughly evaluating the CS field. Moreover, such evaluation results can further guide the development of LLMs, offering practical insights to advance the corresponding capabilities. Recently, a series of studies have been devoted to actively assessing and analyzing the capabilities of LLMs in mathematics, coding, and logical reasoning <cit.>. Unfortunately, efforts on cross-capability evaluation of LLMs remain quite scarce.
Considering the intersection of computer science with coding, mathematics, and reasoning abilities, we have grounds to believe that cross-capability research and analysis in CS can effectively propel the comprehensive development of the LLM community. Here, we are particularly interested in two research questions for evaluating LLMs' proficiency in computer science field: RQ1: How do LLMs perform in the field of computer science and what are the challenges and potential directions for improvement? RQ2: What are the relationship between the abilities of LLMs in computer science, mathematics, and code programming? As the bedrock for exploration, we first propose CS-Bench, the first benchmark dedicated to evaluating the performance of LLMs in the field of computer science. CS-Bench features high-quality, diverse task forms, varying capacities, and bilingual evaluation. Firstly, CS-Bench comprises approximately 5,000 carefully curated test items spanning 26 sections across 4 key CS domains. Diverging from conventional benchmarks consisting solely of multiple-choice (MC) questions <cit.>, CS-Bench includes 4 tasks: multiple-choice, assertion, fill-in-the-blank (FITB), and open-ended, to better simulate real-world scenarios and assess the robustness of LLMs to different task formats. In addition to knowledge-type questions assessing LLMs' mastery of CS knowledge, reasoning-type questions further evaluate LLMs' ability to apply CS knowledge for reasoning. Lastly, by supporting bilingual evaluation in Chinese and English, CS-Bench enables the appraisal of LLMs' adeptness in addressing CS challenges across different language contexts. In response to RQ1, we evaluate over 30 mainstream LLMs on CS-Bench. Our main findings are: (1) CS-Bench effectively differentiates the capabilities of LLMs in the CS field while also posing significant challenges to the best-performing GPT-4/ GPT-4o. (2) LLMs exhibit a consistent logarithmic growth pattern in scale and a linear growth pattern in scores on the CS-Bench. By establishing the scale-score fitting function, smaller models can be used to predict and guide the development of larger-scale models. (3) Further error type analysis indicates that the primary reason for the limited performance of LLMs is the lack of domain knowledge, and the CS-specific reasoning is difficult to achieve merely by enhancing general reasoning abilities, necessitating targeted reinforcement. In response to RQ2, we perform a detailed analysis of the relationship of General LLMs' ability in three domains: mathematics, coding, and computer science, as well as the performance of code- and math-specific expert LLMs on CS-Bench. We observe consistent trends in the overall performance of the general LLMs across CS-Bench and scores in benchmarks related to mathematics and coding, indicating a strong correlation between LLM's computer science proficiency and its mathematical and programming abilities. Furthermore, despite a decline in general capabilities, some expert LLMs still exhibit improvements in certain areas of CS, such as data structures and algorithms, with more pronounced knowledge and reasoning capabilities evident in supplementary smaller-scale models. To summarize, our contributions are as follows: * We introduce CS-Bench, the first benchmark dedicated to evaluate the performance of LLMs in the field of computer science. CS-Bench supports both Chinese and English, covers four key areas with 26 subfields, and includes a diverse range of task formats. 
* Utilizing CS-Bench, we conduct a comprehensive evaluation of mainstream LLMs, revealing the relationship between CS performance and model scales. We also quantitatively analyze the reasons for failures in existing LLMs and highlight directions for improvement.
* We conduct exploratory experiments on LLMs' cross-ability and find a strong relationship between their CS proficiency and their mathematical and programming abilities. Moreover, the expertise in mathematics and programming of expert LLMs can improve performance in specific CS subfields.

§ CS-BENCH

§.§ Design Principle

The objective of CS-Bench is to robustly assess the knowledge and reasoning capabilities of LLMs in different linguistic contexts within the field of computer science. To this end, our benchmark adheres to the following guidelines: (1) Coverage of key domains: it covers key areas of CS with finer subfields for specificity. (2) Diverse task forms: questions vary in format to simulate diverse real-world user queries. (3) CS-specific reasoning: it evaluates CS logical and arithmetic reasoning in addition to CS knowledge. (4) Multilinguality support: it assesses LLMs' performance in different language environments. Based on these criteria, CS-Bench focuses on bilingual evaluation in Chinese and English, covering four domains: Data Structure and Algorithm (DSA), Computer Organization (CO), Computer Network (CN), and Operating System (OS). Twenty-six fine-grained subfields, diverse task forms, and divisions of knowledge and reasoning are further developed to enrich the dimensions of assessment and simulate real-world scenarios.

§.§ Data Collection

Comparison of perplexity (PPL) across evaluation datasets. The PPL of English and Chinese datasets is calculated on Llama2-7B-base and Qwen1.5-7B-base, respectively. "MC" denotes multiple-choice, and "ALL" denotes all tasks.

English Dataset          PPL    | Chinese Dataset    PPL
TruthfulQA (MC) <cit.>   7.73   | C-Eval <cit.>      11.47
MMLU <cit.>              9.54   | CMMLU <cit.>       13.62
CS-Bench (MC)            11.86  | CS-Bench (MC)      13.31
CS-Bench (ALL)           13.3   | CS-Bench (ALL)     16.95

Data Sources. Diverse data sources are key to achieving the sample diversity of CS-Bench. Our raw data originates from three sources: (1) Computer science-related questions obtained from publicly available online channels, such as professional exams and practice tests[e.g., <https://github.com/CodePanda66/CSPostgraduate-408>]. (2) Knowledge-type questions obtained through the initial manual extraction and subsequent adaptation of blog articles from various computer-related websites[e.g., <https://www.wikipedia.org/>, <https://www.cnblogs.com/>, <https://www.csdn.net/>]. (3) Questions constructed from teaching materials and examination papers authorized by the authors' institutions. The latter two categories constitute the vast majority (over 70%) of the data, and these data are not directly exposed on the internet, effectively reducing the likelihood of LLMs encountering these questions during pre-training. We compare the perplexity <cit.> of models on CS-Bench and several prominent evaluation datasets in Table <ref>. In both English and Chinese, the perplexity of CS-Bench is comparable to or even higher than that of other datasets, further indicating the high quality of CS-Bench samples and the rarity of data leakage instances.

Data Processing. The data processing relies on a team composed of five members, each holding a bachelor's degree in computer science and receiving appropriate compensation.
Initially, we parse questions and answers for each sample from the data sources either automatically or manually. Subsequently, we manually label questions with knowledge-type or reasoning-type tags depending on whether in-depth reasoning and calculation are required. For reasoning-type questions, we attempt to collect explanations from the data sources whenever possible; otherwise, we handle them through cross-annotation and verification among team members. We first construct Chinese data, then translate it into English using GPT-4, supplemented by manual checks, to create English data. Finally, we conduct thorough manual checks on the entire dataset to ensure quality. As this benchmark pertains to objective knowledge and reasoning in the field of computer science, the annotation content is not influenced by regional or cultural differences among annotators. Statistics. CS-Bench is an evaluation benchmark supporting bilingual assessment, encompassing a total of 26 subfields across 4 domains, with a cumulative total of 4838 samples. These samples encompass various task formats including multiple-choice, assertion, fill-in-the-blank, and open-ended questions. Besides, CS-Bench assesses both knowledge-type and higher-order reasoning-type questions, with each reasoning question accompanied by an explanation. To validate the effectiveness of models, we randomly sample 10% of the data for validation, using the remaining 90% for testing. The statistics of CS-Bench are shown in Figure <ref>, with detailed exposition provided in Appendix <ref>. § EXPERIMENT §.§ Experimental Setup Evaluation Protocols. Due to the diverse task formats in CS-Bench, we first design question templates for each task type. For comprehension tasks (MC and Assertion), we use regex to match LLM's predictions and then calculate their accuracy against the ground-truth answers. For generation tasks (FITB and Open-ended), due to the diversity of ground-truth answers, we score LLM's predictions by GPT-4 using standard answers in CS-Bench as references. In detail, FITB questions are scored as either 0 or 1, while the score range for Open-ended questions is 1-10, which is then linearly mapped to a range of 0.1 to 1. Finally, scores are weighted based on the quantity of each type to derive the ultimate overall score. It is worth emphasizing that while employing GPT-4 for scoring generation tasks may introduce a certain threshold for evaluation, its primary purpose is to simulate diverse task formats in real-world scenarios. Therefore, we encourage isolating comprehension tasks from CS-Bench to facilitate automatic evaluation with no need for GPT-4. We provide the details of the evaluation setup in Appendix <ref>, where we also verify the validity of GPT-4 scoring through its consistency with manually scored results. Models. We evaluate nearly 30 models in different sizes from 12 model families. For open-source models, we selected Gemma-2B/7B <cit.>, Llama2-7B/13B/70B <cit.>, Llama3-8B/70B <cit.>, ChatGLM3-6B <cit.>, Baichuan2 (v2.0)-7B/13B <cit.>, InternLM2-7B/20B <cit.> , Qwen1.5-4B/7B/14B/72B/110B <cit.>, Mistral-7B (v0.2) <cit.>, Mixtral-8×7B (v0.1) <cit.>, and DeepSeekLLM-7B/67B <cit.>. For closed-source commercial models, we utilized PaLM-2 (palm-2-chat-bison) <cit.>, Claude-2.1 <cit.>, Claude-3 (opus) <cit.>, as well as GPT-3.5, GPT-4 (0125 version) <cit.> and GPT-4o <cit.>. To ensure the instruction-following abilities, we employ the official chat or instruction-tuned versions for all models. 
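Before turning to the results, the score mapping and weighting described in the evaluation protocol above can be made concrete with a short sketch. This is only an illustration of the aggregation logic, not the official evaluation code: the record layout and helper names are assumptions, while the 0/1 scoring of FITB, the linear mapping of Open-ended marks from 1-10 to 0.1-1, and the count-based weighting follow the description above.

```python
from collections import defaultdict

def aggregate_scores(records):
    """Aggregate per-question scores into per-task and overall scores.

    Each record is assumed to look like
      {"task": "MC" | "Assertion" | "FITB" | "Open-ended", "score": float},
    where MC/Assertion/FITB scores are 0 or 1 and Open-ended scores are
    GPT-4 marks on a 1-10 scale, mapped linearly to 0.1-1.0 below.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for r in records:
        s = r["score"] / 10.0 if r["task"] == "Open-ended" else float(r["score"])
        totals[r["task"]] += s
        counts[r["task"]] += 1

    per_task = {t: totals[t] / counts[t] for t in counts}
    # Overall score = per-task scores weighted by the number of questions of each type.
    overall = sum(totals.values()) / sum(counts.values())
    return per_task, overall

# Tiny made-up example:
demo = [{"task": "MC", "score": 1}, {"task": "Assertion", "score": 0},
        {"task": "FITB", "score": 1}, {"task": "Open-ended", "score": 7}]
print(aggregate_scores(demo))
```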
Details on these models are provided in Appendix <ref>. §.§ Main Results Table <ref> presents the overall results of all foundation models directly answering questions under the zero-shot setting [Due to space constraints, the results and analysis on CS-Bench (CN) are provided in Appendix <ref>.]. In summary, the overall scores of models range from 39.86% to 72.29%, demonstrating CS-Bench's effectiveness in distinguishing between the abilities of various models in the field of CS while also posing significant challenges to the best-performing existing models. Subsequently, we conduct a comprehensive analysis of the experimental results from various aspects including Foundation Models, Knowledge & Reasoning, Domains, and Task Formats. Comparison between Foundation Models. Firstly, the closed-source models GPT-4/GPT-4o represent the highest standard on CS-Bench, being the only two models exceeding 70% proficiency. Secondly, the disparity between the leading open-source and closed-source models is not significant. Notably, premier open-source models such as Qwen1.5-110B and Llama3-70B have surpassed previously strong closed-source models like GPT-3.5 and Claude-2.1, drawing close to Claude-3 in performance. Thirdly, newer models demonstrate significant improvements compared to earlier versions. For example, among models with scales below 10B, Llama3-8B performs the best, rivaling previous much larger-scale models and even surpassing Llama2-70B, indicating significant potential for compression in model parameters <cit.>. Lastly, while performance variations exist among models of different families at the same scale, models within the same family continue to improve with increasing scale on CS-Bench (see detailed scale analysis in Section <ref>). Comparison of Knowledge and Reasoning. Overall, all models perform worse on reasoning (average 44.63%) compared to knowledge scores (average 60.52%), indicating that reasoning poses a greater challenge to LLMs compared to knowledge. As shown in Figure <ref>, there is a strong positive correlation between reasoning scores and knowledge scores. However, this correlation is not absolute. For instance, PaLM-2 has a higher knowledge score but a lower reasoning score compared to Claude-2.1, showing PaLM-2's weakness in applying knowledge. Furthermore, more powerful LLMs demonstrate a stronger ability to use knowledge for reasoning compared to weaker LLMs. This is reflected in the much lower reasoning scores of weaker models relative to their knowledge scores. However, as the model's capability increases, the growth in reasoning scores is more pronounced than that of knowledge scores, gradually bridging the gap between knowledge and reasoning abilities. Comparison between Domains. First, regarding knowledge scores in Table <ref> and Figure <ref> (a), models generally perform best in DSA and worst in OS, which we attribute mainly to differences in the scale of pretraining data and the varying learning capabilities induced by model size. Second, the demand for reasoning ability varies across different domains, as evidenced by the gap between knowledge and reasoning scores. A notable example is GPT-4o, which shows close knowledge and reasoning scores in OS, while exhibiting extreme differences in DSA, with the highest and lowest scores, respectively. We further explore LLMs' performance in fine-coursed subfields in Appendix <ref> and explore the impact of Code and Math abilities on different CS domains in Section <ref>. Comparison between Tasks. 
As shown in Figure <ref> (b) and Table <ref>, given the varying initial random scores, LLMs generally performs best on Assertion questions (average 63.11% across all models), followed by MC questions (average 54.92%), Open-ended questions (average 49.1%), and performs worst on FITB questions (average 41%). However, the variation in task format sensitivity is highly pronounced in weaker models, while stronger models can mitigate the disparities caused by different task formats, exhibiting robustness. For instance, Llama2-7B scores only 26.19% on Open-ended reasoning but 60.61% on Assertion reasoning, whereas GPT-4 scores comparably on both Open-ended reasoning (68.94%) and Assertion reasoning (67.68%). §.§ Qualitative Analysis Relationship between Scores and Model Scales. To investigate how the performance of models varies with the increase in parameter size, we examine several model families and plot the results in Figure <ref> (a). It can be observed that although different families exhibit distinct performances, models within the same family consistently show improvement as the parameter size increases. However, as the model parameter size continues to increase, the performance gains from scaling diminish, resulting in diminishing returns in efficiency. For instance, the score in Qwen1.5 improves by 16.19% from 0.5B to 7B, by 7.11% from 14B to 72B, and by only 2.66% from 72B to 110B. Additionally, as shown in Figure <ref> (b), when the parameter scale grows exponentially, the score approximately increases linearly. This indicates that in the CS field, the model's performance also follows a logarithmic scale pattern. Given the substantial computational resources required for large-scale models, we aim to establish the relationship between model scales and scores to predict the performance of larger-scale models in the CS field by fitting smaller-scale model scores. Due to space limitations, the specific design and implementation of the fitting function are provided in Appendix <ref>. Overall, we fit the functions of Llama2 and Qwen1.5 series based on models ranging from 7B to 70/72B. We validate the fitting function on Qwen-1.5 110B, where the predicted value (67.83%) closely matches the actual value (67.95%), enabling further predictions for theoretical models, even up to 1000B. Comparison between Zero-shot, Few-shot and COT Prompting. To investigate the impact of few-shot prompts and chain of thought (COT <cit.>) on model performance, we evaluate model's performance under 5-shot answer-only (AO) and 5-shot COT prompts in Figure <ref> (c), where the prompt samples are sampled from the validation set and match the domain of the test questions. Given that model-generated results under 0-shot COT often don't adhere to specific formats, making regular matching difficult, we omit 0-shot COT experiments, similar to C-Eval. Additionally, for Open-ended questions, since the answers include detailed explanations, 5-shot COT is the same as 5-shot AO. For all tested models, the 5-shot prompts show improvement compared to 0-shot, with average increases of 1.47% for 5-shot AO and 2.00% for 5-shot COT, respectively. Moreover, the efficacy of few-shot prompts in bringing improvements appears more pronounced in some robust models such as GPT-3.5 and GPT-4, owing to their superior in-context learning capabilities. Analysis of Error Types. 
To delve into the roots of LLMs' failures on CS-Bench and offer pathways toward improvement, we collect the solution processes of model errors under 5-shot COT and use GPT-4 to categorize each error type in MC questions in Figure <ref>. It should be emphasized that models may make joint errors, so more than one error type can be assigned to a single answer. In general, from Llama2-7B all the way to GPT-4, the total number of errors continues to decrease for both knowledge-type and reasoning-type questions. For knowledge-type questions, both single concept errors and concept confusion show a decreasing trend. Partial concept errors, in contrast, first rise and then decline: some completely wrong concepts initially transition into partially erroneous ones before being eliminated. For reasoning-type questions, we observe that a significant portion of errors still falls under the category of knowledge-based mistakes. Among reasoning inaccuracies, stronger models have evidently reduced arithmetic reasoning errors, but there has not been much change in logic reasoning errors specific to the CS field. Our analysis highlights that reinforcing CS knowledge concepts is the most direct and effective approach to improving LLMs' performance in the field of CS. Furthermore, significant improvements in CS reasoning performance are challenging to achieve solely by enhancing general reasoning abilities and mathematical reasoning, necessitating CS-specific reinforcement. More details can be found in <ref>.

§.§ What's the Relationship between CS, Math, and Code abilities of LLMs?

To explore the relationship between CS proficiency and the mathematical and coding capabilities of models, we investigate (1) the performance of general LLMs across the fields of Math, Code, and CS, and (2) the performance of LLMs specialized in Code and Math within the field of CS.

Exploration on General Models. In Figure <ref>, we illustrate how the models' performance on CS-Bench varies with increasing scores on the Math datasets (GSM8K <cit.>, MATH <cit.>) and Code datasets (HumanEval <cit.>, MBPP <cit.>). We observe that the overall trend in CS-Bench performance closely aligns with changes in Math and Code scores, as indicated by a Pearson correlation coefficient <cit.> exceeding 0.9. Besides the general enhancement of diverse competencies that superior models typically bring, we consider this evidence to suggest a close correlation between CS proficiency and abilities in Math as well as Code.

[Figure: Score changes on CS-Bench as the LLM's Math/Code score increases. p denotes the Pearson correlation coefficient. Scores on the Math/Code datasets are obtained from <cit.>.]

Next, we examine models with inconsistent patterns between CS and Math/Code. In the Math domain, Qwen1.5-7B outperforms Llama2-70B in both GSM8K and MATH, yet on CS-Bench, Llama2-70B surpasses Qwen1.5-7B. In the Code domain, Mixtral-8×7B performs better than Qwen1.5-32B on HumanEval and MBPP, whereas the opposite is observed on CS-Bench. Given the NLP community's sustained focus on the Code and Math domains, some recently released models have been trained on a large amount of data in these domains, leading to smaller-scale models outperforming much larger-scale ones (e.g., Qwen1.5-7B surpassing Llama2-70B). However, in the CS domain, due to insufficient attention and training data, even excellent small-scale models struggle to surpass much larger-scale models.
This also indicates that CS-Bench has not been overfitted during LLM pretraining, making it a fairer benchmark for measuring model performance differences. Exploration on Expert Models. We present the results of the Math and Code expert LLMs in Tables <ref> and <ref>. Compared to general Chat LLMs, expert LLMs usually sacrifice other abilities to boost proficiency in Math or Code, which is reflected in the lower overall performance of most expert LLMs. Therefore, we are more concerned with identifying the specific aspects of CS where Math and Code models show improvement. Regarding mathematics, InternLm-Math-7B improves InternLm2-7B's performance in CO, CN, and OS reasoning tasks, while DeepseekMath exhibits significant improvements across all domains. According to <cit.>, DeepseekMath effectively maintains general knowledge and reasoning ability during specialization. Conversely, MAammoTH and WizardMath perform poorly due to just fine-tuning on limited mathematical datasets, resulting in a significant decline in general knowledge and reasoning. The score changes in LLMs suggest that OS is most closely linked to mathematics, followed by CO, and lastly DSA and CN. In terms of Code, many Code models show significant improvements in DSA (especially knowledge) and OS (especially reasoning), such as CodeLlama and Dolphcoder. This indicates that the disciplines of DSA and OS are more closely related to code, thus enhancing knowledge and reasoning abilities in these directions, while CO and CN have lower relevance, leading to a decrease in scores. Finally, we observe that the enhancement brought about by small-scale expert LLMs compared to larger-scale LLMs is more pronounced (see CodeLlama-7B/13B, WizardCoder-7B/13B). We attribute this to the supplementary need for specific knowledge and reasoning capabilities in small-scale LLMs, whereas large-scale LLMs already encompass a greater breadth of knowledge and stronger reasoning abilities, resulting in diminishing gains from further training in specific domains. § RELATED WORK Exploration of LLMs in Computer Science. Given the powerful capabilities of LLMs, recent research has explored their potential applications across various industries and scientific fields, including finance <cit.>, autonomous driving <cit.>, robotics <cit.>, medicine <cit.>, and chemistry <cit.>. Currently, studies exploring LLMs in the field of computer science fall into two main categories. The first category includes broad evaluation benchmarks covering various fields, such as MMLU <cit.>, CMMLU <cit.>, C-Eval <cit.>, Xiezhi <cit.>, and M3KE <cit.>. However, computer science constitutes only a small fraction of these benchmarks, accounting for less than 5% and lacking detailed CS-specific analysis. The second category focuses solely on exploring specific applications of LLMs within computer science, such as network topology <cit.>, cybersecurity <cit.>, and software engineering <cit.>. Nonetheless, there has been a persistent lack of comprehensive evaluation of LLMs' foundational knowledge and reasoning abilities in computer science. To address this gap, we propose CS-Bench and conduct a thorough evaluation of LLMs, providing guidance for understanding and improving their performance in the CS field. Evaluation of LLMs' Capabilities. Evaluating and understanding the capabilities of LLMs is a major focus within the NLP community. 
Researchers have extensively explored the capabilities of LLMs including planning <cit.>, multilingual processing <cit.>, instruction following <cit.>, and cross-domain generalization <cit.>. Recently, there has been growing interest in LLMs' abilities in mathematics <cit.>, code programming <cit.>, and logical reasoning <cit.>. While individual capabilities have been well-studied, research on their integrated application and interrelationships remains sparse. Different from <cit.>, which investigates interactions between abilities during the supervised fine-tuning phase, we choose computer science as our research context. Given that computer science inherently integrates coding, mathematics, and reasoning, we utilize CS-Bench in this paper to deeply explore the relationship between LLMs‘ performance in computer science and their mathematical and coding abilities, aiming to advance cross-capability research and integrated analysis of LLM abilities. § CONCLUSION In this work, we introduce CS-Bench, the first benchmark specifically designed to systematically analyze the knowledge and reasoning capabilities of mainstream LLMs in the field of computer science. Our evaluation of over 30 models highlights that even the top-performing GPT-4o has significant room for improvement in computer science. Further score-scale experiments and error type analyses provide directions for enhancing LLMs in the field. Moreover, our investigation into the relationship between computer science, mathematics, and coding demonstrates their close interconnections and provides valuable insights into LLMs’ cross-abilities and applications. unsrt Appendix § LIMITATIONS In this paper, we introduce CS-Bench, providing a comprehensive evaluation of LLMs and exploring the relationships between model capabilities. However, there are still some limitations to this paper. (1) Coverage Limitations: Although CS-Bench has made significant strides in comprehensiveness of CS evaluations compared to existing work, given the breadth of computer science, our evaluations cannot cover the entire scope of computer science knowledge. Furthermore, our assessment content focuses on university-level content, examining LLM's mastery of basic subjects in computer science, rather than specific computer science-related research scenarios. (2) Evaluation Limitations: In the CS-Bench evaluation experiments, we employ GPT-4 scoring to assess generative tasks such as fill-in-the-blank and open-ended tasks. This might lead to certain evaluation thresholds and costs. However, such issues only constitute about 20% of CS-Bench. Additionally, we provide an evaluation scheme that separates comprehension tasks from CS-Bench, allowing for automatic evaluations without the need for GPT-4. (3) Language Limitations: CS-Bench are primarily focused on Chinese and English-dominated language environments, ensuring comprehensive and in-depth evaluations in these two language environments. However, for other non-Chinese and English language environments, its support and coverage are relatively weak, and further optimization and improvement are needed. § BROADEN IMPACT Societal Impact. CS-Bench is anticipated to play a significant role in the field of computer science. LLMs, trained and evaluated with the aid of CS-Bench, can enhance the work efficiency of relevant professionals, enabling them to complete computer-related tasks, such as code review, error detection, and algorithm optimization, more quickly and accurately. 
Although this might result in the disappearance of some repetitive jobs, it could also create new career opportunities. In the realm of education, the CS-Bench dataset can serve as an effective teaching tool, assisting teachers in better explaining complex computer science concepts and techniques, and also enabling students to better understand and master this knowledge through practice. Ethics Statement. We ensure adherence to applicable laws and ethical guidelines during the process of data collection, annotation, and usage, providing adequate compensation to all our crowd workers. As this benchmark pertains to objective knowledge and reasoning in the field of computer science, the annotation content is not influenced by regional or cultural differences among annotators. Moreover, our dataset does not contain any personally identifiable information or offensive content. The authenticity and accuracy of CS-Bench have been thoroughly verified, providing a reliable basis for evaluating LLMs. CS-Bench is intended solely for academic and research purposes. Any commercial use or other misuse deviating from this purpose is strictly prohibited. We urge all users to respect this provision to maintain the integrity and ethical use of this valuable resource. § MORE DETAILS ON CS-BENCH In <ref>, we provide a detailed explanation of the design motivation and statistics for CS-Bench. In <ref>, we present the distribution of question and answer lengths for each task in CS-Bench. In <ref>, we provide a case example for each type under each dimension of CS-Bench. §.§ Detailed Design Motivation and Statistics of CS-Bench We elaborate on the design motivation of CS-Bench and statistics under each dimension as follows. Evaluation Content. To ensure comprehensive coverage of fundamental and critical areas in computer science, we select the four most foundational and prevalent domains within the field of computer science as the core content of the CS-Bench dataset. These four domains are as follows: Data Structure and Algorithm, investigating data organization and algorithmic efficiency; Computer Organization, focusing on hardware composition and foundational system operation; Computer Network, involving the analysis of network communication and data transmission; Operating System, delving into system resource management and process control. As depicted in Figure <ref> (a), these four disciplines exhibit a roughly uniform distribution. Furthermore, we subdivide the disciplines into 26 granular chapters, allowing CS-Bench to furnish more nuanced evaluation outcomes for models and provide comprehensive guidance for model refinement. We summarize these chapters in Table <ref>. Task Format. To better simulate the diverse forms of problems encountered in the real world, we introduce assertion, fill-in-the-blank, and open-ended questions in addition to multiple-choice questions. Specifically, multiple-choice and assertion questions correspond to understanding tasks in CS, while fill-in-the-blank and open-ended questions correspond to generation tasks in CS. Although assessing generation tasks using GPT-4 incurs certain costs, it is important to emphasize that this component represents only a minority (fill-in-the-blank: 10.67%, open-ended: 7.81%), whereas comprehension tasks relying on rule-based scoring constitute the majority (multiple-choice: 61.22%, assertion: 20.3%). Therefore, if resources are limited, we recommend considering the independent use of understanding tasks from CS-Bench for evaluation purposes. 
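As a concrete illustration of the rule-based scoring that makes the comprehension tasks usable without GPT-4, the sketch below extracts a multiple-choice or assertion answer from a model's raw completion and compares it with the gold label. The regular expressions are illustrative assumptions rather than the exact patterns used for CS-Bench.

```python
import re

def extract_choice(completion):
    """Pull an A/B/C/D option out of a free-form completion (illustrative patterns)."""
    patterns = [
        r"answer\s*(?:is|:)?\s*\(?([ABCD])\)?",   # "The answer is (B)."
        r"^\s*\(?([ABCD])\)?[.\s]",               # completion that starts with "B."
    ]
    for p in patterns:
        m = re.search(p, completion, flags=re.IGNORECASE | re.MULTILINE)
        if m:
            return m.group(1).upper()
    return None

def extract_assertion(completion):
    """Map a completion onto True/False for assertion questions."""
    m = re.search(r"\b(true|false|correct|incorrect)\b", completion, re.IGNORECASE)
    if m is None:
        return None
    return "True" if m.group(1).lower() in ("true", "correct") else "False"

def score_comprehension(prediction, gold, task):
    extracted = extract_choice(prediction) if task == "MC" else extract_assertion(prediction)
    return int(extracted == gold)

print(score_comprehension("The answer is (C), because a stack is LIFO.", "C", "MC"))            # -> 1
print(score_comprehension("This statement is false: TCP is connectionless.", "False", "Assertion"))  # -> 1
```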
Knowledge/Reasoning. The design goal of CS-Bench is not only to assess the mastery of knowledge in the field of CS but also to evaluate the model's ability to reason using CS knowledge. Therefore, each dataset is labeled with “knowledge” or “reasoning”, corresponding to simple questions requiring knowledge recall and challenging questions necessitating knowledge inference, respectively. As shown in Figure <ref> (c), knowledge-based questions account for 63.58%, while reasoning-based questions account for 36.42%. Language. To assess the ability of LLMs in addressing CS problems in various linguistic environments, and to adapt CS-Bench for the evaluation of a wider range of LLMs, CS-Bench comprises bilingual Chinese-English data, with each language accounting for 50%. The English data is obtained through translation by GPT-4, followed by manual verification of processed Chinese data. §.§ Distribution of Word Lengths Due to CS-Bench containing both English and Chinese languages, we separately compute the distributions of word lengths for questions and answers in CS-Bench (English) and CS-Bench (Chinese) across various task formats, as illustrated in Figure <ref> and Figure <ref>. For Multiple-Choice questions, the question length includes both the question itself and the four options. Since Multiple-Choice and Assertion questions are comprehension tasks, the answers consist of only one character (A/B/C/D or True/False). For generation tasks, Fill-in-the-blank answers are relatively short, with an average word length of approximately 2, whereas Open-ended questions typically yield longer answers as they entail detailed explanatory processes. §.§ CS-Bench Examples We present samples from various domains in Table <ref>, samples of different task formats in Table <ref>, samples of knowledge and reasoning types in Table <ref>, and samples from different languages in Table <ref>. § MORE DETAILS ON EXPERIMENT SETUP In <ref>, we present the question templates used to prompt models for each type of task. In <ref>, we show the prompts used for GPT-4 to score models' answers to fill-in-the-blank and open-ended questions, and validate the effectiveness of GPT-4's automatic scoring through consistency experiments with human scoring. In <ref>, we detail the experimental environment used to implement model inference. In <ref>, we introduce all the evaluated model families. §.§ Details of Template for Each Task Format We present the templates for querying LLMs with various question formats in Table <ref>. §.§ Details of GPT-4 Scoring GPT-4 Scoring Prompt. In Table <ref>, we present the prompts utilized to instruct GPT-4 in scoring the outputs of LLMs in CS generation tasks, encompassing both Fill-in-the-blank and Open-ended questions. Consistency between GPT-4 Scoring and Manual Scoring. To assess the effectiveness of GPT-4 scoring in evaluating LLM responses, we conduct a consistency experiment between GPT-4 prediction scores and manual scores. For Fill-in-the-blank and Open-ended types, we randomly sample 100 instances from the GPT-4 scoring samples and employ three human annotators to score these predicted results. In Table <ref>, we report the consistency scores among human annotators (measured by Cronbach's alpha), as well as the consistency scores between the average human annotation scores and GPT-4 scoring (measured by Pearson correlation coefficient). The excellent consistency between human and GPT-4 scores validates the effectiveness of GPT-4 scoring. 
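For readers who want to run the same consistency check on their own annotations, the sketch below computes Cronbach's alpha across human raters and the Pearson correlation between the averaged human scores and the GPT-4 scores. The score arrays are made-up placeholders; only the two statistics themselves correspond to the measures reported above.

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_items, n_raters) score matrix."""
    n_raters = ratings.shape[1]
    per_rater_var = ratings.var(axis=0, ddof=1).sum()   # sum of per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)          # variance of summed scores
    return n_raters / (n_raters - 1) * (1 - per_rater_var / total_var)

# Hypothetical scores for 6 sampled answers, rated by 3 humans and by GPT-4.
human = np.array([[0.8, 0.7, 0.9],
                  [0.2, 0.3, 0.2],
                  [1.0, 0.9, 1.0],
                  [0.5, 0.6, 0.5],
                  [0.0, 0.1, 0.0],
                  [0.7, 0.7, 0.8]])
gpt4 = np.array([0.8, 0.3, 1.0, 0.6, 0.1, 0.7])

r, _ = pearsonr(human.mean(axis=1), gpt4)
print("Cronbach's alpha (humans):", round(cronbach_alpha(human), 3))
print("Pearson r (human mean vs GPT-4):", round(r, 3))
```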
§.§ Details of Inference Implementation For all open-source models, we utilize a cluster with 8 NVIDIA A100-80GB GPUs to run the inference, and we use vLLM <cit.> for inference acceleration, applying the corresponding chat templates and the same hyper-parameters: batch size=1, temperature=0, top-p=1.0, and max_tokens=2048. For all closed-source models with API access, we also adopt the generation scheme with temperature=0, and simply run the inference with CPUs, which typically completes within a day. During the evaluation of GPT-4, we also applied the setting of temperature=0. To avoid error bias, we conducted the experiments 3 times and took the average of the scores. For models supporting web search or tool calls, we disable these features to ensure a fair comparison. §.§ Details of the Models being Evaluated Gemma <cit.> is a family of lightweight, open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. The Gemma model excels on academic benchmarks in language understanding, reasoning, and security. Gemma publishes models in two sizes (2 billion and 7 billion parameters) . Llama2 <cit.> is an upgraded version of Llama developed by MetaAI. It utilizes more robust data cleaning and mixing techniques, and up-samples sources closest to factual information, which can enhance knowledge and reduce hallucinations. Additionally, it employs Grouped-Query Attention technology to lessen reliance on memory. Llama3 <cit.> is the latest generation of large language models developed by MetaAI. The training dataset for Llama 3 is seven times larger than that used for Llama 2, with the amount of code included being four times that of Llama 2. Compared to previous versions of the model, it has seen a tremendous enhancement in reasoning, code generation, and instruction following capabilities. Llama3-Chinese <cit.> is an instruction-tuned language model for Chinese and English users with various abilities such as roleplaying and tool-using built upon the Meta-Llama-3-8B-Instruct model. ChatGLM3 <cit.> is a next-generation conversational pre-trained model jointly released by Zhipu AI and KEG Lab of Tsinghua University. ChatGLM3-6B adopts a newly designed Prompt format, in addition to regular multi-turn dialogue. It also natively supports complex scenarios such as function call, code interpretation. Baichuan2 <cit.> is a large-scale multilingual model developed by Baichuan Company. It adopts several advanced techniques in its design and training process, including Rotary Position Embedding, a novel position encoding technique, SwiGLU activation function, and memory efficient attention mechanism. Compared with Baichuan1, its performance has been greatly improved. InternLM2 <cit.> is an open-source large-scale language model developed by Shanghai AI Laboratory. This model has good processing ability for ultra long texts and adopts COOL RLHF technology. It solves human preference conflicts through a conditional reward model and performs multiple rounds of online RLHF to improve the model's alignment ability. Qwen1.5 <cit.> is a family of language models developed by Alibaba. It has features such as SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Qwen 1.5 series models have strong basic capabilities including language understanding. 
Mistral-7B <cit.>, a 7-billion-parameter language model designed for superior performance and efficiency, which is developed by Mistral AI. Mistral 7B leverages Packet Query Attention (GQA) for faster inference, combined with Sliding Window Attention (SWA) to efficiently process sequences of arbitrary length while reducing inference costs. Mixtral-8×7B <cit.> is a Sparse Mixture of Experts (SMoE) language model developed by Mistral AI. Its architecture is the same as that of the Mistral 7B, except that each layer consists of 8 feedforward blocks (i.e., experts). Mixtral has demonstrated exceptional abilities in math, code generation, and tasks that require multilingual understanding. DeepSeekLLM <cit.> is a family of models released by DeepSeek-AI, and its core architecture borrows from the Llama model. This family of models employs Multi-Head Attention (MHA) and Group Query Attention (GQA) techniques, which significantly enhance their performance and efficiency. Furthermore, DeepSeekLLM demonstrates strong bilingual capabilities in both Chinese and English. PaLM-2 <cit.> is the higher-performance successor to PaLM released by Google, which differs in terms of dataset mixing. Compared to the first-generation PaLM version, it uses a smaller model but performs more training calculations. It also relies on more diverse pre-training targets. Claude Claude2.1<cit.> and Claude3 <cit.> are AI models developed by Anthropic, showcasing advanced language understanding and generation capabilities. Utilizing the constitutional AI framework, Claude models are designed to ensure helpfulness and trustworthiness. GPT GPT-3.5 <cit.>, GPT-4 <cit.> and GPT-4o <cit.>, released by OpenAI, are part of the GPT-series models enhanced by a three-stage reinforcement learning with human feedback (RLHF) algorithm. This algorithm not only improves the models' ability to follow instructions but also significantly reduces the generation of harmful or toxic content. Additionally, GPT-4 supports image inputs and achieves human-level performance on various benchmarks. GPT-4o, the latest model developed by OpenAI, boasts powerful real-time reasoning, language interaction, and multimodal capabilities. GLM-4 <cit.> is a new generation base large model developed by Zhipu AI. It has strong tool calling and multi-modal capabilities, as well as strong mathematical reasoning ability and code generation ability. ERNIE <cit.> ERNIE3.5 and ERNIE4 are large language models developed by Baidu. ERNIE3.5 is capable of processing text data in multiple languages and has a good understanding and representation ability for entities and relationships in text. Ernie 4 has adopted more advanced knowledge graph information and more advanced knowledge integration technology, further improving the performance of the model. § MORE DETAILS ON EXPERIMENT In <ref>, we present detailed performance of the models on CS-Bench (EN), including the leaderboard, task formats, and domains. In <ref>, we describe and validate the design of the scale-score fitting function. In <ref>, we evaluate models' performance on CS-Bench (CN) and compare the differences in performance between the English and Chinese contexts. In <ref>, we conduct case studies to better understand the specific details of the models' failures on CS-Bench. §.§ Details of Model Performance The Leaderboard on CS-Bench (EN). We visualize the results of LLMs on CS-Bench (EN) in Figure <ref>. Detailed Performance on Each Task Format. 
We present models' performance on the four types of tasks in Table <ref> and visualize the results in Figure <ref>.

Detailed Performance on Each Subfield. In Figure <ref>, we visualize the models' knowledge and reasoning performance across the four domains of CS-Bench. Subsequently, we focus on the models' performance in the 26 fine-grained subfields. Table <ref> presents the results of eight representative models. Firstly, we observe significant variations in scores across different subfields within the same domain. Taking the DSA domain as an example, Llama2-70B scores range from 45.44% to 76.67% across different chapters (average 56.93%), while GPT-3.5 scores range from 55.17% to 80.00% (average 60.67%). Secondly, the performance of different models in the same subfield, relative to their average scores, is generally consistent. For instance, all models perform above their average scores in the "Overview" and "Stack, Queue, and Array" subfields of DSA but below average in the "Tree" and "Graph" subfields. These detailed scores allow us to understand which content poses greater challenges for the models and provide guidance for improving the models' performance in computer science by strengthening these weaker subfields. We further observe that although the overall scores of models from the same family increase with scale, not all chapters follow this pattern. As shown in Figure <ref>, the Llama2 series exhibits a trend of scores increasing with scale in most subfields (17 out of 26 subfields); however, there are some exceptions. For instance, Llama2-7B performs exceptionally well in the "string" chapter of DSA, while Llama2-13B excels in the "Data Representation and Operation" chapter of CO, surpassing the performance of Llama2-70B.

§.§ Scale-Score Fitting Function for CS-Bench

To enhance CS performance, large-scale models are often utilized; however, these models demand more computational resources for both training and deployment inference. Therefore, it is desirable to establish a relationship between model scale and CS performance, enabling the prediction of theoretically larger models' scores on CS-Bench based on the performance of smaller-scale models. The fitting function should adhere to the following criteria: 1. The score should monotonically increase with model scale, approaching 0 as the scale approaches 0 and approaching 1 (100%) as the scale approaches infinity. 2. As illustrated in Figure <ref> (a), when the model scale varies exponentially, the score should exhibit an approximately linear trend. 3. Due to variations in performance and change slopes among different model families at the same scale, the fitting function needs to incorporate model-family-specific hyperparameters. Guided by these criteria, we experiment with various functions and find that the following function satisfies the conditions and works best: Score = 1 - 1/[θ_1 log_10(θ_2 · Scale + 1) + 1], where θ_1 and θ_2 are hyperparameters specific to the model family. To validate the effectiveness of the function, we estimate the hyperparameters by minimizing the mean square error on small-scale models and predict the performance scores of larger-scale models. For the Qwen1.5 family, we use the 7, 14, 32, and 72B models to predict the 110B model's performance. For the Llama2 series, we predict the 70B model's performance based on 7B and 13B. As depicted in Figure <ref> (b), for Qwen1.5 110B, the predicted score (67.83%) closely matches the true value (67.95%).
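For concreteness, the fit above can be reproduced with a standard least-squares routine. The sketch below uses SciPy's curve_fit; the scale-score pairs are placeholders rather than the actual CS-Bench numbers, so the fitted θ_1, θ_2 are only indicative of the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def score_model(scale, theta1, theta2):
    """Score = 1 - 1 / (theta1 * log10(theta2 * Scale + 1) + 1), cf. the equation above."""
    return 1.0 - 1.0 / (theta1 * np.log10(theta2 * scale + 1.0) + 1.0)

# Placeholder data (scale in billions of parameters, score in [0, 1]);
# substitute the measured CS-Bench scores of a model family here.
scales = np.array([7.0, 14.0, 32.0, 72.0])
scores = np.array([0.55, 0.60, 0.64, 0.67])

(theta1, theta2), _ = curve_fit(score_model, scales, scores,
                                p0=(1.0, 1.0), bounds=(0.0, np.inf))
print("fitted theta1, theta2:", theta1, theta2)
print("predicted score at 110B:", score_model(110.0, theta1, theta2))
print("predicted score at a hypothetical 1000B:", score_model(1000.0, theta1, theta2))
```

With only two reference points, as in the Llama2 case discussed next, the same routine still runs, but the fit is naturally much less constrained.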
For Llama2-70B, with only two reference data points, the predicted score (55.08%) deviates from the true value (52.52%) by only 2.56%. §.§ Performance of Models on CS-Bench (Chinese) We assess models that support Chinese on CS-Bench (CN). The foundation models include the LLama3 and GPT-4 series, which are not specifically optimized for Chinese, as well as Chinese-oriented open-source models, including ChatGLM, Baichuan2, InternLm2, Qwen1.5 and llama3-chinese series. We also evaluate Chinese-oriented closed-source models, including GLM-4 and ERNIE-3.5/4. Details of these models are provided in Appendix <ref>. As shown in Table <ref> and Table <ref>, the scores of these models on CS-Bench(CN) range from 40.45% to 70.26%. Despite not being specifically optimized for Chinese, GPT-4o still achieves the best performance. Among the Chinese-oriented models, ERNIE-4 outperforms GPT-4, achieving performance close to GPT-4o. Additionally, ERNIE-3.5 and GLM-4 score similarly, slightly lower than GPT-4's performance in Chinese. Notably, Llama3-8B-chinese surpasses Llama3-8B by 2.44%, highlighting the importance of adapting models to specific languages. We further compare the performance of the models on CS-Bench(EN) and CS-Bench(CN) in Figure <ref>. Compared to English, the GPT and Llama3 series, which are not optimized for Chinese, perform worse on Chinese context. For instance, Llama3-8B experiences a decrease of 7.68% on Chinese, and Llama3-70B drops by 4.38%. Although some Chinese-oriented models also show slight decreases in performance in the Chinese context, such as InterLm2-20B, the decline is much less significant than that of the Llama3 series. Moreover, the Qwen1.5 series even demonstrates improved performance on Chinese tasks. Finally, we observe that larger models within the same family are less affected by different languages, as reflected in Baichuan2-7/13B, Internlm2-7/20B, and Llama3-8/70B. §.§ Case Study of Error Types We first introduce the error types of knowledge-type questions and reasoning-type questions in Table <ref> and Table <ref>. To facilitate a better understanding of each error type, we provide examples of each error type made by GPT-3.5 in knowledge-based and reasoning-based questions in Table<ref> and <ref>, respectively. Additionally, Table <ref> presents several examples that contain multiple error types.
Dissipation bounds precision of current response to kinetic perturbations
Krzysztof Ptaszynski, Timur Aslyamov, Massimiliano Esposito
June 2024 · cond-mat.stat-mech · http://arxiv.org/abs/2406.08361v2
==========================================================================
krzysztof.ptaszynski@ifmpan.poznan.pl Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg City, Luxembourg Institute of Molecular Physics, Polish Academy of Sciences, Mariana Smoluchowskiego 17, 60-179 Poznań, Poland timur.aslyamov@uni.lu Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg City, Luxembourg massimiliano.esposito@uni.lu Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg City, Luxembourg § ABSTRACT The precision of currents in Markov networks is bounded by dissipation via the so-called thermodynamic uncertainty relation (TUR). In our work, we demonstrate a similar inequality that bounds the precision of the static current response to perturbations of kinetic barriers. Perturbations of such type, which affect only the system kinetics but not the thermodynamic forces, are highly important in biochemistry and nanoelectronics. We prove that our inequality cannot be derived from the standard TUR. Instead, it implies the standard TUR and provides an even tighter bound for dissipation. We also provide a procedure for obtaining the optimal response precision for a given model. Dissipation bounds precision of current response to kinetic perturbations Massimiliano Esposito June 17, 2024 ========================================================================= Introduction.—Among the most fundamental results of statistical physics are the relations between the system response to external perturbations and stationary thermodynamic observables. Close to equilibrium, such a relation is given by the seminal fluctuation-dissipation theorem (FDT) linking the linear response to external forces and equilibrium fluctuations <cit.>. The link between dissipation and fluctuations has recently been generalized to a far-from-equilibrium regime <cit.>. In particular, for Markov jump processes, the theory of stochastic thermodynamics <cit.> gave rise to the thermodynamics uncertainty relation (TUR) linking the entropy production rate σ̇ which measures dissipation to the average of any current 𝒥 and its variance ⟨⟨𝒥⟩⟩ as <cit.> 𝒥^2/⟨⟨𝒥⟩⟩≤σ̇/2 . This relation shows that to reduce the fluctuations, one needs to pay by increased dissipation. In other words, it describes the thermodynamic cost of current precision 𝒥^2/⟨⟨𝒥⟩⟩. It can also be used to infer the value of entropy production, which is often directly inaccessible, using measurable currents and their fluctuations <cit.>. In our work, we get closer to the original formulation of FDT by providing a thermodynamic trade-off between the current fluctuations and the static linear current response, rather than the current itself. Recently, static responses for Markov jump processes and chemical reaction networks have attracted notable attention <cit.> due to their importance in biophysical applications (proofreading, sensing <cit.>). Specifically, we focus on the response to kinetic perturbations, i.e., perturbations affecting only the kinetics of transitions between the system states but not the thermodynamic forces. Perturbations of this type are crucial in chemistry (including biochemistry), as they correspond to the control of kinetic rates by changing the concentration of a catalyst (e.g., enzyme) <cit.>. Among others, kinetics may play a crucial role in determining the direction of motion of molecular motors <cit.> and providing the driving mechanism of Maxwell demons <cit.>. 
Chemical kinetics can also be controlled by magnetic fields, e.g., via the radical pair mechanism <cit.>, which is hypothesized to be the basis for magnetoreception and other magnetic field effects in biology <cit.>. Another important example of kinetic perturbations appears in the field of nanoelectronics, where it corresponds to the adjustment of tunnel barriers in quantum dots <cit.> or potential barriers in CMOS devices <cit.> by gate voltages. Crucially, the response of currents to kinetic perturbations vanishes at equilibrium, where the state of the system depends only on its thermodynamics, and not on its kinetics. This leads to the intuitive expectation that the response to kinetic perturbations requires some thermodynamic cost. Indeed, certain thermodynamics bounds on the response to kinetic perturbations have been obtained for Markov jump processes <cit.>, chemical reaction networks <cit.>, and nonequilibrium diffusion <cit.>. However, they apply to static system observables and their fluctuations rather than to currents. In this Letter, we provide numerical evidence to conjecture a new bound on the current response to kinetic perturbations. We call it the response thermodynamic uncertainty relation (R-TUR). We can prove it in special cases detailed below, e.g., for unicyclic networks, linear response regime close to equilibrium, and for single edge perturbations. To formulate that bound, we parameterize the transition rates of the opposite Markov jumps ± e as W_± e=exp[B_e(ε)± S_e/2] , where B_e and S_e parameterize the symmetric and asymmetric part of the transition rate, respectively. The former term characterizes the kinetic barriers, while the latter is the entropy change in the reservoir due to the jump e. Only the kinetic part B_e is assumed to depend on the control parameter ε (which can correspond, e.g., to the enzyme concentration <cit.> or the gate voltage <cit.>). The bound reads ( d_ε𝒥)^2/⟨⟨𝒥⟩⟩≤b_max^2σ̇/2 , where d_ε𝒥≡ d𝒥/d ε is the static response of any current 𝒥 to the parameter ε and b_max=max_e |∂_ε B_e| is the maximal rate of change of kinetic barriers in the system. If we perturb a single barrier corresponding to a given edge e, our bound (<ref>) simplifies to (d_B_e𝒥)^2/⟨⟨𝒥⟩⟩≤σ̇/2 . Although the above bounds look similar to TUR (<ref>), we prove that they cannot be derived from TUR. Instead, TUR is a consequence of our result. Moreover, we develop the optimization procedure that transforms R-TUR into a bound for entropy production that is tighter than the optimized TUR <cit.>. However, in some specific cases R-TUR can be derived from TUR. In particular, <ref> can be proven for unicyclic networks and the linear response regime close to equilibrium. Furthermore, <ref> can be derived when the current response at the perturbed edge is considered, i.e., when 𝒥=j_e. Framework.—We consider a continuous-time Markov process describing stochastic jumps between N discrete states of the system (corresponding, e.g., to chemical configurations, charge states of nanoelectronic devices, or molecule conformations). It is described by the directed graph 𝒢, where the nodes of 𝒢 correspond to the system states and the edges ± e∈ℰ to transitions between states. We focus on the steady state π given by the condition 𝕎·π = 0 , where π=(…,π_i,…)^⊺ is the vector of state probabilities π_i (with ∑_iπ_i=1). 
𝕎 is here the rate matrix with non-diagonal elements W_t(± e)s(± e)=W_± e(ε) describing the transition rate from the state s(± e) (the source of the edge ± e) to the state t(± e) (the tip of the edge ± e) and with the diagonal elements W_ii=-∑_j ≠ iW_ji. The bound (<ref>) can be applied to an arbitrary current observable, namely, 𝒥=∑_ex_e j_e, where j_e = W_+eπ_s(+e)-W_-eπ_s(-e) are the edge currents and x=(…, x_e, …)^⊺ is an arbitrary vector. Using the chain rule for the static response of 𝒥 we arrive at d_ε𝒥 = ∑_e x_e d_ε j_e = ∑_e x_e ∑_e' (d_B_e'j_e)∂_ε B_e' =∑_e'b_e' d_B_e'𝒥=b^⊺∇𝒥 , where we introduce b =(…, ∂_ε B_e,…)^⊺ known from the model parameterization and ∇𝒥=(…, d_B_e𝒥, …)^⊺ where d_B_e'𝒥 = ∑_e x_e d_B_e' j_e is the static response to the edge parameters B_e'. The entropy production rate can be calculated as σ̇=∑_e j_e S_e <cit.>. Numerical evidence.—We start by providing numerical evidence for our bound. Let us first rewrite <ref> by expressing d_ε𝒥 via <ref> and using the expression ⟨⟨𝒥⟩⟩=x^⊺ℂx <cit.>, with the covariance matrix ℂ of the edge currents defined in <ref>. We obtain ∀_β, x: m(β,x) ≡( d_ε𝒥)^2/⟨⟨𝒥⟩⟩ b_max^2 = (β∇𝒥)^2/x^⊺ℂx ≤σ̇/2 , where β=b/b_max with b_max=max_e |b_e| and |β_e|≤ 1. From <ref>a one can see that the bound (<ref>) with random vectors β and x indeed holds, but is very loose. Thus, we now look for β and x that maximize m(β,x) for a given model. Bound optimization.—We notice that <ref> holds for an arbitrary linear combination of j_e and an arbitrary dependence on ε, which implies that both x and b are arbitrary vectors. Therefore, for a given graph 𝒢 and rate matrix 𝕎, the optimized version of <ref> reads μ≡max_βmax_x m(β,x) ≤σ̇/2. The problem max_xm(β,x) is a standard convex optimization of a quadratic form, and has been previously applied to TUR (<ref>) in Ref. <cit.>. The optimal vector x^* reads x^*(β) = (d_εj^⊺ℂ^-1 d_εj)^-1ℂ^-1d_εj , which results in m(β,x^*) for the inner maximum in <ref>. To account for possible zero eigenvalues, the matrix ℂ^-1 is here the Drazin inverse <cit.>, which for real symmetric (and generally Hermitian) matrices is equivalent to Moore–Penrose pseudoinverse <cit.>. However, as shown in <ref>b, the optimization <ref> does not greatly improve the tightness of the bound <ref>. To do this, we also need to optimize β. The problem max_βm(β,x) for a fixed x is solved simply by β_e^*=sign (d_B_e𝒥) due to the constraint |β_e| ≤ 1. Therefore, the optimal vector β^* is a certain combination of ± 1 elements. Thus, joint optimization over β and x can be performed by maximizing m(β,x^*) over all possible vectors β being combinations of ± 1 elements. As shown in <ref>a in this case the bound (<ref>) can become tight, especially close to equilibrium, i.e., for small values of the entropy production. Relation between R-TUR and TUR.—We now prove that the conjectured R-TUR (<ref>) cannot be derived from the standard TUR (<ref>). Instead, it implies the validity of TUR. To that end, we recall the formulation of R-TUR via <ref> (which is equivalent to <ref>) and show that the standard TUR is equivalent to a weaker version of that bound, ∀_x: m(1,x) = (1∇𝒥)^2/⟨⟨𝒥⟩⟩ ≤σ̇/2 , where β=1 (vector of ones) corresponds to homogeneous perturbation of the kinetic rates. This can be proven by noting that 1∇𝒥 = ∑_e d_B_e𝒥=∑_e' x_e'∑_e d_B_e j_e'=∑_e' x_e' j_e'=𝒥. Here, in the second-last step, we used the summation response relation from Ref. <cit.> applied to <ref>: ∑_e d_B_e j_e' = j_e' . Inserting 1∇𝒥=𝒥 into <ref> we obtain the standard TUR (<ref>). 
Since <ref> is a weaker version of <ref>, R-TUR implies TUR, while TUR does not necessarily imply R-TUR. Consequently, in the generic case, R-TUR cannot be derived from TUR. Quantitative comparison with TUR.—We now show that for most Markov networks <ref> is indeed a weaker condition than <ref>. Consequently, for a given Markov network, the optimal response precision (d_ε𝒥)^2/⟨⟨𝒥⟩⟩ can be greater than the optimal current precision 𝒥^2/⟨⟨𝒥⟩⟩. As shown in Ref. <cit.>, the latter can be maximized as ℓ≡max_x𝒥^2/⟨⟨𝒥⟩⟩= m(1,x^*) ≥𝒥^2/⟨⟨𝒥⟩⟩ , where <ref> also implies ℓ≤σ̇/2; here we use the above proven relation m(1,x)=𝒥^2/⟨⟨𝒥⟩⟩. We thus obtain the chain of inequalities 𝒥^2/⟨⟨𝒥⟩⟩≤ℓ≤μ≤σ̇/2 , with the equality ℓ =μ for β^* = 1 (i.e., when the optimal β corresponds to homogeneous perturbation). This implies that the optimized R-TUR (<ref>) provides a tighter bound for the entropy production than the standard TUR. We demonstrate this in <ref>b. Indeed, for certain models the ratio ℓ/μ is close to 0, which implies that the optimized R-TUR (<ref>) is far tighter than the standard TUR. Only in less than 10% of the instances ℓ =μ, i.e., R-TUR (<ref>) is optimized for homogeneous perturbation, which makes it equivalent to the optimized TUR (<ref>). Proof for local responses.—We have proven that in general R-TUR cannot be derived from TUR. However, this is possible for some specific cases. The first example is the response of a single edge current 𝒥 = j_e to the perturbation of that edge. Applying the results of Ref. <cit.> to the rate parameterization (<ref>) (which implies ∂_B_ej_e = j_e), we arrive at 0 ≤ j_e^-1 d_B_e j_e ≤ 1. In conjunction with TUR (<ref>), this implies Eq. (<ref>): (d_B_e j_e)^2/⟨⟨ j_e ⟩⟩≤ j_e^2/⟨⟨ j_e ⟩⟩≤σ̇/2 . Proof for unicyclic networks.—<ref> can also be proven for unicyclic networks, i.e., systems in which each state i ∈{0,…,N-1} is a source of a single directed edge pointing to the tip i+1 (with i defined modulo N). In such a case, all edge currents j_e=j are equal to each other and 𝒥 =∑_e x_e j_e=x_Σ j, where x_Σ=∑_e x_e. Moreover, we have sign d_B_e𝒥 = sign x_Σ d_B_ej = sign x_Σ j , where we use <ref> for d_B_ej/j≥ 0. We can always define the orientation of 𝒢 so that x_Σ j is positive, which implies β_e^*=sign(d_B_e𝒥)=1 (see <ref>), i.e., μ is maximized for homogeneous perturbation. Then, as discussed below <ref>, μ=ℓ≤σ̇/2, which proves <ref>. Proof for the linear response regime.—Finally, R-TUR (<ref>) can be proved to hold close to equilibrium, where the currents are linear in the applied thermodynamic forces. To define this regime, let us parameterize the asymmetric transition rate parameters in <ref> as S_e=F_e+E_s(e)-E_t(e) <cit.>, where s(e) and t(e) are the source and the tip of the edge e, E_i are node parameters (which may sometimes be interpreted as state energies), and F_e are the nonconservative thermodynamic forces. In the absence of forces, the system relaxes to the equilibrium state π_i^eq =e^-E_i/Z, with Z=∑_i e^-E_i, which is independent of B_e. Close to equilibrium (i.e., for small values of nonconservative forces) the currents respond linearly to applied forces as j=𝕃F, where F=(…,F_e,…)^T and 𝕃 is the positive semidefinite Onsager matrix. Consequently, the current observables and the entropy production can be calculated as 𝒥 = x^T 𝕃F and σ̇=F^T 𝕃F. Furthermore, in this regime, the covariance matrix of edge currents is related to Onsager matrix as ℂ=2 𝕃 by virtue of the fluctuation-dissipation theorem <cit.>. 
Let us also define the matrix 𝕃_β≡ d_ε𝕃/b_max=∑_e β_e d_B_e𝕃, which describes the kinetic response of Onsager matrix. Inserting the above expressions for ℂ and σ̇ into <ref>, using β∇𝒥=∑_e β_e d_B_e𝒥=x^T 𝕃_βF, and applying optimization (<ref>), we find that R-TUR (<ref>) holds if and only if ∀_F: F^T 𝕃_β𝕃^-1𝕃_βF≤F^T 𝕃F . We recall that 𝕃^-1 here denotes the Moore-Penrose pseudoinverse of 𝕃. This is equivalent to 𝕃-𝕃_β𝕃^-1𝕃_β≽ 0 , where 𝔸≽ 0 means that the matrix 𝔸 is positive semidefinite. To prove the above inequality, we note that 𝕃-𝕃_β𝕃^-1𝕃_β is the Shur complement 𝕄/𝕃 of the matrix 𝕄 = [ 𝕃 𝕃_β; 𝕃_β 𝕃 ] . Thus, <ref> holds if 𝕄≽ 0. Since eigenvalues of 𝕄 correspond to that of 𝕃±𝕃_β <cit.>, 𝕄≽ 0 if and only if 𝕃±𝕃_β≽ 0. To prove the latter occurrence, in Appendix <ref> we show that the responses d_B_e𝕃 are positive semidefinite and obey the summation response relation ∑_e d_B_e𝕃=𝕃 . As a consequence, 𝕃±𝕃_β=∑_e (1 ±β_e) d_B_e𝕃 is positive semidefinite because 1 ±β_e ∈ [0,2] ≥ 0 and d_B_e𝕃≽ 0. This proves <ref>, and thus R-TUR (<ref>). We also note that <ref> is saturated for homogeneous perturbation β=1. Thus, in the linear response regime, the optimized R-TUR (<ref>) is equivalent to the optimized TUR (<ref>). (We note that, in contrast to the unicyclic case, this does not imply that homogeneous perturbation will always yield the optimal response precision when considering a generic rather than optimized current observable.) Using <ref>, both are optimized for the current observable corresponding to the entropy production: 𝒥=σ̇. Indeed, in the linear response regime, ⟨⟨σ̇⟩⟩=2 σ̇, and thus TUR (<ref>) becomes saturated: σ̇^2/⟨⟨σ̇⟩⟩=σ̇/2 <cit.>. Furthermore, using d_B_e𝕃≽ 0, <ref> and σ̇=F^T 𝕃F, we find that close to equilibrium the entropy production response itself, and not only its precision, is bounded by dissipation as 0 ≤ d_B_eσ̇≤σ̇ with ∑_e d_B_eσ̇ = σ̇ , |d_εσ̇| ≤ b_maxσ̇ . Finally, we note that the Onsager matrix can be represented in the basis of measurable currents (e.g., charge, heat, or chemical currents) conjugated to fundamental thermodynamic forces (e.g., temperature or chemical potential gradients) <cit.>. Such physical Onsager matrix can be expressed as 𝕃_𝕏=𝕏𝕃𝕏^T, where 𝕏 is the matrix that connects measurable currents to edge currents <cit.>. Using d_B_e𝕃≽ 0 and <ref>, its kinetic response can also be bounded as 𝕃_𝕏≽ d_B_e𝕃_𝕏≽ 0 with ∑_e d_B_e𝕃_𝕏=𝕃_𝕏 , b_max𝕃_𝕏≽ d_ε𝕃_𝕏≽ -b_max𝕃_𝕏 . This enables one to experimentally bound b_max using only measurable currents. Final remarks.—Our bound establishes a fundamental thermodynamic cost for controlling currents by modulating the system's kinetics. It may prove useful in optimizing the energetic cost needed to accurately control nanodevices such as molecular machines or nanoelectronic devices. It may also be used for thermodynamic <cit.> or model <cit.> inference. In particular, when all parameters b_e are known (e.g., when the control parameter ε affects only some known transition rates), the empirical current responses and fluctuations may provide a better estimate of the entropy production than the standard TUR. On the other hand, when the model details are unknown but the entropy production is accessible, the inequality (<ref>) can be used to bound the transition rate responses b_e. K.P., T.A. and M.E. acknowledge the financial support from, repectively, project No. 
2023/51/D/ST3/01203 funded by the National Science Centre, Poland, project ThermoElectroChem (C23/MS/18060819) from Fonds National de la Recherche-FNR, Luxembourg, project TheCirco (INTER/FNRS/20/15074473) funded by FRS-FNRS (Belgium) and FNR (Luxembourg). § NUMERICAL SIMULATIONS To calculate the response ∇𝒥 we use the results of Ref. <cit.> as follows ∇^⊺𝒥 =[∑_e x_e d_B_e'j_e ]_e'=x^⊺ℙ𝕁 , where the matrix ℙ is the projection matrix with analytical expression <cit.> and 𝕁=[∂_B_e'j_e]_{e,e'}=diag(…,j_e,…) the diagonal Jacobian matrix. Thus, to simulate <ref>, we only need the numerical values of the steady state π satisfying <ref>. To find the covariance matrix of the edge currents ℂ, we follow the method from Ref. <cit.> and define the matrix 𝕎^ϕ(q) with nondiagonal elements W^ϕ_t(± e)s(± e)(q)=W_t(± e)s(± e)exp(± q_e) and diagonal elements the same as 𝕎. The elements of the covariance matrix are defined as C_ee' = ∂/∂ q_e∂/∂ q_e'λ(q)|_q=0 , where λ is eigenvalue of the matrix 𝕎^ϕ with the largest real part. In simulations of <ref>, we used the method of finite differences to numerically calculate λ(q). § PROOF OF D_B_E𝕃≽ 0 AND EQ. (<REF>) To prove d_B_e𝕃≽ 0 we use the expression for Onsager matrix from Ref. <cit.>, 𝕃=12𝕂ℍ^-1𝕂^T with ℍ=𝕂^T 𝕋^-1ℍ . Here 𝕂 is the cycle matrix that depends only on the system topology and not on transition rates, while 𝕋=diag(…,τ_e^eq,…) is the matrix with equilibrium traffics τ_e^eq=W_+eπ^eq_s(+e)+W_-eπ^eq_s(-e) at the diagonal. We note that Ref. <cit.> actually considered the Onsager matrix describing current responses to edge affinities ℱ_e rather than forces F_e defined here; however, the Onsager matrix for both force definitions is the same, as it is related to the covariance matrix via the fluctuation-dissipation theorem <cit.> ⟨⟨σ̇⟩⟩ =2 σ̇⇔𝕃=ℂ/2. We now use the expressions d_B_eℍ^-1=-ℍ^-1 (d_B_eℍ) ℍ^-1, and the equality d_B_eτ_e'^eq=δ_ee'τ_e^eq that holds because probabilities π_i^eq do not depend on B_e; the latter yields d_B_e𝕋^-1=-𝕋_e^-1 with 𝕋_e=diag(…,0,τ_e^eq,0,…). We get d_B_e𝕃=12𝕂ℍ^-1𝕂^T 𝕋_e^-1𝕂ℍ^-1𝕂^T=2 𝕃𝕋_e^-1𝕃 . This is a positive semidefinite matrix, as this is a sandwich of the positive semidefinite diagonal matrix 𝕋_e^-1 between the matrix 𝕃 and its transpose 𝕃^T=𝕃, which concludes the proof of d_B_e𝕃≽ 0. Summing <ref> over edges, we further obtain <ref>: ∑_e d_B_e𝕃=12𝕂ℍ^-1ℍℍ^-1𝕂^T=𝕃 . Here we use ∑_e 𝕋_e^-1 = 𝕋^-1 and the relation ℍ^-1ℍℍ^-1=ℍ^-1 that holds for Moore-Penrose pseudoinverse.
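For completeness, the finite-difference evaluation of the edge-current covariance matrix via the tilted generator 𝕎^ϕ(q), as described in the numerical-simulations appendix above, can be sketched as follows; the rates and graph are assumed to be given as in the earlier snippets, and the step size h is an illustrative choice.

```python
def leading_eigenvalue(q, W_plus, W_minus, edges, N):
    """Largest-real-part eigenvalue of the tilted generator W^phi(q)."""
    Wq = np.zeros((N, N))
    for e, (s, t) in enumerate(edges):
        Wq[t, s] += W_plus[e] * np.exp(q[e])    # non-diagonal elements pick up exp(+q_e)
        Wq[s, t] += W_minus[e] * np.exp(-q[e])  # ... and exp(-q_e)
        Wq[s, s] -= W_plus[e]                   # diagonal stays that of the untilted W
        Wq[t, t] -= W_minus[e]
    return float(np.max(np.linalg.eigvals(Wq).real))

def edge_current_covariance(W_plus, W_minus, edges, N, h=1e-4):
    """C_{ee'} = d^2 lambda(q) / dq_e dq_e' at q = 0, by central finite differences."""
    E = len(edges)
    lam = lambda q: leading_eigenvalue(q, W_plus, W_minus, edges, N)
    C = np.zeros((E, E))
    for e in range(E):
        for ep in range(E):
            pp, pm, mp, mm = (np.zeros(E) for _ in range(4))
            pp[e] += h; pp[ep] += h
            pm[e] += h; pm[ep] -= h
            mp[e] -= h; mp[ep] += h
            mm[e] -= h; mm[ep] -= h
            C[e, ep] = (lam(pp) - lam(pm) - lam(mp) + lam(mm)) / (4 * h ** 2)
    return C

C = edge_current_covariance(np.exp(B + S / 2), np.exp(B - S / 2), edges, N)
```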
http://arxiv.org/abs/2406.08160v1
20240612125120
Chemistry3D: Robotic Interaction Benchmark for Chemistry Experiments
[ "Shoujie Li", "Yan Huang", "Changqing Guo", "Tong Wu", "Jiawei Zhang", "Linrui Zhang", "Wenbo Ding" ]
cs.RO
[ "cs.RO" ]
[ [ ===== § ABSTRACT The advent of simulation engines has revolutionized learning and operational efficiency for robots, offering cost-effective and swift pipelines. However, the lack of a universal simulation platform tailored for chemical scenarios impedes progress in robotic manipulation and visualization of reaction processes. Addressing this void, we present Chemistry3D, an innovative toolkit that integrates extensive chemical and robotic knowledge. Chemistry3D not only enables robots to perform chemical experiments but also provides real-time visualization of temperature, color, and pH changes during reactions. Built on the NVIDIA Omniverse platform, Chemistry3D offers interfaces for robot operation, visual inspection, and liquid flow control, facilitating the simulation of special objects such as liquids and transparent entities. Leveraging this toolkit, we have devised RL tasks, object detection, and robot operation scenarios. Additionally, to discern disparities between the rendering engine and the real world, we conducted transparent object detection experiments using Sim2Real, validating the toolkit's exceptional simulation performance. The source code is available at https://github.com/huangyan28/Chemistry3Dhttps://github.com/huangyan28/Chemistry3D, and a related tutorial can be found at https://www.omni-chemistry.comhttps://www.omni-chemistry.com. § INTRODUCTION Chemistry is a constantly evolving and experimental discipline<cit.>. The birth of a new substance or material often requires thousands of experiments. Therefore, chemical experiments are unfriendly to researchers. The tedious and repetitive nature of this work not only imposes immense labor intensity on researchers and chemical engineers (Some chemical engineers often work up to 50 hours a week in the US, says the Bureau of Labor Statistics<cit.>) but also poses threats to their physical health due to exposure to harmful chemicals. In addition, many experiments consume large amounts of resources, with global consumption of chemicals calculated to exceed €5.77 trillion in 2022 alone<cit.>. In today's rapidly advancing era of embodied intelligence technology, proposing a 3D simulator that includes robot operations and chemical reaction processes is imperative, which not only improves the efficiency of experiments and reduces the cost of experiments but also liberates human beings from heavy scientific experimental tasks. Chemical experiments contain many chemical manipulation and visual detection tasks, which are dangerous if the robot is trained directly in a real environment. Although considerable research has been conducted into robot simulation systems<cit.>, a dedicated chemical 3D simulation system for robots has yet to be proposed. Current research on chemical robots mainly focuses on algorithmic aspects, such as organic synthesis methods<cit.> and reinforcement learning(RL)<cit.> to improve yield. This is because chemistry and robotics are interdisciplinary fields, and designing a chemical 3D simulation system tailored for robots requires addressing numerous challenges, including but not limited to: (1) Immature rendering engines for liquids and transparent objects: Chemical experiments involve many liquids and transparent objects, and achieving efficient and realistic rendering engines is difficult<cit.>. 
(2) Vast chemical reaction databases: Chemistry is a complex discipline covering various reaction types such as organic, inorganic, liquid, solid, and gas; thus, implementing chemical simulation requires extensive database support<cit.>. (3) Complex calculation methods and parameters: Real chemical reactions involve multiple parameters such as heat, temperature, pH, color, etc.; visualizing these parameters requires a deep understanding of chemistry and complex calculation methods<cit.>. Therefore, to realize a chemical 3D simulation system tailored for robots, it is necessary to span multiple disciplines, such as chemistry, computer graphics, and robotics, and address the various technical challenges mentioned above. The advancements in 3D rendering technology and the development of large language models (LLM) have presented us with new opportunities. Omniverse<cit.>, introduced by NVIDIA, is an open virtual collaboration and simulation platform encompassing a wide array of 3D modeling tools, renderers, animation tools, and physics engines. It enables robots to create more realistic and interactive virtual environments<cit.>. Therefore, leveraging NVIDIA's Omniverse simulator, we propose a high-performance simulating toolkit for chemical experiments named Chemistry3D, as shown in Fig.<ref>. This toolkit allows robots to conduct organic, inorganic, and various other experiments within 3D environments. Furthermore, to enhance the versatility of the simulator, we have opened convenient data interfaces, enabling operators to add unknown chemical reactions to the database effortlessly. The contributions of this paper are as follows: (1) Novel 3D scene for chemical experiments: We introduce a pioneering 3D scene tailored for chemical experiments. This scene not only features a plethora of chemical containers and robots but also facilitates functions such as transparent object simulation and fluid simulation. (2) Establishment of a comprehensive chemical dataset: We have curated a chemical dataset comprising over 1,000 inorganic reactions and 100,000 organic reactions. This dataset not only yields intermediate products for chemical simulation but also provides real-time feedback on changes in temperature, color, and pH during the experimental process. (3) Performance validation through various experiments: To validate the performance of our simulation, we conducted experiments including object grasping training based on RL, chemical experiment operations guided by LLM, and transparent object detection based on Sim2Real techniques. These experiments demonstrate the significant potential applications of our scene in the field of machine learning. § RELATED WORK In addressing the challenges of chemical reaction visualization and operation in the simulation environment, a variety of specialized tools and research efforts have been established. We conducted a comparative analysis of Chemistry3D and other tools in the domains of robotics and chemistry, as is summarized in Table <ref>. Notably, benchmarking robotic manipulation aspects of chemical experiments in simulation has not been previously addressed. Existing efforts typically focus on chemical reaction generation or robotic operations within specific scenarios. Specifically, our proposed work focuses more deeply on robot manipulation and embodied intelligence. Traditionally, chemical reaction simulators primarily emphasize the molecular level. 
Tools from Interactive Chemistry<cit.> simulate molecular collisions, while techniques based on Computational Fluid Dynamics (CFD)<cit.> could be used to simulate gas reactions. Combining these with data-driven techniques, MoleculeNet<cit.> offers a large-scale benchmarking platform for molecular machine learning, proposing methods for feature characterization. Additionally, numerous databases in organic chemistry, such as ORD<cit.> and ORDerly<cit.>, focus on structured reaction characterization. ChemSpider<cit.> serves as a chemical substance information retrieval tool. In inorganic chemistry, Chenaxon<cit.> and RXN for Chemistry<cit.> are used for reaction prediction and synthesis pathway optimization. ChemReaX<cit.> provides examples based on a small sample database and includes information on thermochemistry and reaction intermediates. Few simulations integrate chemical experiments with robotics. For operations, existing works tend to focus on specific tasks. For instance, Robot Air Hockey<cit.> is employed for Sim2Real applications in playing air hockey, while Panda MuJoCo Gym<cit.> benchmarks RL tasks such as pushing, sliding, and object manipulation. A Unity-based Simulator<cit.> creates game-like scenarios for experimental manipulation for educational purposes. For perception, CABD<cit.> offers benchmark datasets for the image recognition of chemical apparatus. ChemGymRL<cit.> provides detailed information on the intermediates of chemical reactions for RL. For system architecture design, different pipelines are developed for enabling autonomous laboratories, utilizing machine learning (ML)<cit.> and natural language processing (NLP)<cit.> methods. Furthermore, ARChemist<cit.> designed several modular managers for processing the chemical recipe and interacting with the robots. § CHEMISTRY3D Chemistry3D encompasses three main aspects: chemistry simulation, virtual chemical laboratory environments, and robotic manipulation. Compared with previous chemistry benchmarks<cit.>, Chemistry3D supports realistic scene simulations for visual inputs and enables physical interactions between robots and objects. In terms of chemistry simulation, it aims to provide accurate and detailed models of chemical reactions. Virtual chemical laboratory environments offer scenarios that closely mimic real chemical experiments, enhancing the authenticity and applicability of simulations. Regarding robotic manipulation, Chemistry3D facilitates the use of robots in performing and optimizing chemical reactions, aiming to seamlessly integrate robotic manipulation with chemical processes. §.§ Chemistry Simulation Inorganic Reactions: The simulation of inorganic reactions is designed based on a database encompassing both reaction and chemical substance information. The database includes data on 65 different chemical substances, each characterized by its color, enthalpy values, and physical state. Additionally, it contains information on 65 fundamental reactions, specifying the reactants, products, and their stoichiometric ratios.
[Figure: The framework of the inorganic reaction simulator. The simulator processes the input reactant components as a pipeline and outputs the product information on component, representation, and mid-state.]
The simulator accepts input as a dictionary, with reactant names as keys and numbers of moles as values. The simulator is capable of performing iterative reactions, which allows for generating over 1000 possible reactions through various combinations.
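To make the pipeline described next more concrete, the following minimal, self-contained sketch implements the core component update it relies on: candidate reactions are tried in database order, the limiting reagent fixes the reaction quantity (see the Reaction Quantity Calculation step below), and the component dictionary is updated iteratively until no further reaction applies. All names, the tolerance, and the single database entry are illustrative assumptions, not Chemistry3D's actual interface.

```python
EPS = 1e-9  # treat amounts below this as exhausted (illustrative tolerance)

def react(components, database):
    """components: dict of species name -> moles; database: ordered list of
    (reactant_coeffs, product_coeffs) dictionaries. Returns the updated components."""
    progressed = True
    while progressed:
        progressed = False
        for reactants, products in database:
            if all(components.get(sp, 0.0) > EPS for sp in reactants):
                # The limiting reagent sets the reaction quantity N (moles of reaction equation).
                n = min(components[sp] / coeff for sp, coeff in reactants.items())
                for sp, coeff in reactants.items():
                    components[sp] -= n * coeff
                for sp, coeff in products.items():
                    components[sp] = components.get(sp, 0.0) + n * coeff
                progressed = True
    return components

# Example: permanganate/iron(II) redox in acidic solution (balanced ionic equation).
db = [({"MnO4-": 1, "Fe2+": 5, "H+": 8}, {"Mn2+": 1, "Fe3+": 5, "H2O": 4})]
print(react({"MnO4-": 0.002, "Fe2+": 0.010, "H+": 0.030, "K+": 0.002, "Cl-": 0.030}, db))
```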
For the output, the component interface outputs a dictionary of the same format, detailing the remaining reactants and reaction products. Furthermore, the representation interface provides a comprehensive output dictionary, including data on the color of the reaction mixture, enthalpy changes, pH levels, temperature changes, and the physical state of the resultant substances. Precisely, the core framework of the inorganic reaction simulator is shown in Fig. <ref>. The detailed procedure within this simulator is discussed as follows. * Charge Balance Identification: The initial step in the simulation process involves verifying that the input reactants satisfy charge conservation<cit.>. This ensures that the total charge of the reactants is balanced, which is crucial for confirming the set of reactants is realistic. * Database Indexing: Following charge determination, the simulator verifies whether the identified reactants are present in the database by checking whether part of the current components can undergo a reaction. In this way, the simulator identifies the specific reactions by breaking down complex inorganic reactions into basic reactions<cit.>. The complex reaction is viewed as a sequence of these basic reactions executed in a specific order, determined by the sequential record of reactions in the database. For instance, acid-base reactions are premier to redox reactions. * Reaction Extraction: Once the inorganic reactions are identified by indexing the database, the simulator extracts the relevant reactants that are ready to react. The original input reactants is divided into two parts: the reacting one and the spectating one. By focusing on the specific ions involved, the simulator ensures that only the necessary reactions are considered in this reaction cycle, thereby streamlining the reaction prediction process. * Reaction Quantity Calculation: After reaction extraction, the simulator calculates proportional amounts of reactants based on the stoichiometric coefficients. The simulator identifies which ion is completely consumed first, thereby establishing the limiting reagent. This allows the calculation of reaction quantity, i.e., how many moles of the "reaction equation" are involved in the process. Based on the input reagents, the simulator subtracts the amount of each reactant used and adds the generated products, yielding the final composition of substances. The stoichiometric calculations are vital for accurate reaction simulations and yield predictions. * Mid-State Calculation The computation of intermediate states is based on the rate equation as Eqn. <ref>. This process involves considering the initial and final compositions of substances involved in the reaction to determine their temporal evolution<cit.>. Specifically, by transforming the reaction order into unity, the simplified rate equation reveals an exponential decay relationship between substance compositions and time, resembling a negative exponential function with an offset. Consequently, by defining suitable reaction rate constants and utilizing the initial and final compositions of substances, temporal intermediate data can be obtained as a list. rate = k · c(A)^m_a· c(B)^m_b * Concentration Calculation: The simulation proceeds by adding the total volume V of the mixture, then calculating the concentration c as Eqn. <ref>. This calculation forms the basis for subsequent concentration-dependent computations. 
c = n/V * Enthalpy Change Calculation: The simulator then computes the enthalpy change based on reaction quantity as Eqn. <ref>. This step involves determining the heat absorbed or released during the reaction, which is essential for understanding the thermodynamics of the process. The total enthalpy change Q is calculated by multiplying the reaction quantity N to the enthalpy change per equation Δ H, which is recorded in the database. Q = N ·Δ H * Temperature Change Calculation: Utilizing the specific heat capacity C of the solvent, the simulator calculates the temperature change Δ T resulting from the reaction. The change is determined by Eqn. <ref>, where ρ is the density and V is the volume of the solution. Temperature changes can influence reaction rates and equilibria, making this step important for dynamic analysis. Δ T = Q/C ·ρ· V * RGBA Color Calculation: The simulator assesses the color change in the reaction mixture by considering several factors: * Transparency Calculation: It first calculates the transparency of the solution based on the concentration of the reactants and products, keeping the RGB values constant. The transparency value is determined through the exponential model derived by the dilution of solution<cit.>, as Eqn. <ref>. c is the concentration of the reagent and K is a constant representing the standard value for transparency. a = 1 - 10^-K · c * State Priority Consideration: The simulator assigns priority to different physical states of the substances. If a substance exists in the solid state, it contributes to opacity, resulting in a turbid liquid. For substances in the liquid state, color mixing occurs based on RGBA values. However, the color of gaseous substances is not factored into the overall color calculation. * Color Mixing: A negative mixing model is applied to determine the resultant color of the mixture of solution without solid substance<cit.>. This model helps in accurately simulating the combined color effects of multiple reactants and products. * Spectrum to RGB Transformation: The spectrum information is included in this simulator, especially UV/Vis spectrum. The visible light spectrum of a substance determines its color appearance in a colorless transparent solution<cit.>. Therefore, the UV/Vis spectrum can be converted into RGB color values<cit.> for visual display in simulation. * pH Calculation: Finally, the simulator calculates the pH of the solution through several sub-steps: * Ionization Constant Interpolation: Initially, a table is established to record water's ionization constant (K_w) in the liquid state at standard atmospheric pressure, ranging from 0 to 100 degrees Celsius. This table serves as a reference for interpolating K_w values at any given temperature, facilitating precise pH calculations. * Electrolyte Ionization: The simulator ignores the ionization of weak electrolytes and focuses on calculating the concentration of hydrogen ions (H^+) or hydroxide ions (OH^-) from strong electrolytes in water<cit.>. This step ensures that the major contributors to the pH are considered, as Eqn. <ref>. K_w = c(H^+) · c(OH^-) * pH Determination: Based on the ionization constant and the concentrations of hydrogen or hydroxide ions, the simulator calculates the pH of the solution. Accurate pH determination is critical for understanding the acidity or basicity of the reaction environment. 
pH = -log c(H^+) Organic Reactions: The simulation of organic reactions integrates data from RXN for Chemistry<cit.> for reaction information and ChemSpider<cit.> for chemical substance information. Similar to the inorganic simulator, this simulator also has a component and representation interface as output. The system accepts reactants represented by SMILES<cit.> strings and outputs the corresponding products for reaction product prediction. It’s also capable of predicting reaction yields, therefore determines the component output for the specific reaction. For the representation, the simulator employs web scraping to query substance information; given a SMILES string, it retrieves a dictionary of properties and values. For more comprehensive data, users can query additional information using the CAS number<cit.> of the substance. Simulator Interface: To integrate the chemical aspects with the operational aspects seamlessly, the simulator is embedded into a container class, which features three main methods: initialization, updating, and information retrieval. The initialization method distinguishes between organic and inorganic reactions and sets the chemical components by name, amount, and volume. The updating method simulates sampling or mixing operations and can automatically conduct reactions. The output of this method represents the intermediate state of the container's contents, calculated through rate equations with an adjustable time step to suit simulation tasks in Omniverse. The information retrieval method allows access to component or representation information, enabling direct queries about concentration, color, and other properties for any container. This approach binds chemical information to simulated reagent bottles, facilitating clear demonstrations. It also aligns chemical reactions with operational actions, making the simulation intuitive. This integrated simulating method allows for accurate predictions and detailed representation calculations. It's essential for further studies in chemistry, including analysis of intermediate states and RL. §.§ Chemical Environment Chemistry3D offers an advanced and meticulously designed environment for simulating chemical experiments, as shown in Fig. <ref>. This environment, built upon the NVIDIA Omniverse platform, integrates a variety of features essential for both chemical and robotic research. Rich Chemical Assets: Chemistry3D is endowed with an extensive series of chemical containers and instruments, meticulously designed to facilitate a diverse range of chemical reactions. These encompass both organic and inorganic reactions, as well as liquid-liquid and liquid-solid interactions. This vast collection of chemical assets allows for the simulation of various chemical experiments, offering researchers and educators a versatile platform to explore different reaction dynamics. Robotic Assets: The environment is equipped with numerous robotic arms and robotic grippers, enhancing the potential for robotic simulations. These robotic systems are capable of performing precise tasks such as grasping, shaking, pouring, stirring, placing, and moving chemical containers. This capability significantly expands the possibilities for robotic experimentation and automation within the chemical laboratory setting. Fluid and Rigid Body Simulation: Chemistry3D excels in simulating both fluid and rigid body interactions with high fidelity. 
This capability is particularly beneficial for visualizing intricate chemical processes involved in both inorganic and organic experiments. For instance, the platform can realistically simulate the dissolution of solid compounds in a liquid. Additionally, it can accurately depict the merging of two liquids and the resultant color changes, providing a vivid representation of reaction progress and intermediate states. High-Fidelity Rendering: The platform supports highly realistic rendering and light simulation, which is essential for accurately representing chemical experiments involving transparent materials. In many chemical lab settings, instruments such as glass beakers and flasks are transparent, posing challenges for vision-based tasks. Chemistry3D excels in simulating these transparent objects, providing detailed visual representations of chemical reactions, including changes in color and clarity. This enhances the effectiveness of visual inspections and analyses, which is crucial for monitoring and understanding chemical processes. Robot Operating System: Chemistry3D inherits robust support from Isaac-Sim, including integration with ROS and ROS2. This compatibility allows for a diverse range of robotic development and experimentation, enabling users to leverage advanced robotic operating systems for controlling and simulating robotic behavior within the chemical environment. This integration is crucial for developing and testing sophisticated robotic applications and workflows. §.§ Robotic Manipulation Chemistry3D aims to integrate robotic operations with the simulation of chemical experiments. Chemistry3D is built on the Nvidia Omniverse and PhysX 5 platforms, with robotic manipulations implemented using IsaacSim. This platform enables realistic simulations of rigid bodies, fluids, soft bodies, and other materials, thereby enhancing the overall realism of Chemistry3D through accurate physics simulation and light rendering effects. Most chemical manipulations involve motion states such as grasping, moving, and pouring. Consequently, Chemistry3D proposes three distinct tasks, chemical experimental manipulation, embodied intelligence manipulation, and RL tasks, to advance the development of robotic operations within Chemistry3D. The descriptions of these tasks are as follows: Chemistry manipulation: Chemical experimental manipulations frequently involve the desired motions of target objects, e.g., pouring. Consequently, the tasks in Chemistry3D encompass a variety of chemical experimental operations. We have selected four common chemical experimental operations including picking, placing, pouring, stirring, and shaking. In our experiments, we developed simulation tasks within Chemistry3D to demonstrate these operations. Embodied intelligence: Embodied intelligence involves the interaction of semantic information between agents and humans, enabling robots to understand and perform desired chemical operations. This capability is vital for the automation of chemical processes. We designed specific chemical experiment scenarios to demonstrate that the development of embodied intelligence is possible in Chemistry3D. These scenarios showcase the potential for robots to autonomously observe the environment and complete specified tasks within the platform. RL task: RL has recently achieved success across a wide range of continuous control problems. 
Chemical experimental manipulations are often continuous processes, making RL an essential tool for integrating chemistry experiments with robotic simulation. IsaacGym is a well-regarded simulation environment specifically designed to support robot learning. The Omniverse Isaac Gym Reinforcement Learning (OmniIsaacGymEnvs) for Isaac Sim repository provides RL examples compatible with IsaacSim and has become a widely adopted environment for RL research. Utilizing this repository, we implemented a reward function setup similar to the provided examples and successfully achieved the picking of chemical containers. The RL task demonstrates that Chemistry3D can support RL research in robotic manipulation. § EXPERIMENTS In this section, we investigate the capabilities of chemical reaction simulation and robotic manipulation within the Chemistry3D. All experiments are conducted within a simulated chemistry environment. As shown in Fig. <ref>, we have carried out four experiments based on Chemistry3D, covering chemical manipulation, visual Sim2Real, embodied intelligence, and RL. Additionally, we have integrated the chemical simulator to perform both organic and inorganic experiments. §.§ Chemical Experiments In chemical experiments, we focus on inorganic and organic reactions, integrating robotic operations within IsaacSim to enhance the experimental process. (See Supplementary Material for more details.) Inorganic Experiment: We selected redox reactions due to their notable color and state changes. Experiments involved potassium permanganate (KMnO4) with ferrous chloride (FeCl2), and hydrochloric acid (HCl) with iron(II) oxide (FeO). The center of mass of reactants determines contact points, triggering color and state transformations. Chemistry3D outputs detailed reaction data such as temperature, enthalpy change, and pH at each time step. Organic Experiment: Focused on the simulation of mid-state products rather than color changes, we synchronized reaction steps with IsaacSim simulation. Using the reaction between Br2 and C20H12 as an example, mid-state products are generated upon reactant contact. This enables real-time optimization of final products and robotic manipulations. §.§ Chemical Manipulation As shown in Fig. <ref>(a), chemical manipulation often involves tasks such as picking and placing. Our experiments demonstrated the effective deployment of robotic picking and pouring operations in chemical processes. We designed modular operations, including picking, pouring, shaking, stirring, and placing, managed by a Controller Manager. This manager ensures the sequential and integrated execution of these operations. We successfully combined picking, pouring, and placing operations, as illustrated in our experimental results. (See Supplementary Material for more details.) §.§ Visual Sim2Real To demonstrate Omniverse's capability in implementing an effective Sim2Real system, we selected complex transparent chemical containers for our experiments, as shown in Fig. <ref>(b). We introduced 3D-scanned transparent models that are common in chemical labs. These models were used to test Sim2Real transfer for semantic segmentation and object detection tasks. We presented two simulation datasets and integrated Segmentation Models PyTorch<cit.> for algorithm comparisons.(See Supplementary Material for more details.) High-fidelity visualization in Omniverse enabled effective training of encoder-decoder networks, maintaining robust segmentation in real-world scenarios. 
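As a rough illustration, one such encoder-decoder combination can be instantiated and trained with the segmentation_models_pytorch package along the following lines; the encoder variant, class count, loss, learning rate, and tensor shapes are illustrative assumptions, not the exact configuration behind the reported results.

```python
import torch
import segmentation_models_pytorch as smp

# DeepLabV3 decoder on an EfficientNet encoder pretrained on ImageNet.
model = smp.DeepLabV3(
    encoder_name="efficientnet-b0",   # assumed variant; the text does not state which one
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,                        # binary mask: transparent object vs. background
)

loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.rand(4, 3, 512, 512)                    # stand-in for rendered images
masks = torch.randint(0, 2, (4, 1, 512, 512)).float()  # stand-in for ground-truth masks

optimizer.zero_grad()
loss = loss_fn(model(images), masks)
loss.backward()
optimizer.step()
```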
We selected several mainstream encoder-decoder combinations for quantitative experiments. As illustrated in Table <ref>, quantitative comparisons using Intersection over Union (IoU), Pixel Accuracy (PA), F1-Score, and F2-Score showed that the EfficientNet-DeepLabV3 combination outperformed others, achieving top scores across all metrics. Additionally, we further validated the Sim2Real ability by performing an object detection task. For this task, we selected YOLO<cit.> as our algorithm. The model trained within the simulation environment was evaluated for object detection in both simulated and real-world environments. The results are consistent with results in the semantic segmentation task, confirming that Chemistry3D effectively supports visual Sim2Real. §.§ Embodied Intelligence To evaluate Chemistry3D in the context of Embodied AI, we designed a scene replicating an inorganic chemistry experiment as shown in Fig. <ref>(c). This setup included containers with KMnO4, FeCl2, and empty beakers. We developed agents for robotic manipulations, allowing the robot to observe, predict potential chemistry reactions, and execute tasks via natural language commands. The robot acquires environmental data by accessing the positional and labeling information of objects. Based on its chemical knowledge base from Chemistry3D, the robot can predict potential chemical reactions. Upon receiving task directives from a human operator, it utilizes a Large Language Model (LLM) to generate and strategically plan the necessary operations, thereby ensuring the stability and accuracy of experimental procedures. (See Supplementary Material for more details.) §.§ Reinforcement Learning OmniIsaacGymEnvs facilitates complex RL tasks in Chemistry3D. We demonstrated the capability of RL research by setting the RL task of picking as shown in Fig. <ref>(d). Using Proximal Policy Optimization (PPO)<cit.> as the algorithm, The experiment involved 2048 environments, 3500 epochs, and a learning rate of 5 × 10^-4, applied consistently across multiple experiments. We plotted the reward and success rate curves, showing robust outcomes. The results confirmed that robotic arms could successfully grasp chemical containers within Chemistry3D. (See Supplementary Material for more details.) § CONCLUSION We presented a 3D robot simulation toolkit based on NVIDIA's Omniverse platform for chemical experiments. This system encompasses various chemical containers and robotic models, supporting transparent objects and fluid simulations. We have established an extensive chemical dataset that provides real-time feedback on various parameter changes during experiments. Through RL tasks, large language modeling, and Sim2Real experiments, we have demonstrated the significant potential of this system in machine learning applications. This system enhances the visualization and interactivity of chemical experiments and offers a new tool for interdisciplinary research in chemistry and robotics, promising to advance related fields. § SUPPLEMENTARY MATERIAL §.§ Mathematical Principle of Inorganic Reaction Simulator This section reveals the mathematical principles in the inorganic reaction simulator. For clear mathematical representation, first we define a single reagent with its amount as base unit 𝐫. The input reactant is defined as a set 𝐑, which satisfies: 𝐑 = {𝐫_1,...,𝐫_𝐢}. An ionic chemical reaction is defined as Eqn.<ref>, where 𝐑 represents the set of reactants and 𝐏 represents the set of products. 
Noted that if the reaction solvent is water, the reagent should be expressed as its ionized results. In this way, the reaction 𝐂 contains component and stoichiometric information of a chemical reaction. 𝐂=[𝐑,𝐏] The complex reaction 𝐂_𝐤 is viewed as a sequence of these basic reactions 𝐂_𝐤_1,...,𝐂_𝐤_𝐧, executed in a specific order, determined by the sequential record of reactions in the database. The composition of reactions should obey the following equation: 𝐂_𝐤 = [⋃_𝐢=1^𝐧𝐂_𝐤_𝐢[0],⋃_𝐢=1^𝐧𝐂_𝐤_𝐢[1]] ≡ [⋃_𝐢=1^𝐧𝐑_𝐤_𝐢,⋃_𝐢=1^𝐧𝐏_𝐤_𝐢] The database of ionic chemical reactions is constructed as a set 𝐃_𝐂 including numerous reactions 𝐂_𝐢, which satisfies: 𝐃 = {𝐂_1,...,𝐂_𝐢} After the simulator accepts the input reactant, it checks whether part of the current components can undergo a reaction 𝐂_𝐦, i.e., if they are present in the database, which satisfies: 𝐂_𝐦[0] ≡ 𝐑_𝐦 ⊆ 𝐑_𝐢𝐧𝐩𝐮𝐭 Therefore, the data structure used for database indexing must process entries sequentially from the beginning. This step is essential to ensure that all reactants and products are recognized and that their properties are well-defined within the system. To start reacting, the original input reactants 𝐑 is divided into two parts: the reacting one 𝐑̂ and the spectating one 𝐑̃. These two parts can be expressed as: 𝐑_𝐢𝐧𝐩𝐮𝐭 = 𝐑̂∪𝐑̃,𝐑̂∩𝐑̃ = ∅ After extracting the relevant reactions, the simulator calculates proportional amounts of reactants 𝐑̅ based on the stoichiometric coefficients. The reaction quantity N is determined as the minimum stoichiometric coefficients in proportional amounts of reactants. The calculation should satisfy: 𝐑̂𝐬𝐭𝐨𝐢𝐜𝐡𝐨𝐦𝐞𝐭𝐫𝐲⟶𝐑̅, 𝐍 = 𝐦𝐢𝐧{𝐑̅} Finally, the simulator subtracts the consumed amount of each reactant from input reagents and adds the generated products, yielding the final composition of substances 𝐑_𝐧𝐞𝐰. The updated component set is calculated as follows: 𝐑_𝐧𝐞𝐰 = 𝐑_𝐢𝐧𝐩𝐮𝐭 - 𝐍 (𝐂_𝐦[0] - 𝐂_𝐦[1]) §.§ Inorganic Reaction Database The database of the inorganic reaction simulator is primarily divided into two parts: inorganic reaction information and chemical substance information. The inorganic reaction information refers to the data for ionic reactions, defined by their reactants, products, and stoichiometric coefficients. Specifically, the sequence of reactions in the inorganic reaction information database should correspond to the reaction order based on the Gibbs free energy of the reactions<cit.>. The database includes 65 fundamental reactions<cit.>, categorized into four major types: Acid-Base, Double Displacement, Redox, and Complexation, as shown in Table 1. The chemical substance information encompasses the symbol representation, color, enthalpy change, and state of the substances. The database includes 69 types of chemicals and ions, covering all substances involved in the aforementioned reactions, as listed in Table 2. Color is represented in RGB format, serving as the reference color for display along with the transparency value<cit.>. Enthalpy change is given as the standard molar enthalpy of formation of the substances. States of substances include solid (s), liquid (l), gas (g), and solution (aq). §.§ Spectrum to Color Conversion The simulator supports the conversion of visible spectra to RGB colors for substance characterization. The visible light spectrum of a substance determines its color appearance in a colorless transparent solution<cit.>. 
If the emission spectrum of a substance is provided, the simulator can calculate and convert the spectral data to the CIE color space and subsequently to the RGB color space. For example, the emission spectrum of a blackbody radiator at 1000K can be illustrated as Fig. <ref>. The yielding RGB color is calculates as , which could be verified in other ways<cit.>. §.§ Chemical Experiment In our chemical experiments, we focused on integrating robotic operations within the IsaacSim environment to enhance the visualization and analysis of both inorganic and organic reactions as shown in Fig. <ref>. Inorganic Experiment The inorganic experiment selected the reactions of potassium permanganate (KMnO4) with ferrous chloride (FeCl2) and hydrochloric acid (HCl) with iron(II) oxide (FeO) as the subjects of investigation. In the simulation environment, a robotic arm was configured to perform tasks such as gripping and pouring. The robotic arm identified the location and labels of the objects within the scene and executed the corresponding chemical operations. During the inorganic experiment, Chemistry3D monitored the center of mass of the reactants to determine whether the reaction had occurred. It also provided the output of intermediate products after the reaction. Additionally, Chemistry3D supported the acquisition of data on the temperature, enthalpy change, and pH values of the reactants. In Chemistry3D, significant color changes were observed in the reaction between potassium permanganate and ferrous chloride, while the reaction between hydrochloric acid and iron(II) oxide exhibited the phenomenon of solid dissolution. These color changes and state information could be visualized in Chemistry3D. These two experiments demonstrated that Chemistry3D could realize and visualize reactions between solids and liquids as well as between liquids. Organic Experiment In the organic experiment, most reactions do not exhibit significant color changes. The intermediate products in organic reactions are crucial for research. In Chemistry3D, we chose to simulate the formation of intermediate products in organic reactions. We adopted the same experimental setup as used in inorganic reactions to study the reaction between bromine (Br2) and pyrene (C20H12). During the simulation, the chemical simulation was synchronized with the robotic arm simulation, with detailed data on reactants and products at each step output to the terminal. Through this information, we can further optimize our reaction parameters and improve robotic manipulations to enhance the yield of the final product. §.§ Chemical Operation Chemical manipulation typically involves specified operations on chemical containers, such as gripping, pouring, stirring, shaking, and placing. In Chemistry3D, we have modularly designed these chemical operations to enable the completion of specific, comprehensive chemical tasks. As shown in Fig. <ref>, We developed a class called Controller Manager to manage these operations. In our setup, each chemical operation is associated with a corresponding controller (Controller). For each instantiated Controller, the Controller Manager ensures that individual chemical operations are performed sequentially and in conjunction. In Sections <ref>, we have integrated the tasks of gripping, pouring, and placing through the Controller Manager, achieving a modular design for robotic manipulations. 
To illustrate, consider the process of placing an object, which can be decomposed into five distinct stages: vertical ascent, horizontal translation, vertical descent, gripper release, and a subsequent vertical ascent. This sequence assumes that the robotic arm maintains a secure grip on the object throughout the entire placing action. As outlined in Algorithm <ref>, the entire motion process is segmented into multiple phases, with distinct time steps assigned to each phase to achieve adjustable motion speeds at different stages. Notably, to ensure stable operation of the robotic arm during the placing or grasping processes, intermediate points are incorporated within the vertical ascent and descent phases. This approach is designed to prevent deviation from the predetermined trajectory, thereby minimizing the risk of collision. After defining a specific controller, we introduce the Controller Manager class, which is responsible for managing all the controllers. The method is used to add instantiated controllers, while the method specifies the execution order of tasks. Finally, the method is employed to run all the controllers, thereby driving the robotic arm. First, the method is defined in Algorithm <ref>. This method is responsible for adding a new controller instance to the list of controllers managed by the . The method takes two arguments: , which is a string representing the name of the controller, and , which is the instance of the controller to be added. The method first checks if a controller with the same name already exists in the list of controllers. If it does, it adds an index to the name to ensure uniqueness. Once a unique name is ensured, the method appends a dictionary containing the and to the list. Next, we have the method, which is shown in Algorithm <ref>. This method is used to add a new task to the list of tasks managed by the . The method takes two arguments: , which is a string representing the type of the controller for the task, and , which is a dictionary containing the template for the task parameters. This ensures that the robotic arm performs the tasks in the correct order. Finally, the method is detailed in Algorithm <ref>. This method is responsible for executing the current task using the current controller. It takes one argument: , which is a dictionary containing the current observations from the simulation. The method first checks if all tasks are completed. If they are, it prints a message and pauses the simulation. If not, it retrieves the current task and controller. It then ensures that the controller exists. If the controller does not exist, it prints an error message and returns. Next, the method generates task parameters using the . It then calls the method of the controller with the generated task parameters to get the actions. If the controller type is 'pour', the method performs additional checks and updates related to the pouring process, such as checking if the reaction has been activated, and updating the simulation container color. Finally, the method applies the actions to the robot and checks if the controller is done. If the controller is done, it moves to the next task by incrementing the Utilizing the Controller and ControllerManager frameworks, we have achieved a modular approach to managing robotic arm movements and control operations. This design enhances the maintainability and extensibility of the codebase, facilitating efficient and flexible transitions between different controllers and task allocations. 
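Because the method names are not reproduced in the description above, the skeleton below is only an illustration of the bookkeeping just described — unique controller names, an ordered task list, and sequential execution with a per-task completion check. All identifiers and the assumed controller interface (forward()/is_done()) are hypothetical rather than the toolkit's actual code.

```python
class ControllerManager:
    """Illustrative skeleton of the controller/task bookkeeping described above."""

    def __init__(self, robot):
        self.robot = robot
        self.controllers = []     # list of {"name": str, "controller": object}
        self.tasks = []           # list of {"type": str, "params": dict}
        self.current = 0

    def add_controller(self, name, controller):
        # Ensure unique names by appending an index if the name already exists.
        if any(c["name"] == name for c in self.controllers):
            name = f"{name}_{len(self.controllers)}"
        self.controllers.append({"name": name, "controller": controller})

    def add_task(self, controller_type, param_template):
        # Tasks are executed in the order in which they are added.
        self.tasks.append({"type": controller_type, "params": dict(param_template)})

    def execute(self, observations):
        if self.current >= len(self.tasks):
            print("All tasks completed.")
            return
        task = self.tasks[self.current]
        match = [c for c in self.controllers if c["name"].startswith(task["type"])]
        if not match:
            print(f"No controller found for task type '{task['type']}'.")
            return
        controller = match[0]["controller"]
        actions = controller.forward(observations, **task["params"])
        # (Pour-specific checks such as reaction activation and container-color updates are omitted.)
        self.robot.apply_action(actions)
        if controller.is_done():
            self.current += 1     # move on to the next task
```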
§.§ Visual Sim2Real Within the Omniverse platform, we selected transparent objects commonly found in chemical laboratories and introduced 3D-scanned transparent models. To verify the efficacy of transparent object simulation for direct Sim2Real transfer, we constructed two simulation datasets of transparent objects as shown in Fig. <ref> focused on Semantic Segmentation and Target Detection tasks. In our dataset production process, we utilized seven real transparent objects scanned in 3D and applied glass material using Omniverse. As illustrated in the Fig. <ref> , we incorporated a variety of complex colors, outdoor scenes, indoor scenes, and simple colors, resulting in a total of over 1,500 backgrounds. This approach was designed to ensure a sufficiently complex and diverse set of backgrounds. To enhance the dataset's generalizability, we dynamically adjusted the number of light sources, light intensity, and color temperature, aiming to maximize the number of reflective spots on the transparent objects during illumination. Specifically, we used three light sources to illuminate the objects, intentionally creating as many reflective spots as possible on their surfaces. Additionally, we applied randomized settings to the position and angle of the transparent objects, placing single or multiple objects in each image to enhance complexity. The Fig. <ref> presents a sample of the generated dataset. The performance of Sim2Real is highly dependent on the quality of the simulated environment's visualization. Utilizing the high-fidelity visualization capabilities provided by Omniverse, we aim to demonstrate the Sim2Real capabilities for transparent objects achievable with Chemistry3D. We trained several mainstream encoder-decoder networks within the simulation environment and deployed them directly into real-world scenes to compare their semantic segmentation performance with that of TGCNN<cit.>. Our network combinations include ResNet<cit.> with UNet<cit.>, VGG<cit.> with UNet++<cit.>, and EfficientNet<cit.> with DeepLabV3<cit.>, each known for their robust performance in various segmentation tasks. As illustrated in Fig. <ref>, these networks maintain robust segmentation capabilities across both simple and complex backgrounds in real-world environments. Chemistry3D also supports quantitative comparisons of network performance. Additionally, we further validated the Sim2Real ability of vision tasks oriented towards transparent objects by performing a object detection task. For this purpose, we selected the YOLO<cit.> algorithm. The performance of the network, trained within the simulation environment, was evaluated for target detection in both simulated and real-world environments. As illustrated in Fig. <ref>, the results are consistent with those observed in the semantic segmentation task, confirming that Chemistry3D effectively supports Sim2Real for transparent-object vision tasks. §.§ Embodied Intelligence To evaluate the development capabilities of Chemistry3D in embodied intelligence tasks, we initially designed a chemical experiment scene. The laboratory setup included a table equipped with containers of KMnO4 and FeCl2, as well as two empty beakers. Within the overall framework, we constructed agents for robotic control. These agents were responsible for acquiring environmental information, generating robotic operation tasks, initializing different motion controllers, and managing robotic operations through the Controller Manager. 
The agents acquired information about the experimental scene, enabling the robot to observe interactive objects and generate potential chemical reactions based on its chemical reaction knowledge base. Subsequently, as shown in Fig. <ref>, we utilized natural language input to direct the robot to complete the relevant chemical experiment tasks. §.§ Reinforcement Learning OmniIsaacGymEnvs, integrated within IsaacSim, facilitates complex reinforcement learning tasks in Chemistry3D. To demonstrate the potential for developing reinforcement learning tasks in Chemistry3D, we designated picking as the RL task in our experiment, as shown in Fig. <ref>. We utilized a reward function setup similar to the provided examples, and employed Proximal Policy Optimization (PPO)<cit.> as the reinforcement learning algorithm. In the reinforcement learning experiment, we configured the number of environments to 2048, the number of epochs to 3500, and the learning rate to 5 × 10^-4. In OmniIsaacGymEnvs, various hyperparameters (HP) related to simulation, environment, and training can be adjusted to optimize the overall effect of reinforcement learning training. Table <ref> presents a selection of these HP along with the default configurations employed in our task. This customization allows for a more tailored approach to reinforcement learning, potentially enhancing the training outcomes. This training configuration was consistently applied across three separate realizations. As illustrated in Fig. <ref>, we plotted the reward curve and the success rate curve during training, using the average of the three realizations as the central curve and the standard deviation among the three experiments to represent the curve width. The robustness of the reinforcement learning task outcomes is evident from these results. Fig. <ref> illustrates the training process, including the reward curve and success rate information for the grasping task.

Table: Simulation Environment and Training Parameters

  Environment                      Simulation                               Train
  Num_envs        2048             domain_randomization        False        Multi_gpu        True
  envSpacing      3.0              dt                          1/120.0      seed             42
  episodeLength   500              default_physics_material:                device           gpu
  RotationNoise   0.0                static_friction           1.0          learning_rate    5 × 10^-4
  PositionNoise   0.0                dynamic_friction          1.0          max_epochs       3500
                                     restitution               0.0          save_frequency   100
                                   add_ground_plane            True
                                   gravity_mag                 -9.81
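For orientation, the defaults collected in the table correspond to a configuration of roughly the following shape; this Python rendering is purely illustrative and does not reproduce the actual OmniIsaacGymEnvs YAML schema or key names.

```python
config = {
    "env": {
        "numEnvs": 2048,
        "envSpacing": 3.0,
        "episodeLength": 500,
        "rotationNoise": 0.0,
        "positionNoise": 0.0,
    },
    "sim": {
        "domain_randomization": False,
        "dt": 1.0 / 120.0,
        "default_physics_material": {
            "static_friction": 1.0,
            "dynamic_friction": 1.0,
            "restitution": 0.0,
        },
        "add_ground_plane": True,
        "gravity_mag": -9.81,
    },
    "train": {
        "multi_gpu": True,
        "seed": 42,
        "device": "gpu",
        "learning_rate": 5e-4,
        "max_epochs": 3500,
        "save_frequency": 100,
    },
}
```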
http://arxiv.org/abs/2406.08167v1
20240612125619
Optical Investigations of Coherence and Relaxation Dynamics of a Thulium-doped Yttrium Gallium Garnet Crystal at sub-Kelvin Temperatures for Optical Quantum Memory
[ "Antariksha Das", "Mohsen Falamarzi Askarani", "Jacob H. Davidson", "Neil Sinclair", "Joshua A. Slater", "Sara Marzban", "Daniel Oblak", "Charles W. Thiel", "Rufus L. Cone", "Wolfgang Tittel" ]
quant-ph
[ "quant-ph", "physics.app-ph", "physics.optics" ]
These authors contributed equally to this work. Corresponding Author: wolfgang.tittel@unige.ch QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2628 CJ Delft, The Netherlands ICFO - Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2628 CJ Delft, The Netherlands QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2628 CJ Delft, The Netherlands John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA Division of Physics, Mathematics, and Astronomy, and Alliance for Quantum Technologies, California Institute of Technology, Pasadena, California 91125, USA QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2628 CJ Delft, The Netherlands QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2628 CJ Delft, The Netherlands Institute for Quantum Science and Technology, and Department of Physics & Astronomy, University of Calgary, Calgary, Alberta, T2N 1N4, Canada Department of Physics, Montana State University, Bozeman, Montana 59717, USA Department of Physics, Montana State University, Bozeman, Montana 59717, USA QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2628 CJ Delft, The Netherlands Department of Applied Physics, University of Geneva, 1211 Geneva 4, Switzerland Constructor University, 28759 Bremen, Germany § ABSTRACT Rare-earth ion-doped crystals are of great interest for quantum memories, a central component in future quantum repeaters. To assess the promise of 1% Tm^3+-doped yttrium gallium garnet (Tm:YGG), we report measurements of optical coherence and energy-level lifetimes of its ^3H_6 ↔ ^3H_4 transition at a temperature of around 500 mK and various magnetic fields. Using spectral hole burning, we find hyperfine ground-level (Zeeman level) lifetimes of several minutes at magnetic fields of less than 1000 G. We also measure coherence time exceeding one millisecond using two-pulse photon echoes. Three-pulse photon echo and spectral hole burning measurements reveal that due to spectral diffusion, the effective coherence time reduces to a few μs over a timescale of around two hundred seconds. Finally, temporal and frequency-multiplexed storage of optical pulses using the atomic frequency comb protocol is demonstrated. Our results suggest Tm:YGG to be promising for multiplexed photonic quantum memory for quantum repeaters. Optical Investigations of Coherence and Relaxation Dynamics of a Thulium-doped Yttrium Gallium Garnet Crystal at sub-Kelvin Temperatures for Optical Quantum Memory Wolfgang Tittel June 17, 2024 =================================================================================================================================================================== § INTRODUCTION Cryogenically cooled rare-earth-ion-doped crystals (REIC) <cit.> are promising candidates for optical and quantum memory applications <cit.>, as demonstrated in a series of recent publications, including storage of quantum states over long times <cit.>, with high efficiencies <cit.>, in multiple modes <cit.>, and of entangled memories <cit.>. Quantum memories have been realized using various physical systems such as hot and cold atomic vapor <cit.>, single-trapped atoms, and diamond colour centers <cit.>, with each system having its own advantages. 
The interest in using rare-earth-doped solids <cit.> is due to their long coherence lifetimes <cit.> observed for both optical and spin transitions, and their large optical inhomogeneous broadening in conjunction with long population lifetimes of hyperfine transitions. For example, these properties are well suited to implement storage protocols based on atomic frequency combs (AFCs), which enable storage with a large time-bandwidth product and allow multiplexing. Quantifying the time-dependent spectroscopic properties of REICs at temperatures near absolute zero is both of fundamental interest and a prerequisite for their use as quantum memories. Thulium-doped crystals <cit.> are desirable candidates for spectrally-multiplexed quantum memories with fixed storage time <cit.>, an approach that relies on the two-level atomic frequency comb (AFC) protocol <cit.>. These crystals possess the simplest energy level structure that allows for spectral tailoring as needed to create AFCs: both the ground and excited states split under the application of a magnetic field into two long-lived Zeeman levels, which allows for persistent spectral hole burning <cit.>. Furthermore, the optical coherence time, which determines the maximum storage time, is generally long, e.g. 119 μs in Tm:YAG <cit.>. In this work, we investigate the spectroscopic properties of a 1% thulium-doped yttrium gallium garnet (Tm^3+:Y_3Ga_5O_12 or Tm:YGG) crystal at temperatures as low as 500 mK and various magnetic fields. We measure the optical coherence times T_2 and lifetimes of all energy levels that are relevant for the AFC protocol by means of two-pulse photon echo (2PPE) and time-resolved spectral hole burning (SHB) techniques. We find a sub-kHz wide homogeneous linewidth Γ_h (with Γ_h=1/π T_2) - the third narrowest reported for any optical transition after erbium and europium <cit.> - as well as hyperfine levels (Zeeman levels) within the electronic ground state with lifetimes of up to 300 s. Furthermore, we observe and investigate the magnetic field-induced broadening of spectral holes over time, which we explain qualitatively in the context of spectral diffusion. Pathways towards improved long-term coherence are suggested. The exceptional homogeneous linewidth together with a 56 GHz inhomogeneously broadened absorption profile <cit.> and long-lived atomic levels makes Tm:YGG a promising candidate for multimode quantum memories that enable spectrally multiplexed quantum repeaters. Our paper is organized as follows. Sections 2 and 3 discuss the properties of the Tm^3+:YGG crystal used in our experiment and describe the experimental setup. In Section 4, measurements of Zeeman-level lifetimes and optical coherence are presented. Then, in Section 5, we study spectral diffusion in the presence of magnetic fields over different timescales. Toward this end, the results of three-pulse photon echo experiments and spectral hole burning measurements are presented, and plausible underlying decoherence mechanisms are proposed and discussed. In Section 6, we demonstrate microsecond-long storage of 10 subsequent laser pulses and simultaneous storage of 3 different spectral modes using the AFC echo scheme. We end the paper with a conclusion and an outlook. § TM:YGG MATERIAL PROPERTIES For our measurements, we use a 25 mm × 5 mm × 5 mm (a × b × c) single crystal of 1% Tm^3+:Y_3Ga_5O_12 from Teledyne FLIR Scientific Materials (Bozeman, MT).
Y_3Ga_5O_12 (YGG) is a cubic crystal in which all naturally occurring isotopes of the host ions Y and Ga feature nuclear spin: I = 1/2 for ^89Y and I = 3/2 for ^69Ga and ^71Ga <cit.>. Yttrium has a natural abundance of 100% with a free-nucleus gyromagnetic ratio of 2.1 MHz/T. For gallium, the natural abundance of ^69Ga is 60% and a free-nucleus gyromagnetic ratio of 10.2 MHz/T, while the ^71Ga isotope has a natural abundance of 40% with a free-nucleus gyromagnetic ratios of 13 MHz/T <cit.>. Tm^3+ ions substitute Y^3+ without charge compensation in six crystallographically equivalent but orientationally inequivalent sites of D_2 point symmetry. The transition between the lowest crystal-field levels of the electronic ground state, ^3H_6, and the optically excited state, ^3H_4, occurs at a wavelength of 795.325 nm in vacuum. A simplified energy-level diagram of Tm^3+:YGG is shown in the inset of Fig. <ref>. Tm^3+ is a non-Kramers ion with a [Xe]4f^12 electron configuration, the energy levels are electronic singlets, and the angular momentum is quenched by the crystal field, resulting in no first-order hyperfine interaction. Due to the I= 1/2 nuclear spin of Tm^3+, the quadrupole interaction is zero and the second-order magnetic hyperfine interaction vanishes in the absence of a magnetic field. When an external magnetic field is applied, nuclear Zeeman and electronic Zeeman interactions combine with a second-order hyperfine interaction to produce an “enhanced" effective nuclear magnetism that splits the ground and excited states into pairs (m_I = ±1/2) of sublevels. We refer to these levels as Zeeman levels, but the splitting mechanism includes all interactions mentioned above. The Zeeman level structure is hidden within the ∼ 56 GHz inhomogeneous broadening of the ^3H_6↔^3H_4 optical transition. § EXPERIMENTAL DETAILS A schematic of our setup is presented in Fig. <ref>. For our measurements the crystal is mounted on the coldest stage of an adiabatic demagnetization refrigeration (ADR)-based cryostat, reaching a temperature below 600 mK. The light propagates along the 25 mm long ⟨110⟩ axis of the crystal, and its polarization is linear at the crystal input. The polarization evolves inside the crystal due to birefringence stemming from residual crystal strain. A magnetic field is applied parallel to the ⟨ 111 ⟩-direction using a superconducting solenoid. The field strength is detected using a Hall sensor mounted directly above the crystal. The optical pulse sequence for our measurements is obtained from a continuous-wave external-cavity Toptica DLPro tunable diode laser emitting at 795.32 nm (vacuum) to address the ^3H_6 → ^3H_4 transition of Tm^3+ ions in the crystal. A programmable pulseblaster controls the timing sequence. The light is gated and shaped with a single-pass free-space acoustic-optic modulator (AOM). The 1st order diffracted light beam from the AOM is sent through a series of optical components (waveplates and polarizing beam-splitter) and a phase modulator (PM) where the frequency of the light is shifted through a serrodying signal <cit.> created by an arbitrary waveform generator (AWG). Optical transmission through the crystal is detected using an amplified silicon photodetector and a digital oscilloscope. A small fraction of light from the diode laser is used to frequency lock the laser to a high-finesse cavity using the Pound-Drever-Hall method <cit.>. 
For improved stability, the frequency stabilization setup is kept on a separate optical breadboard that is mechanically isolated from the main optical setup. After passing through an electro-optic modulator (EOM) that is used to create frequency sidebands, the light is sent through a series of optical components for proper coupling into the temperature-stabilized high-finesse optical cavity through an optical fiber that is kept under vacuum. The reflected signal is measured using a photodiode, creating a feedback loop via a PID controller to suppress laser frequency fluctuations. This allows us to obtain a narrow laser linewidth of a few tens of kilohertz. § SPECTROSCOPIC RESULTS §.§ Spectral hole burning measurements To determine the lifetime of different energy levels and the Zeeman levels of Tm^3+:YGG, we employ a widely used spectroscopic technique known as time-resolved spectral hole burning (SHB). We set up a hole-burning sequence composed of the following steps. First, the ions in the crystal are optically pumped from the ground to the excited state using a 50 ms-long monochromatic laser pulse. After a waiting time that varies between a few tens of microseconds and a few hundred milliseconds, the hole is read out with an attenuated 8 ms long pulse that is frequency chirped over 2 MHz. By measuring the area of the spectral hole for varying waiting times, the lifetimes of the various levels that lie between the excited state and the ground state can be extracted. An example of a decay curve is shown in Fig. <ref>a. Fitting this spectral hole decay using a double-exponential function reveals a ^3H_4 excited state lifetime T_1 = 1.99± 0.71 ms and a ^3F_4 bottleneck level lifetime T_b = 55.14±4.67 ms, which agrees with previous results <cit.>. Next, to measure the ground state ^3H_6 Zeeman level lifetime, a magnetic field is applied along the ⟨111⟩ axis of the crystal with the sample at a temperature of T = 500 mK. Again, by measuring the persistent spectral hole decay as a function of waiting time for different magnetic fields, varying from a few hundred Gauss to a few thousand Gauss, we obtain the ground state Zeeman level lifetime as a function of different magnetic field strengths, which is shown in Fig. <ref>. Due to the D_2 point symmetry of the Tm^3+ ions in the YGG lattice, there are six subgroups of magnetically inequivalent Tm^3+ ions that have different local site orientations. Different subgroups of ions can be selectively addressed by choosing the orientation of the magnetic field as well as the propagation direction and the optical polarization of the light <cit.>. For our experiment, the magnetic field is oriented along the ⟨111⟩ direction and the light is propagating along the ⟨110⟩ direction. In this configuration, two of the six subgroups of magnetically inequivalent Tm^3+ ions interact with the light <cit.>. Thus, we observe two different sets of Zeeman level lifetimes <cit.> as indicated in Fig. <ref>b. and Fig. <ref>c. respectively. It's worth noting that in our experiment, we obtain a combined optical response from these two sites. However, it is possible to selectively choose either of the sites by appropriately aligning the polarization of the light <cit.>. 
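For concreteness, the double-exponential analysis used above to extract T_1 and T_b from the spectral-hole decay can be sketched as follows. The decay data generated below are synthetic and purely illustrative; they are not measured values from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Double-exponential model for the spectral-hole area decay: contributions from
# the 3H4 excited state (T1) and the 3F4 bottleneck level (Tb).
def hole_area(t, a1, T1, ab, Tb):
    return a1 * np.exp(-t / T1) + ab * np.exp(-t / Tb)

# Synthetic (hypothetical) decay data in seconds, for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(1e-4, 0.3, 60)                    # waiting times, s
truth = hole_area(t, 0.6, 2e-3, 0.4, 55e-3)       # T1 = 2 ms, Tb = 55 ms (illustrative)
area = truth + rng.normal(0, 0.01, t.size)        # add measurement noise

popt, _ = curve_fit(hole_area, t, area, p0=[0.5, 1e-3, 0.5, 50e-3])
_, T1, _, Tb = popt
print(f"T1 = {T1*1e3:.2f} ms (excited-state lifetime)")
print(f"Tb = {Tb*1e3:.2f} ms (bottleneck-level lifetime)")
```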
The persistence of Zeeman levels enables the creation of long-lived spectral holes, which, in turn, allow tailoring the absorption profile of the crystal and to prepare persistent atomic frequency combs (AFC) <cit.>, a well-known quantum memory protocol used to store single photons in an inhomogeneously broadened material. §.§ Optical coherence time measurements To study the optical coherence properties in Tm^3+:YGG, we employ two-pulse photon echo (2PPE) spectroscopy <cit.>. In a 2PPE sequence, two excitation pulses separated by a waiting time t_12 are sent into an inhomogeneously broadened ensemble of resonant Tm^3+ ions. They prepare a coherent superposition of the ground and excited electronic states. This gives rise to a coherent burst of radiation—a photon echo—at time t_12 after the second pulse. The variation of the echo intensity as a function of t_12 can be written as: I(t_12) = I_0 e^-4πΓ_ht_12 where I_0 is the initial echo intensity at t_12=0 and Γ_h is the homogeneous linewidth, which is inversely proportional to the coherence time T_2 : Γ_h = 1/π T_2 In order to extract the effective homogeneous linewidth as a function of the magnetic field, we vary the magnetic field from 100 G to 1000 G and measure two-pulse photon echo decays at a temperature of 500 mK and a wavelength of 795.32 nm. We fit all measured photon echo decays using the exponential function described in Eq. <ref>. The magnetic field dependence of the coherence time T_2 and of the homogeneous linewidth Γ_h is presented in Fig. <ref>b. They reflect the dominant magnetic field-dependent decoherence processes <cit.> which limit the performance of the crystal as a quantum memory. We find that the introduction of a few hundred Gauss magnetic fields improves the coherence time T_2 from a zero-field value of 552 μs (see Fig. <ref>a) to a maximum of around 1.1 ms, corresponding to a minimum homogeneous linewidth Γ_h of around 0.26 kHz at around 200 G <cit.>. For larger fields, the coherence time decreases to around 0.8 ms and remains approximately constant up to 1 Tesla (see Fig. <ref>b). § SPECTRAL DIFFUSION Spectral diffusion results in broadening of the homogeneous linewidth Γ_h as a function of time because each Tm^3+ ion experiences a slightly different dynamic crystalline environment. Spectral diffusion is expected due to the presence of gallium and yttrium in the YGG lattice, both of which have nuclear spins and may couple to Tm^3+ ions. The application of a magnetic field generally reduces the impact of spectral diffusion by inhibiting nuclear spin flips <cit.>. Two well-known physical mechanisms that can be accountable for spin flips are phonons (spin-lattice relaxation) <cit.>, and spin-spin relaxation through magnetic dipole-dipole interaction <cit.>, which causes pairs of anti-parallel spins to flip simultaneously (spin flip-flops). These correlated spin flips can randomize the local spin orientations <cit.>. For the range of magnetic fields and temperatures examined in our work, the nuclear spin splittings are comparable to homogeneous linewidths and nuclear quadrupole splitting, resulting in homo and heteronuclear transition resonances <cit.>. The magnitude of the resulting magnetic field fluctuations at the Tm sites is sufficient to cause up to a MHz of decoherence, depending on the timeframe of the fluctuations. This fluctuating magnetic field within the YGG crystal can be due to dynamic interactions between host nuclear spins (^69Ga, ^71Ga, Y^3+) and paramagnetic impurities. 
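Before turning to longer timescales, it is useful to keep in mind the relation Γ_h = 1/(π T_2) used above, which translates the measured coherence times into homogeneous linewidths. A minimal numerical check with the coherence times quoted above:

```python
import numpy as np

def gamma_h(T2):
    """Homogeneous linewidth (Hz) from coherence time T2 (s): Gamma_h = 1/(pi*T2)."""
    return 1.0 / (np.pi * T2)

for T2 in (552e-6, 1.1e-3):   # zero-field and ~200 G coherence times quoted above
    print(f"T2 = {T2*1e3:5.2f} ms  ->  Gamma_h = {gamma_h(T2):6.1f} Hz")
```

Both values come out below 1 kHz, i.e. of the order of the sub-kHz linewidths quoted above.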
In order to characterize the spectral diffusion beyond the 2PPE timescales in the presence of a magnetic field, we employ three-pulse photon echo (3PPE) spectroscopy and spectral hole burning. §.§ Three-pulse photon echo (3PPE) measurements The effective homogeneous linewidth and the time evolution of spectral diffusion-induced decoherence can be extracted from three-pulse photon echo measurements. In a 3PPE sequence, the first two excitation pulses, separated by a waiting time t_12, are sent into an inhomogeneously broadened ensemble of absorbers to create a frequency-dependent periodic modulation of the population in the ground and excited states. Then, after a time delay t_23, a third pulse is applied, resulting in the emission of an echo a time t_12 after the third pulse. To investigate spectral diffusion of Tm^3+ ions at timescales up to 100 ms, we performed 3PPE measurements at a temperature T = 500 mK and a magnetic field of a few hundred Gauss. For our measurements, the separation time t_12 between the first two pulses is held constant at 60 μs, and the echo intensity is measured as a function of time delay t_23 between the second and third pulse, with t_23 varying between 100 μs and 100 ms. The echo intensity is given by <cit.> I(t_23)=I_0 I_pop^2(t_23) e^-4 t_12πΓ_h(t_23) where I_0 is a scaling coefficient and I_pop captures the effect of the reduction of contrast of the atomic grating caused by population decay, which, consequently, reduces the echo intensity. For Tm^3+:YGG, I_pop(t_23) ≈ C_1 e^-t_23 / T_1+C_B e^-t_23 / T_B+C_Z e^-t_23 / T_Z, where C_1, C_B, C_Z are constants. T_1, T_B, and T_Z are the excited state lifetime (^3H_4), the bottleneck level lifetime (^3F_4), and the Zeeman level lifetime (^3H_6 (m_I = 1/2)), measured in Section 4 A. Γ_eff (t_23) is the time-dependent “effective" homogeneous linewidth. It captures all diffusion processes that influence the rare-earth spins caused by magnetic dipole-dipole interactions. Following <cit.>, the functional form of Γ_eff (t_23) can be written as Γ_eff (t_23) = Γ_0 +γ_TLS log(t_23/t_0) + 1/2Γ_SD (R_SD t_12 + 1 - e^-R_SD t_23) where Γ_0 is the homogeneous linewidth at the minimum measurement timescale t_0 (160 μs in our experiment), Γ_SD is the maximum broadening of the homogeneous linewidth (or the spectral diffusion linewidth), and R_SD describes the characteristic diffusion rate of linewidth broadening. The values of these parameters are determined by the details of the diffusion mechanisms <cit.>. We also consider spectral diffusion due to thermally activated low-energy dynamic structural fluctuations, often described as two-level systems (TLS) <cit.>, with γ_TLS being the TLS mode coupling coefficient. To characterize the effects of spectral diffusion, we fit the measured echo decays using Eq. <ref> and extract the effective homogeneous linewidth, which is plotted in Fig. <ref> for T = 500 mK and for two different magnetic fields as a function of the time-delay t_23. Fitting each curve to Eq. <ref> yields a homogeneous linewidth Γ_0 of a few hundred Hz and that the spectral diffusion saturates at a maximum value of a few kHz. Based on the magnitude of the spectral diffusion parameters, Γ_SD and R_SD (Γ_SD = 5.82±0.52 kHz, R_SD = 0.20±0.01 kHz at 200 G and Γ_SD = 4.84±0.72 kHz, R_SD = 0.22±0.04 kHz at 500 G), it is likely that the dominant source of spectral diffusion stems from nuclear spin flips of neighboring gallium in the host lattice. 
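As a numerical illustration of Eq. <ref>, the sketch below evaluates Γ_eff(t_23) with the fitted 200 G parameters quoted above. Since Γ_0 and γ_TLS are not quoted explicitly in the text ("a few hundred Hz" and a weak TLS contribution), the values used here are assumptions.

```python
import numpy as np

# Effective homogeneous linewidth (Hz) from the 3PPE spectral-diffusion model above.
def gamma_eff(t23, t12=60e-6, t0=160e-6,
              Gamma_0=300.0,      # "a few hundred Hz" (assumed value)
              gamma_TLS=0.0,      # TLS contribution taken as negligible (assumption)
              Gamma_SD=5.82e3,    # spectral-diffusion linewidth at 200 G (fit value above)
              R_SD=0.20e3):       # characteristic diffusion rate at 200 G (fit value above)
    return (Gamma_0
            + gamma_TLS * np.log(t23 / t0)
            + 0.5 * Gamma_SD * (R_SD * t12 + 1.0 - np.exp(-R_SD * t23)))

for t23 in (1e-4, 1e-3, 1e-2, 1e-1):   # s
    print(f"t23 = {t23*1e3:6.1f} ms  ->  Gamma_eff ~ {gamma_eff(t23)/1e3:4.2f} kHz")
```

With these inputs the linewidth saturates at a few kHz for t_23 of order 100 ms, consistent with the behavior described above.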
We also find that the contribution of low-energy TLS modes to linewidth broadening is not very pronounced at this temperature and that the spectral diffusion parameters in the assessed temporal regime show little magnetic field dependence. Additional measurements across a broad range of parameters are necessary to conclusively confirm the existence of broadening attributed to TLS. §.§ Long-term spectral diffusion: Magnetic-field-dependent spectral hole broadening In the presence of a magnetic field, spectral diffusion is known to occur over timescales on the order of Zeeman-level lifetimes, which, in our crystal, are many seconds. Since many optical signal processing applications rely on similarly long-lived spectral features <cit.>, we investigate spectral diffusion on such timescales. However, since the characteristic timescales for three-pulse photon echo measurements are much smaller, we instead observe the broadening of persistent spectral holes, created by driving the Zeeman level population out of equilibrium. We operate at a temperature of 600 mK and magnetic fields between 1 and 6 kG. The burning duration and the waiting time are adjusted to maximize the initial hole depth. Once the hole burning is completed, the spectral hole is read out by a weak probe pulse. Then, we determine the spectral hole widths for a series of different magnetic fields. Finally, we observe how the spectral hole broadens as a function of time, see Fig. <ref>a. Assuming no power broadening, the spectral hole width is proportional to the homogeneous linewidth (Γ_spectral-hole = 2 Γ_h), and the observed behavior can be described by <cit.> Γ_spectral-hole (t) ∝ Γ_0 + 1/2 Γ_SD(1 - e^-R_SD t) Upon fitting the spectral hole broadening depicted in Fig. <ref>a with Eq. <ref>, we find that the spectral diffusion linewidth Γ_SD increases from approximately 36 kHz to about 105 kHz as the applied magnetic field strength is increased from 1 kG to 6 kG, as shown in Fig. <ref>b. Furthermore, it is worth noting that, regardless of the magnetic field strength, the broadening proceeds at a nearly constant rate of approximately 0.01 Hz, as highlighted in Fig. <ref>c. It is important to emphasize that, within the time scales, magnetic fields, and temperature under consideration, the relaxation rate R_SD is significantly slower than the population decay captured by I_pop in Eq. <ref> and is therefore not observable using 3PPE measurements. The relationship between the spectral diffusion linewidth Γ_SD and the applied magnetic field can be expressed as follows <cit.>: Γ_SD (B,T) = Γ_max(B) sech^2(gμ_B B/2k_B T) where g represents the g-factor associated with the spins present in the crystal lattice, μ_B denotes the Bohr magneton, B is the external magnetic field, k_B is the Boltzmann constant, T corresponds to the temperature under consideration, and Γ_max(B) represents the maximum frequency broadening of the optical transition resulting from magnetic dipole-dipole interactions, which also depends on the magnetic field B. In Tm^3+:YGG, the g-factors of thulium (0.01), gallium (7.14 × 10^-4 and 9.2× 10^-4 for the two isotopes), and yttrium (1.43× 10^-5) spins are notably small <cit.>. We must consider the magnitude of the sech^2 term compared to that of Γ_max(B) in Eq. <ref>. Moreover, for the range of temperatures and magnetic fields examined in our work, the ratio of the thermal distribution of atomic population in Zeeman levels remains relatively constant. Consequently, the hyperbolic secant term on the R.H.S. of Eq.
<ref> remains effectively constant. As a result, the spectral diffusion predominantly depends on the magnitude of the magnetic dipole moment induced in the Tm^3+ ions by the external magnetic field, which is directly proportional to Γ_max(B). Below, we outline potential underlying physical processes that could lead to variations of the magnetic field at the Tm^3+ ion locations, and thus, contribute to the spectral diffusion: * Ion-ion coupling: Spectral diffusion due to Tm^3+-Tm^3+ interaction can play a role in the observed broadening of the linewidth. The low Tm^3+ ion concentration of 1% in this YGG crystal signifies that the average distance between thulium ions is unlikely to cause broadening of the observed magnitude but an excitation-induced interaction between Tm^3+ ions <cit.> or strain-mediated Tm^3+-Tm^3+ interaction <cit.> may cause this to happen. This is similar to the situation observed in Tm:YAG, which has the same site symmetry and also has negligible magnetic or electric dipole-dipole interactions, but still exhibits very strong instantaneous spectral diffusion that is comparable in magnitude to the magnetic interaction strength. * Phonon-induced spin-flips: As the magnetic field increases, the phonon density of states increases quadratically, leading to an increase in the probability of interaction between the phononic modes of the crystalline lattice and thulium ions <cit.>. * Quadratic Zeeman effect: The quadratic Zeeman effect arises from mixing of the crystal-field levels due to the applied magnetic field. This causes shifts in the energy levels in both the ground and excited states, leading to a shift of the optical transition frequency that is proportional to B^2 and strongly orientation and site dependent. The observed spectral broadening can also originate from the interaction between thulium ions and neighboring host ions. The neighboring nuclear spins surrounding the Tm^3+ ion are yttrium Y^3+ and two isotopes of gallium, ^69Ga and ^71Ga. Since Y^3+ nuclear spins are weakly magnetic, it is plausible that there exists a highly concentrated spin bath comprised of gallium nuclear spins, each possessing a moderate nuclear magnetic moment, which can cause variations in the magnetic field experienced by the thulium ions through the quadratic Zeeman effect <cit.>. We have extensively investigated quadratic Zeeman spectral diffusion in Tm:YGG in our prior work <cit.>. * Fluctuating external magnetic field: An additional contribution may be a noisy current supply for the superconducting solenoid magnet that would cause magnetic field variations of Γ_SD or stray radio-frequency fields that could drive nuclear spin flips. § MULTIPLEXED OPTICAL STORAGE The spectroscopic investigations show that Tm:YGG is suitable for quantum memory applications. To verify this conjecture we demonstrate spectrally and temporally multimode atomic frequency comb (AFC)-based storage of laser pulses. The two-level atomic frequency comb (AFC) protocol <cit.> is well known and well established for storage of quantum light as well as classical laser pulses in REI-doped solids <cit.>. It relies on shaping the inhomogeneously broadened absorption profile of an ensemble of absorbers into a series of equally spaced teeth by means of persistent spectral hole burning with peak separation Δ. 
The absorption of a photon by such a comb yields a collective atomic excitation that can be described by a so-called Dicke state: |Ψ⟩=1/√(N)∑_j=1^N C_j e^-i 2 πδ_j t e^i k z_j|g_1, …, e_j, …, g_N⟩ where |e_j⟩ represents the jth atom being in the excited state, δ_j is the detuning of its atomic transition frequency with respect to the central frequency of the absorbed photon, z_j is its position measured along the propagation direction of the light, k the wavevector, and C_j its absorption probability amplitude. The collective excitation described by the Dicke state subsequently dephases but, due to the periodicity of the comb, rephasing of the atomic excitations will occur at time 1/Δ. This results in a photon-echo-like re-emission of the stored light. An important figure of merit of an AFC is its finesse, which is defined as the ratio between the peak separation and the FWHM of the absorption peaks, F = Δ/γ. Unlike other storage protocols, the multimode capacity of the AFC memory protocol does not depend on the optical depth of the storage medium, making this protocol a natural choice for multiplexed quantum memories <cit.>. Its large temporal multimode capacity can readily be combined with multiplexing in frequency and space. In the time domain, the number of temporal modes that can be stored using the AFC scheme is proportional to the number of comb teeth, which depends on the total bandwidth and the periodicity Δ. In the frequency domain, the number of spectral modes depends on the bandwidth per spectral channel and the total absorption bandwidth of the rare-earth crystal. In the following subsection, we demonstrate a multimode AFC memory in both the temporal and spectral domains. While we perform the experiments using classical optical pulses, it is known that the AFC protocol also allows storing quantum states of light such as qubits with high fidelity <cit.>. §.§ Simultaneous storage of subsequent temporal modes To demonstrate multimode AFC storage in time, we prepare a 10 MHz broad AFC with finesse F = 2 by optically pumping Tm^3+ ions to long-lived Zeeman levels. To avoid spontaneous emission noise due to population decay from the ^3H_4 excited-state level, we wait 10 ms (five times the 2 ms lifetime of this level). Then, we create a sequence of 10 subsequent pulses of 200 ns duration and 100 ns spacing and send them into the memory. An attenuated replica—a train of AFC echoes—appears after 1/Δ = 5 μs storage time as described in Fig. <ref>(a). Due to the limited optical depth of the crystal, not all the input light is absorbed. Thus, in AFC storage experiments, the AFC echo is generally accompanied by non-absorbed (transmitted) light. The AFC efficiency, η, is defined as the ratio between the intensity of the AFC echo and the input pulse – in our case around 1% <cit.>. This value agrees with the theoretical storage efficiency η_theo=0.98±0.11 %, estimated from the Gaussian-shaped AFC peaks using η_theo = e^-d̃d̃^2e^-7/F^2e^-d_0 <cit.>. Here, d̃ = d_1/F, d_1 is the peak absorption depth, d_0 is the background absorption depth, and F is again the finesse of the AFC. An illustration of a 1 MHz-wide AFC with a finesse of 2, tailored for a 5 μs storage time, is presented in Fig. <ref>(c), along with the relevant absorption parameters d_0 and d_1. 
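To make the efficiency estimate above concrete, the sketch below evaluates η_theo = d̃^2 e^-d̃ e^-7/F^2 e^-d_0 with d̃ = d_1/F and F = 2. The optical depths d_1 and d_0 used here are illustrative placeholders, not the measured values of our crystal.

```python
import numpy as np

def afc_efficiency(d1, d0, F):
    """Theoretical AFC echo efficiency for Gaussian teeth (formula quoted above)."""
    d_tilde = d1 / F
    return d_tilde**2 * np.exp(-d_tilde) * np.exp(-7.0 / F**2) * np.exp(-d0)

# d1 and d0 below are hypothetical values chosen for illustration only.
print(f"eta_theo ~ {100 * afc_efficiency(d1=0.55, d0=0.0, F=2):.2f} %")
```

With these placeholder depths the formula returns an efficiency of order 1%, comparable in magnitude to the value reported above.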
The reduced efficiency in our experiment is due to imperfect optical pumping, caused by technical issues such as the finite laser linewidth and vibrations of the cryostat (which can be especially significant for AFCs with μs-long storage times), as well as to the small optical depth. Note that the latter can be overcome using an impedance-matched cavity <cit.>. §.§ Simultaneous storage of different spectral modes In order to create spectrally multiplexed AFCs, we utilize two phase modulators (PMs) in series, where one of the PMs generates sidebands spaced by the desired frequency interval between neighboring AFCs while the other PM creates AFCs within each of these frequency bands. Both operations are performed using serrodyne optical phase modulation <cit.>. In this way, we program and create simultaneously three 1 MHz-wide AFCs with 4, 6, and 8 μs of storage time, centered at 0, 50, and 100 MHz frequency detuning within the inhomogeneous absorption linewidth of the thulium ions. As shown in Fig. <ref>(b), each AFC receives an optical pulse whose central frequency is matched to that of the AFC and re-emits the respective echo at the pre-programmed storage time (different colors in Fig. <ref>(b) represent the three different frequency detunings). The spectral read-out of the individual AFC re-emissions is implemented using a filter cavity at the output of the crystal and by varying the resonant frequency of the cavity to spectrally match each frequency mode. § DISCUSSION AND CONCLUSION In summary, we investigate the spectroscopic properties of a 1% thulium-doped yttrium gallium garnet (Tm:YGG) crystal at temperatures as low as 500 mK. Our measurements reveal millisecond-long optical coherence times and Zeeman level lifetimes extending into the hundreds of seconds. We investigate and discuss plausible reasons for spectral diffusion on short and long timescales. We demonstrate a multimode optical memory both in the temporal and spectral domains with microsecond-long storage time. With an optical T_2 of 1.1 ms, Tm:YGG features the third-longest optical coherence time reported, surpassed only by Er:YSO and Eu:YSO. However, Er:YSO suffers from the strong paramagnetism of Er^3+ ions, therefore requiring a high magnetic field of a few Tesla to show good properties <cit.>. On the other hand, Eu:YSO suffers from a more complicated hyperfine structure, the presence of two natural isotopes with similar abundance, and a technologically inconvenient transition wavelength <cit.>. For quantum repeaters, a long T_2 is needed regardless of the degree of freedom used for multiplexing. This is because an extended optical T_2 time enables longer storage of photonic quantum states in optically excited coherence, resulting in longer elementary quantum repeater link lengths, which, in turn, reduces the number of Bell state measurements required to connect such links <cit.>. In addition, a long optical storage time reduces the deadtime of a quantum memory in which quantum information is stored in terms of spin coherence <cit.>. This results in an increased throughput of a repeater-based quantum communication link <cit.>. The primary advantage of the Tm^3+-doped system for quantum memory is the operating wavelength in the near-infrared, which is easily accessible with laser diodes. Also, Tm^3+ is the only rare earth that has a nuclear spin of I = 1/2 with 100% natural abundance, giving one of the simplest energy level structures necessary for quantum memory implementation.
In particular, the trivalent Tm^3+-ion has an even number of electrons, it is a non-Kramers ion with no first-order electronic magnetism while still providing a suitable hyperfine structure with a GHz-level energy-level splitting that offers long-lived, optically addressable states with lower sensitivity to the magnetically induced decoherence. Merely a few hundred Gauss of magnetic field is required to achieve a long optical coherence time, rendering it a suitable candidate for long-lived quantum memories <cit.>. This stands in contrast to Kramers ions, such as erbium, where a high magnetic field around a few Tesla is necessary to achieve a long optical coherence time and to reduce spectral diffusion <cit.>. Tm:YGG exhibits reduced spectral diffusion and instantaneous spectral diffusion compared to other known Tm-doped materials such as Tm:YAG and Tm:LiNbO_3. A detailed investigation <cit.> reveals a sensitivity to excitation-induced decoherence that is lower by more than two orders of magnitude compared to Tm:YAG and Tm:LiNbO_3. It shows that the Tm:YGG system features the longest optical coherence lifetimes and the lowest levels of excitation-induced decoherence observed for any known thulium-doped material. Our results confirm that Tm:YGG is a promising material candidate for multimode, long-lived, and AFC-based quantum memories. However, more detailed spectroscopic measurements are required to understand the material’s full potential, such as if the coherence properties can be further improved by using a different magnetic field orientation <cit.>, by growing non-birefringent YGG crystals, and by optimizing the material composition for specific quantum memory implementations. For example, it is possible to increase the inhomogeneous broadening by co-doping selected impurities or introducing static crystal strain to further increase the spectral multiplexing capacity of the memory <cit.>. While high-quality rare-earth gallium garnet crystals can be readily grown for long-established technologically important cases, such as Gd_3Ga_5O_12 (GGG) used extensively as a substrate in the semiconductor industry and Tb_3Ga_5O_12 (TGG) used in most bulk commercial optical isolators <cit.>, much less development work has been carried out for Y_3Ga_5O_12 (YGG). Discussing these efforts extends beyond the scope of this manuscript. Polarization rotation resulting from birefringence within the crystal can negatively impact the polarized interaction of light with different Tm sub-sites <cit.>. This limits the ability to selectively interact only with chosen subsets of Tm ions, each of which has a different Rabi frequency. Furthermore, since the applied external magnetic field has a different projection on each of the sub-sites, the nuclear hyperfine structure and dynamics of each sub-site will also be significantly different. As a result, for a general orientation, only some of the Tm sub-sites will contribute to the long-lived spectral hole burning, depending on both the orientation of the magnetic field and the optical polarization. Consequently, there are only a few specific combinations of orientations of the magnetic field and polarization relative to the crystal axes that maximize the interaction, resulting in improved optical depth and memory efficiency. Additionally, the strain-induced birefringence is also indicative of the presence of defects in the grown materials. 
Hence, growing improved low-strain crystals without the observed birefringence can be expected to result in further improved optical coherence time as well as nuclear spin-state lifetimes. In parallel, optical pumping strategies must be optimized <cit.>. Furthermore, technical developments such as sub-kHz laser linewidth stabilization; isolation of the crystal against cryostat vibrations; and enhanced light-matter interaction using an impedance-matched cavity are needed for the creation of an efficient, highly multimode and long-lived optical quantum memory that enables long-distance quantum communication. § ACKNOWLEDGMENTS The authors thank G. C. Amaral, N. Alfasi, and T. Chakraborty for their experimental help. We acknowledge funding through the Netherlands Organization for Scientific Research, and the European Union's Horizon 2020 Research and Innovation Program under Grant Agreement No. 820445 and project name Quantum Internet Alliance (QIA). This material is based in part on research at Montana State University sponsored by the Air Force Research Laboratory under agreement number FA8750-20-1-1004. Current affiliations: The current affiliations of some of the authors are the following: Mohsen Falamarzi Askarani: Xanadu, Toronto, ON M5G 2C8, Canada; Jacob H. Davidson: National Institute of Standards & Technology, 325 Broadway, Boulder, CO 80305, USA; Sara Marzban: MESA+ Institute for Nanotechnology, University of Twente, 7500 AE Enschede, The Netherlands; Joshua Slater: Q*Bird b.v., Delftechpark 1, 2627 XJ, Delft.
http://arxiv.org/abs/2406.08066v1
20240612103353
Two-tone spectroscopy of high-frequency quantum circuits with a Josephson emitter
[ "A. Peugeot", "H. Riechert", "S. Annabi", "L. Balembois", "M. Villiers", "E. Flurin", "J. Griesmar", "E. Arrighi", "J. -D. Pillet", "L. Bretheau" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.supr-con", "physics.app-ph", "quant-ph" ]
These authors supervised this work equally. jean-damien.pillet@polytechnique.edu landry.bretheau@polytechnique.edu ^1 Laboratoire de Physique de la Matière condensée, CNRS, Ecole Polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France ^2 Quantronics group, Université Paris-Saclay, CEA, CNRS, SPEC, 91191 Gif-sur-Yvette Cedex, France ^3 Laboratoire de Physique de l'Ecole normale supérieure, Centre Automatique et Systèmes, Mines Paris, Inria, ENS-PSL, Université PSL, CNRS, Sorbonne Université, Paris, France § ABSTRACT We perform two-tone spectroscopy on quantum circuits, where high-frequency radiation is generated by a voltage-biased superconductor-normal-superconductor Josephson junction and detection is carried out by an ancillary microwave resonator. We implement this protocol on two different systems, a transmon qubit and a λ/4 resonator. We demonstrate that this two-tone Josephson spectroscopy operates well into the millimeter-wave band, reaching frequencies larger than 80 GHz, and is well-suited for probing highly coherent quantum systems. Two-tone spectroscopy of high-frequency quantum circuits with a Josephson emitter A. Peugeot^1, H. Riechert^1, S. Annabi^1, L. Balembois^2, M. Villiers^3, E. Flurin^2, J. Griesmar^1, E. Arrighi^1, J.-D. Pillet^1*, L. Bretheau^1 ===================================================================================================================================================== § INTRODUCTION Circuit Quantum Electrodynamics (cQED) is based on microwave measurements and control of superconducting circuits <cit.>. In practice however, the accessible frequencies are typically smaller than 30 GHz, due to the limited range of commercially available instruments and components and the difficulty of routing high-frequency signals in a cryostat. Increasing this upper limit could help to probe novel quantum systems in hybrid architectures <cit.> as well as elucidate decoherence mechanisms caused by the environment at high frequency <cit.>. It would also open new routes towards the development of high-frequency qubits that can operate at higher temperatures <cit.>. In order to take a step forward in this direction, one can exploit the AC Josephson effect <cit.> to generate high-frequency radiation directly at the cold-stage level. When biased at voltage V_J, a Josephson junction indeed radiates photons at frequency f_J = 2eV_J / h, thereby working as an on-chip tunable microwave source <cit.>. This phenomenon enables broad-band spectroscopy that extends from a few tens of MHz up to the THz regime. Its principle relies on power conservation and is described quantum mechanically by the theory of dynamical Coulomb blockade <cit.>. Each time a photon of energy h f_J is absorbed and dissipated in the junction's electromagnetic environment, a Cooper pair of energy 2eV_J inelastically flows across the junction. Consequently, when f_J matches a resonance of the target system, one can detect a current I_J = 2e Γ_J that is directly proportional to the photon emission rate Γ_J. Absorption Josephson spectroscopy has been employed for characterizing macroscopic systems, such as ensembles of cobalt atoms <cit.> or microwave resonators <cit.>, as well as elementary quantum objects, including Cooper pair boxes <cit.> and Andreev bound states <cit.>.
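To give a sense of scale for the relation I_J = 2e Γ_J, the following minimal sketch converts a few photon emission rates into the corresponding DC currents; the rates are illustrative values chosen only to span typical orders of magnitude.

```python
from scipy.constants import e  # elementary charge

def josephson_current(photon_rate):
    """DC current (A) when one Cooper pair tunnels per emitted photon: I_J = 2e*Gamma_J."""
    return 2 * e * photon_rate

# Illustrative emission rates (photons per second), for scale only.
for rate in (1e4, 1e7, 1e9):
    print(f"Gamma_J = {rate:8.0e} /s  ->  I_J = {josephson_current(rate):.2e} A")
```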
This spectroscopy technique is, however, challenging, as it is sensitive to the whole electromagnetic environment of the junction and relies on low-frequency measurements of small DC currents. Even more crucially, this approach is not suited to probe highly coherent quantum systems like superconducting qubits, as its detection sensitivity decreases dramatically for long coherence times. Indeed, the dissipation rate of ∼ 1 photon per coherence time interval induces an upper bound for Γ_J. A typical T_1 ∼ 100 μs would thus amount to detecting a current of a few fA, four orders of magnitude below the resolution of the best power-absorption Josephson spectrometer <cit.>. In this work, we develop a new paradigm for Josephson spectroscopy, inspired by cQED, where the detection is realized using an ancillary system (see Fig. <ref>a). First, we utilize a superconductor-normal-superconductor (SNS) Josephson junction as an on-chip microwave emitter (red). We couple such an SNS emitter to a quantum object of interest with frequency f_q (green). This system is in turn dispersively coupled to a microwave cavity that resonates at frequency f_r and acts as a detector (blue). When driven at resonance f_J = f_q, the quantum object gets excited with n_q photons, which induces a frequency shift χ n_q of the cavity that can be detected using sensitive and fast radiofrequency techniques. Following this approach, our goal is to perform two-tone Josephson spectroscopy at high frequency. As a proof of concept, we implement this strategy to probe two coherent though very different quantum objects, a superconducting transmon qubit that resonates around 6 GHz, and electromagnetic modes of a λ / 4 resonator up to 83 GHz. § SNS JOSEPHSON JUNCTION AS A MICROWAVE EMITTER Our Josephson emitter consists of two superconducting electrodes connected by a normal metallic wire, forming an SNS Josephson junction (see micrograph in Fig. <ref>b and details of fabrication in appendix <ref>). We conduct measurements either on Nb-Au-Nb or Al-Hf-Al structures. In the latter case, although the Hf wire might actually become superconducting at the base temperature of our experiments <cit.>, it essentially behaves like a normal metal above its critical current at finite voltage. We initially characterize our devices through DC electrical measurements and extract their current-voltage characteristic I_J(V_J) (see Fig. <ref>b and appendix <ref>). Depending on the thickness, width, length and composition of the normal wire connecting the two superconductors, the critical current I_c can vary from a few hundred nA to a few tens of μA. When biased above I_c, the junction switches and acquires a finite normal resistance R_n, ranging from about 10 to a few hundred ohms depending on the devices. Due to Joule heating, retrapping occurs at a current significantly smaller than I_c. The thermalization is mostly ensured by the normal metal wings, which act as thermal baths and maintain the emitter's temperature below a few hundred mK. In most of our designs, the I_c R_n product is around 100 µV, which is consistent with results of other experiments with similar geometries <cit.>. This suggests that our SNS junctions possess a minigap 2E_g∼ 25 GHz and are likely to exhibit additional dissipation beyond this frequency <cit.>. The operation of SNS microwave sources relies on the AC Josephson effect, as they carry, when voltage-biased, an oscillating current at the Josephson frequency f_J=2 e V_J / h.
The voltage-to-frequency conversion ratio is thus given by the inverse of the magnetic flux quantum ϕ_0^-1 = 2e/h ≈ 0.5 GHz/μV. Consequently, Josephson junctions can emit microwave photons ranging from a few GHz up to a few hundred GHz. This upper bound is set by the value of the superconducting gap, which can reach ∼180 GHz, 1.4 THz and 1.8 THz for Al, Nb and NbN respectively, the three most commonly used materials in superconducting circuits. Beyond this limit, the amplitude of the microwave signal is expected to decrease slowly <cit.>. In the I_J(V_J) characteristic of Fig. <ref>b, the AC Josephson current is averaged out and thus invisible. To detect it, we measure the power spectral density emitted by the junction in the 4-8 GHz band by routing the signal with a bias tee to a cryogenic HEMT amplifier anchored at 4 K. In the power spectra presented in Fig. <ref>c, we observe an emission peak whose frequency coincides with ϕ_0^-1V_J, up to the 1% precision of our bias circuit. This voltage dependence enables us to unambiguously attribute the signal to the AC Josephson effect. The peak has a Gaussian shape with a standard deviation σ≈ 5 MHz, which we attribute to a 10 nV RMS noise on the bias voltage. Such a narrow linewidth, obtained thanks to significant filtering (details in appendix <ref>), is comparable to the best performances achieved so far <cit.>. Additionally, we observe a frequency dependence of the signal amplitude within the 4-8 GHz bandwidth of our detection chain, with a large decrease of the signal around 7.3 GHz, probably due to a parasitic resonance on the junction chip or in our setup. This illustrates how challenging it is to efficiently route the signal emitted by the junction in the desired direction and to determine it quantitatively. By integrating the signal from Fig. <ref> over the full linewidth and comparing it to the noise floor added by the HEMT amplifier, we find an emission power of ∼ -130 dBm. This microwave power corresponds to the emission of about 10^7 photons per second, i.e. of the order of magnitude of the read-out pulses used in circuit-QED experiments. Note that the power emitted by the SNS junction strongly depends on the environment it is embedded into and might change for each experiment. At larger voltage, we observe an additional broadband emission due to shot noise (see appendix <ref>), whose impact is negligible in the following experiments. These DC and AC measurements therefore show that SNS junctions function properly as microwave sources, at least in the 4-8 GHz range. Interestingly, SNS microwave sources offer several advantages compared to more standard tunnel Josephson junctions. First, their I_J(V_J) characteristic is mostly linear, with R_n remaining relatively small. Their impedance thus weakly changes with V_J and, as such, does not significantly affect other components to which it is coupled. This aspect is crucial for applications like two-tone spectroscopy, where a sensitive microwave cavity interacts with the junction (see appendix <ref>). Second, one can mention the relative ease of their fabrication process compared to tunnel junctions when using niobium as a superconductor <cit.>. Finally, the low backaction from the environment on their I_J(V_J) characteristics makes them simpler to use, as their resistance does not significantly vary depending on the objects to which they are coupled. In the following, we use these newly developed on-chip SNS microwave emitters to perform two-tone spectroscopy of quantum circuits.
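The voltage-to-frequency conversion and the emission linewidth expected from bias-voltage noise can be checked in a few lines. The bias voltage below is an illustrative example value, while the 10 nV RMS noise is the figure quoted above.

```python
from scipy.constants import e, h

K_J = 2 * e / h                                        # Josephson relation f_J = (2e/h) V_J, in Hz/V
print(f"conversion: {K_J * 1e-6 / 1e9:.3f} GHz/uV")    # ~0.484 GHz/uV

V_J = 12.4e-6        # example bias voltage (illustrative), V
sigma_V = 10e-9      # 10 nV RMS voltage noise (value quoted above), V
print(f"V_J = {V_J*1e6:.1f} uV  ->  f_J = {K_J * V_J / 1e9:.2f} GHz")
print(f"expected emission linewidth sigma_f ~ {K_J * sigma_V / 1e6:.1f} MHz")
```

With 10 nV of voltage noise, the predicted standard deviation is about 5 MHz, consistent with the Gaussian emission peak described above.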
§ TWO-TONE JOSEPHSON SPECTROSCOPY OF A TRANSMON QUBIT As a first proof of concept, we employ two-tone Josephson spectroscopy on a well-known quantum system: a transmon qubit. While its frequency f_q may be relatively low, it allows us to validate our technique by comparing it to measurements obtained using conventional microwave instrumentation. The transmon is capacitively coupled to a microwave resonator of frequency f_r for dispersive readout (see Fig. <ref>a). The qubit and cavity being off-resonantly coupled with a large enough detuning, they can be described by the Hamiltonian h f_r a^† a + h f_q |e⟩⟨ e| - h χ a^† a |e⟩⟨ e|, where a^† and a are the ladder operators of the cavity, |g⟩ and |e⟩ the ground and excited state of the qubit, and χ is the dispersive frequency shift. The cavity frequency is thus conditioned to the state of the qubit, which provides the basis for performing its two-tone spectroscopy. Details on the design and fabrication of this circuit, which was initially created for microwave photon counting are given in references <cit.>. We first characterize the transmon circuit using standard microwave instruments and thus access to its characteristic frequencies and coherence times (see appendix <ref>). We then turn to two-tone Josephson spectroscopy using our on-chip SNS microwave emitter (I_c≈400 nA, see Fig. <ref> in appendix <ref>). For this experiment, the emitter and the transmon are separated on two independent chips, each mounted in its own sample-holder, connected to each other by an SMA cable (see Fig. <ref>a). We emit Josephson radiation onto the transmon while probing the resonator via reflectometry close to its resonant frequency f_r =8.177 GHz. As we vary f_J, we observe a clear suppression of the reflection coefficient R at the qubit frequency f_q = 5.9055 GHz, demonstrating that it gets excited by the SNS microwave source (see Fig. <ref>b). The transition linewidth with σ = 5.4 MHz is much larger than the intrinsic qubit decay rate. This width is set by the voltage noise across the SNS emitter and is similar to the one measured in Fig. <ref>c. On top of that, the resonant dip displays an asymmetry, as a second transition can be excited at frequency f_J = f_q - χ, with χ = 11.8 MHz. The qubit frequency indeed depends on the number of photons n in the cavity due to the dispersive coupling. As a consequence, we observe a transition between states |g,1⟩ and |e,1 ⟩, owing to the finite thermal population p_1 ≈ 0.1 of the Fock state |n=1 ⟩. We therefore demonstrate that our on-chip SNS microwave emitter can be used to detect an elementary quantum system, a transmon qubit that resonates around 6 GHz, using two-tone Josephson spectroscopy. § TWO-TONE JOSEPHSON SPECTROSCOPY OF HIGH-FREQUENCY MODES To further test this novel detection scheme, we now probe the electromagnetic modes of a λ/4 resonator. The advantage of such a system is that it has a large number of regularly spaced modes, which is well-suited to explore two-tone Josephson spectroscopy at high frequency and over a large range. The resonator, shown in Fig. <ref>a-b, is made out of Niobium and resonates at its harmonic frequencies f_m, where m ∈ℕ is the mode index. It is inductively coupled to an RF SQUID, which consists of an SNS Josephson junction enclosed in a superconducting loop threaded by a magnetic flux Φ. This introduces some non-linearity into the circuit, which results in cross-Kerr interaction between the resonant modes. 
The system can thus be described by the Hamiltonian ∑_m h f_m a^† _m a_m + ∑_m,m' h χ_m,m' a^† _m a_m a^† _m' a_m', where χ_m,m' are the cross-Kerr frequencies. Therefore, when mode m gets excited with 1 photon, the fundamental mode's frequency f_0 is shifted by χ_0,m. This provides the basis to perform a two-tone Josephson spectroscopy of the high-frequency modes, while the fundamental mode of the resonator is used as a detector (see Fig. <ref>c). As before, we use a voltage-biased SNS Josephson junction as a microwave emitter. It is positioned on the same chip as the λ / 4 resonator, with which it is inductively coupled. We emit Josephson radiation while probing the fundamental mode of the resonator at frequency f_r ≈ f_0. Fig. <ref>d shows the phase shift δφ_r of the reflected signal as a function of the Josephson frequency f_J. The spectrum exhibits regularly spaced resonances at frequencies f_J≈ f_m=(2m+1)f_0, with f_0 = 4.95 GHz, which correspond to the high-frequency modes of the λ / 4 resonator (see also appendix <ref>). Other resonances appear in our signal, but are not clearly identified at this stage. We believe they might be geometric resonances of the circuit that would couple to the resonator. Additionally, a slowly varying background is noticeable. It is attributed to a flux change within the loop terminating the resonator, induced by the DC current carried by the Josephson emitter. The measured linewidth of the different transitions, which is roughly constant and about ∼ 500 MHz, is much larger than the intrinsic decay rate of the modes that we estimate to be ∼ 350 kHz based on the measured quality factor Q=14000 of the fundamental mode. It is again limited by the Josephson emission linewidth, which is here almost two orders of magnitude larger than previously observed. This larger linewidth is due to a larger voltage noise across the SNS emitter, owing to the different filtering setup needed for this experiment (see appendix <ref>). Within this architecture, we are able to detect the modes up to m=8, corresponding to 83 GHz. We don't detect modes m ≥ 9, which we attribute to dissipation in the emitter junction shunting the high-frequency Josephson radiation. In essence, the normal resistance R_N of the junction and its coupling inductance L ∼ 100 pH induce a cutoff frequency at R_N/L ∼ 15 GHz above which the signal decreases as 1/(f_J)^2, until it is too low to be detectable within a reasonable averaging time. Although it limits our sensitivity at high frequencies, this cutoff is less drastic than the LC cutoff of tunnel junction-based spectrometers, where the sensitivity is suppressed as 1/(f_J)^4 <cit.>. This cutoff frequency can be increased either by opting for an emitter with higher normal resistance or by reducing the junction's length to increase the size of its minigap. This problem is however unlikely to arise when probing a more nonlinear system, such as Andreev bound states, where the dispersive coupling χ with the resonator should be much larger <cit.>. To further confirm that the identified resonances correspond to the modes of the resonator, we adjust the flux Φ threading the RF SQUID loop. The primary consequence is the modulation of the fundamental mode's frequency f_0, with a measured magnitude of 1.5 MHz. Consequently, the higher harmonic modes f_m are also expected to modulate with Φ. 
However in practice, this modulation is not observable in the two-tone Josephson spectrum, as it is much smaller than our resolution, limited by the Josephson emission width. Nonetheless, changing the flux from Φ =0 to Φ = ϕ_0/2 dramatically alters the spectrum, as illustrated in Fig. <ref>e for resonance m=6 around 64 GHz. Instead of a peak, a resonant dip is observed at Φ = ϕ_0/2, demonstrating an inverted frequency shift. This change can be understood as a reversal of the Josephson inductance of the RF SQUID when varying the phase drop across the junction <cit.>. This demonstrates that the resonances detected using two-tone Josephson spectroscopy indeed correspond to the modes of the λ / 4 resonator. § CONCLUSION AND PERSPECTIVES These measurements demonstrate that SNS Josephson junctions behave as on-chip broadband microwave emitters, capable of performing two-tone spectroscopy on superconducting quantum circuits at unprecedented frequencies. With this innovative technique, we were able to probe both a transmon qubit resonating around 6 GHz and the electromagnetic modes of a λ / 4 resonator up to 83 GHz. Two-tone Josephson spectroscopy is here well adapted to detect these highly coherent systems, with lifetimes ∼ 10 μs and ∼ 0.5 μs respectively. All indications suggest that this technique could operate at higher frequencies, albeit necessitating some adjustments in both the emitter properties and its coupling with the system under study. More fundamentally, two-tone Josephson spectroscopy could be instrumental in detecting fermionic or bosonic excitations in mesoscopic systems, such as Andreev bound states in superconducting quantum dots <cit.> or topological Weyl band structures in multiterminal Josephson junctions <cit.>. It could also prove valuable for studying quasiparticle generation <cit.> and recombination in exotic 2D superconductors <cit.>. Another promising axis is to explore the quantum nature of Josephson radiation <cit.>. This could lead to the development of novel Josephson photonics devices, such as quantum-limited amplifiers <cit.> or single-photon detectors <cit.>, operating at high frequencies. More generally, the architectures that we developed could be used to probe the high-frequency environment of standard superconducting qubits and how it induces decoherence on them. Finally, an exciting research direction involves sharpening the emitted Josephson frequency, achievable either by reducing voltage noise or by employing clever injection locking techniques <cit.>. This advancement would enable time-domain measurement and control at unprecedented frequencies, thereby opening up a whole new realm for high-frequency cQED. We first want to emphasize the invaluable help of the late F. Portier and P. Jacques from the Nanoelectronics Group on low-noise electronics. We acknowledge valuable discussions with the Quantronics Group, in particular with C. Urbina for sharing the initial idea of two-tone Josephson spectroscopy. Special thanks to Ç. Girit, J.-L. Smirr, and Z. Leghtas on microwave measurements. Gratitude is extended to the SPEC of CEA-Saclay for their help on nanofabrication, and to D. Roux and R. Mohammedi from LPMC for their technical support. JDP acknowledges support of Agence Nationale de la Recherche through grant ANR-20-CE47-0003. LB acknowledges support of the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 947707). 
This work has been supported by the French ANR-22-PETQ-0003 grant under the France 2030 plan. § SAMPLE FABRICATION Resonator fabrication begins with sputtering 150 nm of Nb onto an oxidized high-resistivity silicon wafer, which is subsequently diced into 10× 11 mm^2 chips. For processing a single chip, a layer of S1813 resist is spun on top of the Nb. Following this, the negative of the resonator pattern is defined using a laser writer and developed in MF319. Subsequently, reactive ion etching is utilized to etch away the Nb through the resist mask. The resist is then removed by immersing the chip in acetone maintained at 60^∘C for 1 hour, followed by sonication in an ultrasonic bath. Our SNS junctions are fabricated via metallic deposition on the same chip using a suspended resist mask. For this, a bilayer of MAA EL6 resist is spun at 4000 rpm, followed by a layer of PMMA A6 at a slower speed of 3000 rpm. This results in a bilayer with a thin bottom layer (210 nm) and a very thick top layer (520 nm). The SNS junction mask is defined using e-beam lithography. The bilayer is then developed in a 1:3 MIBK/IPA solution for 1 minute, which provides a suspended mask. The chip is then loaded into a metal evaporator. Before metal deposition, we perform an ion-milling step using an Argon plasma to remove the layer of Nb oxide at the chip surface, which could hinder proper electrical contact between the SNS junction and the voltage leads. Initially, 5 nm of Ti is evaporated to serve as a sticking layer for subsequent metals. This is followed by a 15 nm Au deposition. The sample is then tilted by 22^∘, and 60 nm of Nb is evaporated. Nb is deposited almost everywhere over the gold layer but not at the constriction level, where it is blocked by the mask of PMMA due to the angle, thus preventing from shorting the SNS junction. The pressure inside the loadlock is maintained below a few 10^-7 mbar throughout the process to ensure a clean SN interface. Finally, the resist is lifted off after evaporation in a hot acetone bath. § MEASUREMENT SETUPS In Fig. <ref>, we represent the three implementations of the experiments presented in this manuscript. The setup in Fig. <ref>a corresponds to one used for the measurement of the Josephson emission, i.e., the experimental data presented in Fig. <ref>c. The power emitted by the junction is directed, via a bias tee, towards a power spectrum analyzer (PSA) through a ZX60-83LN12+ room-temperature amplifier as well as a 4-8 GHz HEMT amplifier from LNF anchored at 4K. The junction is protected from the noise emitted by the HEMT by means of three 4-8 GHz cryogenic isolators, a 12 GHz low-pass filter from K& L, and a 4-8 GHz bandpass filter from Micro-Tronics. This setup also allows for the measurement of the current-voltage characteristic of the Josephson emitter. For this, a voltage is applied with a Yokogawa GS200 source through a ÷ 10^4 voltage divider, and both voltage and current are measured using SP1004 voltage amplifiers from Basel Precision Instrument and a Yokogawa DL350 oscilloscope. The current is determined by measuring the voltage across a resistor in series with the Josephson junction, whose resistance value (97 Ω) was measured during the run. For all DC lines, we use Thermocoax coaxial cables, RC lowpass filters and π-filters, both at 10 mK, to protect the junctions from the noise. Detailed references of most components can be found in reference <cit.>. In Fig. 
<ref>b, we show the setup used for the two-tone Josephson spectroscopy of a transmon qubit (Fig. <ref>). The DC part of the setup is similar than before, but the RF part is changed significantly. This time the RF port of the bias tee directs the Josephson emission towards the readout port of a transmon qubit through a 4-8 GHz bandpass filter, a 12 GHz lowpass filter and a circulator. The same port is also used for cavity spectroscopy by sending a microwave signal from a ZNB20 vector network analyzer (VNA) through a -60 dB attenuator, a 12 GHz low-pass filter, and then through the -20 dB port of a directional coupler. The latter allows the VNA signal to be combined with the one emitted by the junction, enabling two-tone spectroscopy of the transmon qubit. Once the microwave signal from the VNA is reflected off the cavity, it is redirected back to the VNA, passing through three 4-8 GHz isolators, a 12 GHz low-pass filter, and a similar amplification chain as before. The final setup, depicted in Fig. <ref>c, corresponds to the two-tone Josephson spectroscopy experiment of the λ/4 resonator shown in Fig. <ref>. Although not fully represented, the DC part is similar to the previous two setups, with a few minor modifications. Additionally, this setup includes a flux line that allows a current to flow close to the superconducting loop terminating the resonator, enabling us to adjust the magnetic flux in the loop. The spectroscopy of the fundamental mode of the resonator is performed in a standard manner, with a VNA connected to an attenuated input line and an output line with a similar amplification chain as before. § CHARACTERIZATION OF SNS JUNCTIONS The current-voltage characteristic of the Al-Hf-Al junction, used for the experiments described in Fig. <ref> and <ref>, is shown in Fig. <ref>a. Unlike our Nb-Au-Nb junctions, it was not designed for an efficient evacuation of Joule power. Consequently, it exhibits a significantly lower retrapping current compared to its switching current. In order to characterize the Al-Hf-Al junction as a microwave source, we measured the emitted power in a narrow band centered at 5 GHz as a function of the bias voltage (see Fig. <ref>b). The narrow peak close to 10 μ V, indicated by a red arrow, corresponds to the AC Josephson effect. The continuously increasing background corresponds to incoherent noise, due to shot noise and thermal noise, and remains lower than the AC Josephson emission in the voltage bias range of interest. § CALIBRATION OF THE JOSEPHSON FREQUENCY In order to properly calibrate the frequency f_J of the signal emitted by the SNS junction, it is crucial to accurately know the voltage V_J across its terminals. As we apply a voltage V_b to the bias line of the junction, we simultaneously measure V_J using a low-noise amplifier. As depicted in Fig. <ref>, these two voltages are related by the equation V_J^2 = ϵ V_b^2 - V_0^2, which is consistent with the RCSJ model <cit.>. Here ϵ is a constant that depends on the attenuation of our line and V_0 is a constant extracted from the fit. This relation allows us to determine the actual value of V_J, and thus the Josephson frequency f_J = 2eV_J / h, without constantly monitoring it when performing two-tone Josephson spectroscopy. Since our amplifier gain has a systematic error of about 1%, we use in Section <ref> the 64.35 GHz mode from Fig. <ref>d as a reference to calibrate f_J even more precisely. 
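To make this calibration concrete, the following short sketch (our illustration, not the authors' analysis code; written in Python with standard physical constants) converts a bias voltage into a Josephson frequency through V_J^2 = ε V_b^2 - V_0^2 and f_J = 2eV_J/h, and lists the (2m+1)f_0 mode ladder used as a cross-check. The ε and V_0 values below are illustrative placeholders, not the fitted ones.

```python
# Sketch: bias voltage -> Josephson frequency, plus the lambda/4 mode ladder
# f_m = (2m + 1) f_0 with f_0 = 4.95 GHz (the m = 6 mode is the 64.35 GHz reference).
# eps and v0 are placeholder values, not the fitted calibration constants.
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
E = 1.602176634e-19  # elementary charge (C)

def josephson_frequency(v_bias, eps=0.98, v0=2.0e-6):
    """f_J = 2 e V_J / h, with V_J obtained from the RCSJ-like fit V_J^2 = eps*V_b^2 - V_0^2."""
    v_j = np.sqrt(np.clip(eps * v_bias**2 - v0**2, 0.0, None))
    return 2 * E * v_j / H   # in Hz

f0 = 4.95e9
modes = {m: (2 * m + 1) * f0 for m in range(9)}
print(modes[6] / 1e9)                       # 64.35 GHz
print(josephson_frequency(1.35e-4) / 1e9)   # f_J for a ~135 uV bias (illustrative)
```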
§ STANDARD RF MEASUREMENTS OF THE TRANSMON QUBIT The complete design of the circuit investigated in Section <ref>, which mostly consists in a transmon qubit capacitively coupled to a microwave resonator, is described in refs. <cit.>. In a preliminary run, we characterized this circuit-QED architecture using standard microwave instruments. Fig. <ref> shows the resonator and transmon qubit spectra. We could thus extract the characteristic frequencies f_r0≈8.1775 GHz, f_q0≈5.9136 GHz and χ_0≈10.7 MHz. The qubit frequency and dispersive shift are slightly different than the one we identified later using two-tone Josephson spectroscopy, which we attribute to the transmon's ageing between the two runs. Going further, we have probed the qubit quantum coherence by performing operations in the time domain. The strong coupling between qubit and cavity allows one to perform single-shot measurements of the qubit state. We could thus measure the probability P_|e⟩ of the qubit to be in the excited state as a function of time. Using Rabi oscillations, one can calibrate π and π/2-pulses. Thus, by applying a π-pulse that prepares the qubit in state |e⟩, we could measure after a variable delay an exponential decay trace with an energy relaxation time T_1 ≈ 39.9 μs (Fig. <ref>a). Finally, we have measured the dephasing for an equal qubit superposition (|g⟩+|e⟩)/√(2), by applying two π/2-pulses detuned at f_d-f_q ≈ 1 MHz and separated by a varying delay time. Fig. <ref>b shows the corresponding Ramsey oscillations, which indicate coherent precession of the qubit state in the Bloch sphere around the z-axis. From the decay, we could extract a dephasing time T_2^*≈ 16.2 μs. This value being much smaller than 2T_1, our qubit coherence is dominated by pure dephasing with a characteristic time T_ϕ≈ 20 μs. § COMPLEMENTARY TWO-TONE JOSEPHSON SPECTROSCOPY MEASUREMENTS In Fig. <ref>a, we present two-tone Josephson spectroscopy measurements performed in an additional sample similar to the one of Fig. <ref>. The mutual inductance between the resonator and the superconducting loop was however designed to be smaller, which reduces Kerr effect. A dip in the amplitude of the reflected signal indicates the position of the m=0 mode of the microwave resonator. As we sweep the bias voltage, the Josephson frequency happens to match the frequency of a higher mode. As the latter is populated by microwave photons, the m=0 resonance shifts by a few kHz, which allows the detection of high frequency modes. In Fig. <ref>b, we present measurements performed on another device, where the coupling between the Josephson emitter and the resonator was made much stronger. We detect many more resonances than in the measurements presented in the main text; however, they do not correspond to the bare resonances f_m=(2m+1)f_0 of the λ/4 resonator. We believe that, with the coupling between the emitter and the resonator being stronger, the modes of the resonator are perturbed by the emitter, and in this case, the spectroscopy is too invasive not to perturb the system that we probe. § TWO-TONE JOSEPHSON SPECTROSCOPY WITH A TUNNEL JOSEPHSON JUNCTION One of the reasons we opt for an SNS junction over an SIS tunnel junction as the Josephson emitter is due to the SNS junction's relatively constant impedance when a voltage is applied across its terminals, compared to the tunnel junction. In Fig. <ref>, we show measurements of a two-tone Josephson spectroscopy, similar to Fig. <ref>d, but conducted with an SIS junction as the emitter. 
We observed significant signal variations with numerous structures resembling resonances. However, they do not necessarily signify mode detections but are sometimes simply due to changes of the resonance frequency of the resonator fundamental mode (m=0) caused by impedance variations of the SIS junction. Consequently, conducting two-tone Josephson spectroscopy with this type of junction is challenging.
http://arxiv.org/abs/2406.08995v1
20240613105344
Non-perturbative determination of the ${\cal N} = 1$ SUSY Yang-Mills gluino condensate at large $N$
[ "Claudio Bonanno", "Pietro Butti", "Margarita García Pérez", "Antonio González-Arroyo", "Ken-Ichi Ishikawa", "Masanori Okawa" ]
hep-th
[ "hep-th", "hep-lat" ]
http://arxiv.org/abs/2406.07782v1
20240612002525
Wideband tunable cavity development for axion dark matter searches using a piezoelectric motor in combination with gears
[ "A. K. Yi", "T. Seong", "S. Lee", "S. Ahn", "B. I. Ivanov", "S. V. Uchaikin", "B. R. Ko", "Y. K. Semertzidis" ]
hep-ph
[ "hep-ph", "hep-ex", "physics.ins-det" ]
Wideband tunable cavity development for axion dark matter searches using a piezoelectric motor in combination with gears A. K. Yi, T. Seong, S. Lee, S. Ahn, B. I. Ivanov, S. V. Uchaikin, B. R. Ko, Y. K. Semertzidis ========================================================================================================= § INTRODUCTION The quantum chromodynamics (QCD) axion, or simply axion <cit.>, is a very natural solution for the strong CP problem in the Standard Model of particle physics (SM) <cit.> proposed by Peccei and Quinn <cit.>, and is predicted to be massive, stable, cold, and weakly interacting with the SM <cit.>. Such axion characteristics meet those of cold dark matter (CDM), which is believed to constitute about 85% of the matter in our Universe <cit.>. The QCD axion is one of the leading CDM candidates and in this context is referred to as axion dark matter. The axion haloscope search <cit.> is the only search method sensitive to axion dark matter to date, thanks to resonant conversion from axions to photons with help from a resonant cavity when the axion mass m_a matches the cavity mode frequency ν, m_a=hν/c^2. As the unknown axion mass and resonant conversion by the microwave cavity are the most important factors for this search, the figure of merit in axion haloscope searches is the scanning rate <cit.>, proportional to B^4V^2C_ mode^2Q_L_ mode/T^2_n, where B is the magnetic field inside the cavity volume, V is the cavity volume, C_ mode is the cavity mode-dependent form factor <cit.>, Q_L_ mode is the loaded quality factor of the cavity mode, and T_n is the system noise temperature. Among those parameters, the cavity determines V, C_ mode, and Q_L_ mode. The cavity geometry is typically cylindrical for axion haloscope searches using solenoid magnets, and the employed cavity modes have an electromagnetic field profile similar to that of the TM_010 mode of a cylinder to maximize the C_ mode. Frequency tuning is generally performed by moving the tuning rod such that its position moves in the radial direction <cit.>; rarely is it ever tuned vertically. The power dissipated by the driving of the tuning rod could increase the physical temperature of the cavity, and subsequently the T_n, implying that the cavity could affect T_n as well. The axion parameter space that axion haloscopes will have to search is enormously wide even if it is limited to the microwave region, in light of the best scanning rate to date <cit.>. Axion haloscopes can usually expand the parameter space by using several cavity systems with different dimensions of the cavity walls or tuning rods. However, such cavity production itself is by no means trivial. Therefore, it is highly desirable for the cavities to have tunable frequency ranges as wide as possible, as long as the experimental sensitivity is retained. For a cylindrical conductor cavity with a cylindrical conductor tuning rod, the cavity mode frequencies and their tuning range are generally proportional to the diameter of the tuning rod for a given cavity diameter. According to the cavity simulations <cit.> for the aforementioned frequency tuning <cit.>, the tunable range of a cavity system with a tuning rod dimension of about a tenth of the cavity barrel is at best 30% with respect to the central frequency of the tuning range. In this work, we developed a cavity system with a large, and therefore heavy, tuning rod to increase the frequency tuning range.
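As a rough numerical illustration of the correspondence m_a = hν/c^2 and of the scanning-rate scaling quoted above (this sketch uses standard constants and is not part of any experiment software), the cavity frequencies of roughly 1-1.5 GHz discussed later in this work map onto axion masses of a few μeV:

```python
# Illustrative only: frequency -> axion mass, and the scan-rate proportionality.
H = 6.62607015e-34    # Planck constant (J s)
E = 1.602176634e-19   # elementary charge (C), to express energies in eV

def axion_mass_ueV(freq_hz):
    """Axion mass (micro-eV/c^2) resonant with a cavity mode at freq_hz."""
    return H * freq_hz / E * 1e6

def relative_scan_rate(B, V, C_mode, Q_L, T_n):
    """Figure of merit ~ B^4 V^2 C^2 Q_L / T_n^2 (arbitrary units)."""
    return B**4 * V**2 * C_mode**2 * Q_L / T_n**2

print(axion_mass_ueV(0.99e9), axion_mass_ueV(1.51e9))   # roughly 4.1 and 6.2 ueV
```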
With a piezoelectric motor in combination with gears, we were able to drive a heavy tuning rod and realized a wideband tunable cavity whose frequency range is about 42% with respect to the central frequency of the tuning range without compromising the experimental sensitivity too much. Since the cavity mode frequencies increase accordingly with the dimension of the conductor tuning rod, the increase from 30 to 42% actually corresponds to increasing the absolute frequency range by a factor of about 1.8, which is nontrivial. Note that this is the first time that a piezoelectric motor and gear combination has been employed in order to drive stronger power in the extreme environment associated to axion dark matter experiments, i.e., cryogenic, high-magnetic-field, and high vacuum. § CAVITY SYSTEM The cavity system constitutes a cylinder and a cylindrical tuning rod, where the inner diameter of the cylinder is 262 mm and the outer diameter of the tuning rod is 68 mm. The heights of the cylinder and the tuning rod are 560 and 559 mm, respectively, and thus the gaps between the end caps of the cavity and the tuning rod are 0.5 mm. Figure <ref> shows the lateral and top views of the cavity system. In light of the cavity dimensions, it can be placed in the magnet bore of the CAPP-12TB experiment <cit.>. They are all OFHC (oxygen free high conductive) copper except for an alumina axle of the tuning rod. In order to avoid crank arms linking the axle and rod, our tuning rod has an off-axis axle developed in Ref. <cit.>. With this off-axis axle located 70.5 mm away from the cavity center, the tuning rod sweeps a part of the cavity volume resulting in a frequency range of about 0.99 to 1.19 GHz. In order to extend the frequency tuning range, the axle can be shifted 47 mm toward the cavity center from the aforementioned axle location, which extends it up to about 1.51 GHz. The total frequency tuning range is about 0.52 GHz that corresponds to about 42% with respect to the central frequency of the tuning range. The large tuning rod, which weighs about 5 kg as shown in Fig. <ref>, was driven by a piezoelectric motor manufactured by attocube <cit.> in combination with gears. The gear reduction ratio is about 69.4:1 resulting from the combination of two 200-tooth and two 24-tooth plain gears as shown in Fig. <ref>. The moment of inertia or rotational inertia of the tuning rod considered in this work is about 32 times larger than the tuning rod for the cavity used in our previous work <cit.>, where the tuning mechanism driver was solely a piezoelectric motor. Therefore, the gear reduction ratio of about 69.4 we chose in principle is a double of the necessary power and would provide enough marginal power to handle the rotational inertia by the tuning rod considered in this work unless other significant issues appear. The frequency range of 0.52 GHz could be realized by sweeping the tuning rod from the cavity center to the cavity wall with an axle located 47 mm away from the cavity center, but it is necessary to have additional crank arms to link the axle and the rod. Crank arms must be made of dielectric materials as opposed to conductors to avoid unwanted capacitive effects, but are likely too brittle to support our heavy tuning rod. Furthermore, it increases the rotational inertia of the tuning rod, hence an increase to the necessary driving power for the frequency tuning mechanism. As shown in Fig. 
<ref>(b), our tuning mechanism driver is modularized, and therefore it is trivial to put the driver on another tuning axle location once the tuning rod's position is switched. § HEATING FROM THE PIEZOELECTRIC MOTOR OPERATION The benefit of the frequency tuning mechanism employing the piezoelectric motor is that the attocube piezoelectric motor ANR240 can be operated in cryogenic, high-magnetic-field, and high vacuum environments <cit.>, which is the typical axion haloscope search environment. On the other hand, the piezoelectric motor operation generates unavoidable heat, mostly from the power dissipated by the actuator. This heat load is proportional to the number of piezoelectric motor steps n_ steps. With the gear combination developed in this work, the required n_ steps would increase in order to move through frequencies at a reasonable pace such that the experimental sensitivity is relatively preserved. We measured the heat load on a dilution fridge LD400 manufactured by Bluefors Oy <cit.>. Only a piezoelectric motor, without the cavity[Our Bluefors dilution fridge cannot accommodate either the cavity's dimensions or its weight.], was installed on the mixing chamber (mc) plate of the Bluefors dilution fridge, establishing a good thermal link between the two to mimic our axion dark matter search experimental conditions. A time delay of 60 s was applied after the piezoelectric motor operation, and afterwards we measured the temperature difference of the mixing chamber Δ T_ mc depending on n_ steps and the piezo driving frequency or signal repetition rate f_d for a given piezoelectric motor input voltage V_p of 50 V. The base temperature of the mixing chamber was about 40 mK. The blue solid line and red dashed lines in Fig. <ref> denote the measured Δ T_ mc using f_d of 500 Hz and 1000 Hz, respectively. Such a time delay after the frequency tuning is usually necessary to stabilize the system, and its duration depends on the target sensitivity of the experiment, e.g., 30 s for the CAPP-12TB experiment <cit.>, where n_ steps was about 10 for a frequency step of 10 kHz. We also applied a 1 s delay for every fifth piezoelectric motor step, e.g., a 1 s delay in every 100 s for an n_ steps of 500. Assuming a good thermal link between the mixing chamber and the cavity, this Δ T_ mc is approximately the increase of the cavity physical temperature, Δ T_ cavity. The Δ T_ cavity subsequently increases the T_n, thus it is crucial to prevent such a heat load in order to maintain the experimental sensitivity. As shown in Fig. <ref>, the temperature increase at the Bluefors dilution fridge is about 10 mK due to the piezoelectric motor operation with an n_ steps of about 200, and about 30 mK when n_ steps is about 500, with the aforementioned parameters. While the former may be acceptable for experiments depending on their experimental sensitivity, the latter is not for the CAPP-12TB sensitivity <cit.>. Due to an ongoing experiment, the DRS-1000 dilution fridge manufactured by Leiden Cryogenics BV <cit.> was unavailable for use, and we could not consider the case with the DRS-1000 dilution fridge, whose cooling power was measured to be 1 mW at 90 mK <cit.>. Using the relation Q̇_ mc=84ṅ_3 T^2_ mc and the Bluefors specifications <cit.>, we could expect three times stronger cooling power from the DRS-1000 fridge compared to that from the Bluefors fridge, where Q̇_ mc is the cooling power at the mixing chamber and ṅ_3 the ^3He molar circulation rate.
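The relation above can be inverted to estimate the ^3He circulation rate implied by a quoted cooling power; the short sketch below (our illustration, not the authors' analysis, taking the relation in SI units: W, mol/s, K) uses the 1 mW at 90 mK figure for the DRS-1000 and the roughly three-times-smaller cooling power expected for the LD400.

```python
# Q_mc = 84 * n3 * T_mc^2, with Q_mc in W, n3 in mol/s, T_mc in K.
def cooling_power(n3_mol_per_s, t_mc_kelvin):
    return 84.0 * n3_mol_per_s * t_mc_kelvin**2

# Circulation rate implied by 1 mW of cooling power at 90 mK (DRS-1000):
n3_drs1000 = 1e-3 / (84.0 * 0.090**2)
print(n3_drs1000)                            # ~1.5e-3 mol/s
# A fridge with roughly one third of that circulation rate would give ~0.33 mW at 90 mK:
print(cooling_power(n3_drs1000 / 3, 0.090))
```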
Hence, we could expect less temperature increase due to the piezoelectric operation with the same parameters in the DRS-1000 dilution fridge. Note that the temperature increase is practically independent of the driving frequency of a piezoelectric motor, according to our measurements shown in Fig. <ref> for the given experimental conditions. § VALIDATION OF THE CAVITY PERFORMANCE The cavity performance was first checked by measuring the cavity unloaded quality factor of the cavity mode as a function of the cavity mode frequency, Q_0(ν). Δν depending on V_p, f_d, and n_ steps were then also measured for further validation of the cavity performance. We performed these measurements on a 4-K cryocooler due to a few aforementioned unavoidable reasons, but do not expect significant drawbacks even in an 𝒪(10 mK) environment with the reasons described in the text below. §.§ Q_0(ν) measurements The Q_L(ν) measurements were performed over the full rotation of the tuning rod. In order to get Q_0 from Q_L, we also measured the reflection of the antenna coupled to the cavity. Following the procedure using the Smith chart data <cit.>, we obtained the coupling coefficient β. The Q_0 was then calculated from the relation Q_0=(1+β)Q_L. The results are shown in Fig. <ref> which were obtained from the full clockwise rotation of the tuning rod. The results from driving in the opposite direction were similar and left out. As shown in Fig. <ref>, our tunable cavity developed in this work shows not only good quality factors over the full range of the resonance frequency, but also the capability of the full rotation of the tuning rod by the tuning mechanism driver with the piezoelectric motor combination with gears. We noted that the transmission line calibration between the network analyzer (NA) and cavity using the NA calibration kits is crucial to measure the coupling coefficient β, and subsequently Q_0 as shown in Fig. <ref>, where the calibration was applied to the left, but not to the right. Since the cavity is approximately symmetric in the azimuthal coordinates, without any external coupling effects Q_0 should be also be symmetric in those coordinates as shown in the left plot of Fig. <ref>. Although the Q_0(ν) measurements were done in temperatures of about 4 K for the cavity, their values would not change even in an 𝒪(10 mK) environment due to the anomalous skin effect in metals <cit.>, i.e., copper for our cavity. §.§ Δν measurements These Δν measurements were also performed in temperatures of about 4 K, and therefore the stored power in the piezoelectric motor system is expected to be degraded in an 𝒪(10 mK) environment due to the temperature dependence of capacitance. First we measured the temperature dependent capacitance C(T) to see how much power degradation is expected when it operates in an 𝒪(10 mK) environment. The C(T) can be measured directly from the attocube systems' piezo motor controller ANC350 <cit.> and was measured down to a temperature of about 40 mK on the aforementioned Bluefors dilution fridge LD400. Figure <ref> shows the C(T) measurement resulting in C(40 mK)/C(4 K)∼0.93, where the factor 0.93 is also true for the piezoelectric motor's driving power for a given V_p. As mentioned in Sec. <ref>, the chosen gear reduction ratio is a double of the necessary power, so we do not expect significant effects from the piezo power degradation in an 𝒪(10 mK) environment. 
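A few of the numbers quoted in the preceding sections can be reproduced with one-line checks; the sketch below (illustrative only, with placeholder values where noted) collects the gear reduction, the fractional tuning range, and the Q_0 = (1+β)Q_L relation.

```python
# Quick numerical checks of quantities quoted above (illustrative only).

# Gear reduction from two 200-tooth and two 24-tooth plain gears in series
gear_ratio = (200 / 24) ** 2
print(round(gear_ratio, 1))                  # ~69.4

# Fractional tuning range with respect to the central frequency
f_lo, f_hi = 0.99, 1.51                      # GHz
print((f_hi - f_lo) / ((f_hi + f_lo) / 2))   # ~0.42, i.e. about 42%

# Unloaded quality factor from the loaded one and the coupling coefficient;
# the beta and Q_L values here are placeholders, not measurements.
def q_unloaded(q_loaded, beta):
    return (1 + beta) * q_loaded

print(q_unloaded(q_loaded=1.0e4, beta=1.0))  # critically coupled example
```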
For the Δν measurements, we limited the tuning range from 1.09 to 1.10 GHz to compare the cavity system here with the one used in the CAPP-12TB experiment <cit.>. The Δν of the CAPP-12TB experiment was about 10 kHz resulting from an n_ steps of about 10, V_p=50 V, and f_d=1000 Hz. The frequency tuning with the tuning mechanism parameters did not introduce significant heat load to the system, which was one of the key ingredients that resulted in the success of the CAPP-12TB experiment. Note that this is not a like-for-like comparison due to the different cavity regions swept by different tuning mechanisms, but to check if the experimental sensitivity in this work is comparable to that of CAPP-12TB reflecting every experimental aspect as much as possible. The left plot of Fig. <ref> shows the Δν(f_d) for an n_ steps of 500, where the blue rectangles and the black triangles were measured with V_p of 60 V and 50 V, respectively. The linear fit for the blue rectangles denotes Δν(f_d)=0.83+0.0047f_d and that for the black triangles Δν(f_d)=0.78+0.0032f_d, respectively. Therefore, the Δν is about 4 kHz from the tuning mechanism conditions, n_ steps=500, V_p=50 V, and f_d=1000 Hz, whose conditions could increase the T_ mc of about 30 mK already for a dilution fridge system according to our measurements shown in Fig. <ref>. In order to utilize the tuning mechanism developed here, we can employ a Δν of 2 kHz and approximately a fifth of Δ t_ 10 kHz per each tuning step if Δ t_ 10 kHz is that for the case with a Δν of 10 kHz. This relatively finer tuning step approach that has been employed by the Axion Dark Matter eXperiment <cit.> does not degrade the statistical sensitivity, but would elongate the scanning rate depending on the aforementioned time delay for the system stabilization, mainly cooling. In order to find the n_ steps for the finer tuning step of 2 kHz, we measured Δν(n_ steps) with V_p=50 V which is shown as the blue rectangles in the right plot of Fig. <ref>. From the linear parametrization of the rectangles Δν(n_ steps)=0.374+0.0071n_ steps, we found n_ steps=230 results in Δν∼2 kHz. Since we found no significant temperature increase with a higher f_d as shown in Fig. <ref>, we also measured them with a higher f_d of 1500 Hz which is shown as the black triangles in the same plot. From the linear fit of the triangles Δν(n_ steps)=0.213+0.01081n_ steps, we found n_ steps=170 results in Δν∼2 kHz. Taking into consideration the results shown in Fig. <ref>, the Δ T_ mc from the piezoelectric motor operation with n_ steps=170, V_p=50 V, and f_d=1500 Hz, would be less than 10 mK at an axion dark matter experiment employing the Bluefors LD400. One can naïvely expect further suppression of Δ T_ mc at the experiment employing the Leiden DRS-1000 thanks to its stronger cooling power, and the cooling time is also expected to be shorter. With three times stronger cooling power we can assume a time delay of 20 s, which changes the total running time to move 10 kHz to 600 s. This is 13% longer than the CAPP-12TB case <cit.>, but is still in a generally acceptable range. § SUMMARY We developed a wideband tunable cavity system for axion dark matter search experiments. Our cavity system employed a large and accordingly heavy tuning rod to increase the frequency tuning range. With a piezoelectric motor in combination with gears, we were able to drive a heavy tuning rod and realized a wideband tunable cavity whose frequency range is about 42% of the central frequency of the tuning range. 
By employing a relatively finer tuning step of 2 kHz, we expect insignificant experimental sensitivity drop-off even compared with the experimental sensitivity coming from the best scanning rate to date <cit.>. This first tuning mechanism driver with a piezoelectric motor in combination with gears can drive much bigger power than that with the piezoelectric motor only. Our approach therefore can be useful to axion dark matter search experiments requiring heavy tuning mechanisms with several conductor tuning rods toward higher frequencies or a large dielectric chunk of tuning rod toward lower frequencies, and also to drive heavy loads under extreme environments. This work was supported by the Institute for Basic Science (IBS) under Project Code No. IBS-R017-D1-2024-a00 and the Korea University Grant No...... B. R. Ko acknowledges G. Rybka suggested the tuning mechanism driver idea. 99 AXION1 S. Weinberg, Phys. Rev. Lett. 40 (1978) 223. AXION2 F. Wilczek, Phys. Rev. Lett. 40 (1978) 279. strongCP1 G. 't Hooft, Phys. Rev. Lett, 37 (1976) 8. strongCP2 G. 't Hooft, Phys. Rev. D 14 (1976) 3432; 18 (1978) 2199(E). strongCP3 J. H. Smith, E. M. Purcell, and N. F. Ramsey, Phys. Rev. 108 (1957) 120. strongCP4 W. B. Dress, P. D. Miller, J. M. Pendlebury, P. Perrin, and N. F. Ramsey, Phys. Rev. D 15 (1977) 9. strongCP5 I. S. Altarev et al., Nucl. Phys. A341 (1980) 269. PQ R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38 (1977) 1440. CDM_LOW1 J. Preskill, M. B. Wise, and F. Wilczek, Phys. Lett. 120B (1983) 127. CDM_LOW2 L. F. Abbott and P. Sikivie, Phys. Lett. 120B (1983) 133. CDM_LOW3 M. Dine and W. Fischler, Phys. Lett. 120B (1983) 137. PLANCK P. A. R. Ade et al. (Planck Collaboration), Astron. Astrophys. 594 (2016) A13. sikivie1 P. Sikivie, Phys. Rev. Lett. 51 (1983) 1415. sikivie2 P. Sikivie, Phys. Rev. D 32 (1985) 2988. scanrate L. Krauss, J. Moody, F. Wilczek, and D. E. Morris, Phys. Rev. Lett. 55 (1985) 1797. EMFF_BRKO B. R. Ko et al., Phys. Rev. D 94 (2016) 111702(R). CAVITY_RSI C. Hagmann, P. Sikivie, N. Sullivan, D. B. Tanner, and S.-I. Cho, Rev. Sci. Instrum. 61 (1990) 1076. 12TB_PRL Andrew K. Yi et al., Phys. Rev. Lett. 130 (2023) 071002. CST <www.cst.com>. COMSOL <www.comsol.com>. HAYSTAC_CAVITY Nicholas M. Rapidis, Samantha M. Lewis, and Karl A. van Bibber, Rev. Sci. Intrum. 90 (2019) 024706. MAX Saebyeok Ahn et al., <arXiv:2010.00169>. ATTOCUBE <www.attocube.com>. BLUEFORS <https://bluefors.com>. DRS-1000 <leidencryogenics.nl>. 8TB_PRL S. Lee, S. Ahn, J. Choi, B. R. Ko, and Y. K. Semertzidis, Phys. Rev. Lett. 124 (2020) 101802. 8TB_NIM J. Choi, S. Ahn, B. R. Ko, S. Lee, and Y. K. Semertzidis, Nucl. Instrum. Methods Phys. Res., Sect. A 1013 (2021) 165667. PIPPARD Pippard, A. B., Proc. Roy. Soc., A. 191 (1947) 385. ADMX1 S. J. Asztalos et al., Phys. Rev. D 64 (2001) 092003. ADMX2 N. Du et al. (ADMX Collaboration), Phys. Rev. Lett. 120 (2018) 151301. ADMX3 T. Braine et al. (ADMX Collaboration), Phys. Rev. Lett. 124 (2020) 101303. ADMX4 C. Bartram et al. (ADMX Collaboration), Phys. Rev. Lett. 127 (2021) 261803.
http://arxiv.org/abs/2406.07946v1
20240612071124
Elevator: Self-* and Persistent Hub Sampling Service in Unstructured Peer-to-Peer Networks
[ "Mohamed Amine Legheraba", "Maria Potop-Butucaru", "Sébastien Tixeuil" ]
cs.DC
[ "cs.DC" ]
Elevator: Hub Sampling Service Sorbonne Université, CNRS, LIP6, F-75005 Paris, France Institut Universitaire de France, Paris, France Elevator: Self-* and Persistent Hub Sampling Service in Unstructured Peer-to-Peer Networks Mohamed Amine LEGHERABA 1 Maria POTOP-BUTUCARU 1 Sébatien TIXEUIL 1,2 June 17, 2024 ========================================================================================== § ABSTRACT We present Elevator, a novel algorithm for hub sampling in peer-to-peer networks, enabling the construction of overlays with a topology between a random graph and a star network, and networks that have both hubs and are resilient to failures. Our approach emerges from principles of preferential attachment, forming hubs spontaneously, offering an innovative solution for decentralized networks that can benefit use cases requiring a network with both low diameter and resilience to failures. § INTRODUCTION In recent years, the rise of decentralized systems such as blockchain <cit.> and federated learning <cit.> has spurred considerable interest in peer-to-peer (P2P) communication protocols. While existing P2P protocols have demonstrated significant utility across various applications, emerging demands for enhanced performance, scalability, and robustness necessitate the development of innovative solutions. Peer-to-peer (P2P) protocols have undergone extensive research and development to facilitate efficient decentralized communication among networked devices. Foundational P2P protocols like Napster, Gnutella <cit.>, and BitTorrent paved the way for distributed file sharing and content distribution across the Internet. Typically, P2P overlay networks are categorized as either structured (e.g. CAN <cit.>, Chord <cit.>, or Kademlia <cit.>) or unstructured (e.g. Gnutella <cit.>). More comprehensive details about peer-to-peer overlays can be found in recent surveys <cit.>. Structured overlays come with a maintenance cost <cit.>, and are more susceptible to Byzantine attacks (that is, attacks performed by the peers themselves) <cit.> and churn <cit.> (that is, the unexpected departure and arrival process of the peers). Unstructured networks exhibit advantages in resilience to node failures and adaptability to shifting network conditions <cit.>, rendering them well-suited for dynamic and heterogeneous environments when compared to their structured counterparts. Their shortcomings are that the quality of services built on top of the network is difficult to assess. Peer sampling. Peers within an unstructured overlay maintain a dynamic set of neighbors, often discovered through mechanisms like peer sampling <cit.>, which enables nodes to gather and exchange information about other nodes in the network, and thus dictates the network topology. Existing peer sampling algorithms in the literature yield two types of topologies (random and power-law) that demonstrate favorable networking characteristics. Random graphs are built from gossip peer sampling algorithms, and are known to be resilient to churn <cit.>. Power-law (or scale-free) networks are built from algorithms that use the concept of preferential attachment and are known to have ultra-small diameter <cit.>, which helps scalability. 
However, when considering the specific use case of federated learning, certain limitations emerge: (i) Gossip learning, based on gossip peer sampling, exhibits a slower convergence rate compared to centralized federated learning methodologies <cit.>, and (ii) while power-law topologies theoretically offer improved convergence efficiency, prior research has predominantly focused on constructing networks adhering strictly to power-law distributions or implementing algorithms to restrict the proliferation of hubs <cit.> (that is, peers that are extremely well connected). Yet, for federated learning, the presence of hubs is advantageous, as these hubs facilitate rapid relay of machine learning models across the network, accelerating convergence rates. Nonetheless, conventional approaches relying on predefined hubs (e.g., super-peer-based topologies) are susceptible to attacks targeting static and well-defined hub nodes <cit.>. Hence, there exists a pressing need for a protocol that fosters the organic emergence of hubs within networks. The service outlined in this article is designed precisely for this purpose, allowing selected nodes to naturally ascend to hub status through a process we term "hub sampling". By enabling nodes to organically assume the role of hubs, our protocol aims to strike a balance between leveraging the efficiency of hub-based networks for applications like federated learning, while mitigating vulnerabilities associated with static hub designations. Our contribution. Our primary goal is to develop a protocol that autonomously promotes nodes to act as hubs within unstructured peer-to-peer networks. To achieve this goal, we hybridize two fundamental concepts: preferential attachment, and random attachment. By integrating these two concepts, our protocol promotes a balanced network structure, where hubs emerge organically based on connectivity patterns, and yet adapt to dynamic network changes. This approach not only fosters robustness against failures and disruptions, but also maintains a low network diameter, facilitating efficient communication and information propagation. The parameter h, representing the desired number of hubs, allows for flexibility and control over the network's topology, enabling tailored configurations to suit specific application requirements and network environments. The rationale behind this initiative is rooted in the benefits of having hub nodes, particularly in applications such as federated learning, where efficient information dissemination is crucial. The existence of hubs facilitates faster network-wide communication compared to overlay networks structured in a random graph topology. The structure of this article is organized as follows: Section <ref> presents the hub sampling service altogether with its properties, its programming interface (API), and its implementation, the Elevator algorithm. Section <ref> presents extensive simulations of Elevator, compared against three classical algorithms from the literature <cit.>. § HUB SAMPLING SERVICE The key desired properties we expect from our protocol are connectivity (the overlay remains connected), low-diameter (for efficient communication), convergence (properties are obtained in an autonomous manner), stability (structural overlay properties are maintained throughout execution), and robustness (resilience to churn and targeted attacks). They will serve as metrics during simulation experiments to ascertain the efficacy of our algorithm. 
§.§ Service API The API of the hub sampling service mirrors that of classical peer sampling service <cit.>, comprising two key methods: (i) init() that initializes the service on a given node, i.e., initializes the list of outgoing connections of a node (Indeed, we assume that a given node starts connected to a random subset of nodes in the network, the actual initialization procedure being implementation-dependent), and (ii) getPeer() that returns a random peer address from the node list of peers. The focus of this work is to present an implementation of the getPeer() method, Elevator, as a gossip-based algorithm, and to study the performance of its implementation. In addition to these two methods, we add a third method to the API called getHub() that returns a random hub. The getHub() method can be easily derived from getPeer() by filtering the output of getPeer() to only select the h nodes acting as hubs in the network. This method can be useful for applications that only need to contact a hub. §.§ Preliminaries In the context of our study, we consider an overlay network of interconnected nodes modeled as a directed graph. Communication within this network is bidirectional, corresponding to an underlying undirected graph that represents the physical network. Each node in this network possesses a unique address, akin to an IP address in the context of the Internet, serving as an abstract identifier of its identity. Nodes maintain a local list called cache, which contains addresses of other nodes, and represents their partial knowledge of the network's node set. The maximum size of this cache, denoted by parameter c, is uniform across all nodes. The cache is pivotal for peer sampling, as it serves as the basis for neighbor selection and information exchange. At the network's inception, nodes are initially connected to a random subset of nodes, forming what is known as a random k-out graph. Subsequently, new nodes joining the network also establish connections with a random subset of existing nodes, a process that populates their cache and integrates them into the network. Given the decentralized nature of the network, peer sampling algorithms are designed to operate asynchronously, but we can refer to the idea of cycles of the protocol, as it is more convenient for the evaluation of protocols during simulations. During each cycle, every node initiates one execution of the peer sampling protocol, potentially updating its cache based on interactions with neighboring nodes. By leveraging cycles, we can analyze the convergence, performance, and robustness of peer sampling protocols under varying conditions and scenarios within the decentralized network environment. §.§ Elevator core concepts To achieve both robustness and a low network diameter, we integrate two fundamental concepts: preferential attachment and random attachment, each serving distinct yet complementary role in shaping the network topology. Preferential Attachment. Drawing from the concept pioneered by Barabási and Albert <cit.>, preferential attachment dictates that new connections in the network are established preferentially with nodes possessing a higher number of existing connections. In our adaptation, we modify this concept to elevate certain nodes to the status of hubs without requiring the network to continuously grow. 
Instead of new nodes joining and preferentially connecting to highly connected nodes, each existing node leverages information from its neighbors to identify and connect to the most frequently connected nodes (up to a predefined number h). This mechanism enables the organic emergence of hubs within the network, with selected nodes naturally assuming central roles based on their connectivity without any explicit distinction other than their number of incoming links. Random Attachment. Inspired by gossip-based peer sampling algorithms <cit.>, random attachment ensures that nodes maintain connections with a representative and diverse subset of the network. This strategy promotes network robustness by preventing excessive clustering and dependency on specific nodes (hubs). When existing hubs disappear (e.g., due to failures or departure), other nodes within the network are opportunistically elevated to hub status, ensuring continuity and adaptability of the network topology over time. Our target is to obtain a topology of the network that has the following properties: (i) There are h defined hubs, with h a parameter defined before the start of the network and common to all nodes, (ii) ignoring hubs, the distribution of the remaining connections is random, and iii) each node has c connections, consisting of h connections to hubs and c-h connections to random nodes. Through simulation evaluation, we demonstrate in the sequel the effectiveness and advantages of our protocol with respect to state-of-the-art algorithms. §.§ Elevator detailed description The algorithm uses the following parameters and data structures: * Parameter c: The maximum number of outgoing connections (its default value for all nodes is 20). * Parameter h: The number of preferential attachment connections (its default value for all nodes is c/2). * Parameter maxsize_buffer_backward: The maximum number of backward connections to send (its default value for all nodes is 100). * Structure cache: The list of outgoing connections. The list is implemented as an array of size c. The list is initialized with random existing addresses (random connections to other nodes of the network). * Structure backward_peers: The list of other nodes that have tried to connect to the node. The list is implemented as a linked list (initially empty). Additionally, we have three temporary structures: (i) frequency_map holds the frequency of occurrences for all neighbors of neighbors, implemented as a map (node → integer), (ii) preferred holds the list of preferred nodes, implemented as a linked list, and (iii) preferred_backward holds the list of backward connections of the preferred nodes, implemented as a linked list. The proposed protocol executes the following actions at each cycle: Each node retrieves the neighbor's list of their neighbors (i.e., the neighbors at distance two). The node then builds an ordered list of the most frequent peers (the frequency map), and contacts the c most frequent nodes (called preferred). Each contacted node sends back to the contacting node a maximum of maxsize_buffer_backward addresses from its backward list, maintained in the structure backward_peers, and adds the contacting node to its backward list. The cache of the contacting node is then reset as an empty array. Then the node selects the h most frequent peers and c-h random peers from the list of backward peers of all preferred peers to fill its cache. 
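To make the cycle description concrete, here is a compact single-process sketch (assuming Python; the reference implementation is in Java on PeerSim, and the authoritative pseudocode is given in the Algorithm listings referenced below). Names such as ElevatorNode, on_contact, and cycle are ours, network access is idealized as direct object references, the final cache-padding step described next in the text is included at the end of cycle(), and the split between the h most frequent peers and the c-h random backward peers follows one natural reading of the description above.

```python
import random
from collections import Counter

class ElevatorNode:
    """Simplified Elevator node; structure and names are ours, not the paper's."""

    def __init__(self, c=20, h=10, max_backward=100):
        self.c, self.h, self.max_backward = c, h, max_backward
        self.cache = []           # outgoing connections (node references)
        self.backward_peers = []  # nodes that selected us as a preferred peer

    def init(self, seeds):
        """Service API: start with a random subset of existing nodes."""
        self.cache = list(seeds)[: self.c]

    def get_peer(self):
        """Service API: a random peer from the cache."""
        return random.choice(self.cache)

    def get_hub(self):
        """Service API: a random hub; after a cycle the first h cache entries
        are the preferentially attached (most frequent) peers."""
        return random.choice(self.cache[: self.h])

    def on_contact(self, caller):
        """Invoked when another node contacts us as one of its preferred peers."""
        sample = random.sample(self.backward_peers,
                               min(self.max_backward, len(self.backward_peers)))
        self.backward_peers.append(caller)
        return sample

    def cycle(self):
        # 1. frequencies of the neighbours of our neighbours (distance two)
        freq = Counter(nn for n in self.cache for nn in n.cache if nn is not self)
        # 2. contact the c most frequent nodes and gather their backward lists
        preferred = [p for p, _ in freq.most_common(self.c)]
        preferred_backward = [b for p in preferred for b in p.on_contact(self)]
        # 3. reset the cache: h most frequent peers, then c-h random backward peers
        self.cache = [p for p, _ in freq.most_common(self.h)]
        pool = list({b for b in preferred_backward
                     if b not in self.cache and b is not self})
        random.shuffle(pool)
        self.cache.extend(pool[: self.c - self.h])
        # 4. if still short, pad with random peers taken from the frequency map
        filler = [p for p in freq if p not in self.cache]
        random.shuffle(filler)
        while len(self.cache) < self.c and filler:
            self.cache.append(filler.pop())
```

In a full run, every node would execute cycle() once per simulation cycle, and the parameter h fixes how many hubs the overlay is intended to converge to.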
If the cache is not full, the node adds random peers from the frequency map to the cache until the size of the cache is c (see Algorithm <ref> and Algorithm <ref> for detailed pseudocode of the algorithm). § EXPERIMENTAL EVALUATION We evaluate our proposal by carrying out a simulation campaign. All simulations use the Java PeerSim simulator <cit.>. We have modified the simulator to add parallelism to accelerate computations. With Peersim, we implemented our algorithm Elevator, and state-of-the-art PROOFS <cit.> and Phenix <cit.> algorithms. Also, we used the implementation of Newscast provided by PeerSim. A detailed description of these algorithms can be found in Appendix <ref>. All simulations were run with a network of size n=1000. As the Phenix network needs a growing network to work, we started the Phenix algorithm with a network size of 20, and capped the size of the network to 1000. The simulations were run during 1000 cycles, and we repeated each simulation 100 times. All simulations were started with a network initialized as a k-out random graph, with k=c=20. All simulations were run on 16 vCPU, using 64G of memory, on a cluster composed of 10 servers of the following type: Machine Memory Processors Cores DELL PowerEdge XE8545 2 To 2 x AMD EPYC 7543 128 threads @ 2.80 GHz DELL PowerEdge R750xa 2 To 2 x Intel Xeon Gold 6330 112 threads @ 2.00 GHz We evaluated the following metrics: in-degree distribution, clustering coefficient, average shortest path length, and diameter. More details about those classical graph metrics can be found in Appendix <ref>. Figure <ref> illustrates that the degree distributions of Newscast and PROOFS exhibit patterns akin to a normal distribution. We see similar results for Elevator, except for a distinct group of 10 hubs with an in-degree of 999. By contrast, the Phenix protocol's degree distribution conforms to a power-law distribution. PROOFS and Newscast maintain a low clustering coefficient during all simulations, as seen in Figure <ref>. On the contrary, Phenix and Elevator have both a clustering coefficient of around 0.55. For Phenix, the value is related to the power-law distribution of in-degree, and for Elevator, the value is linked to the presence of hubs, that are connected to everyone, and this automatically increases the value of the coefficient. As we can see in Figure <ref>, Elevator has a very low average path length, with a value below 2. This value is due to the presence of hubs in the network, that permit to have a maximum distance of 2 between 2 nodes. Phenix is a bit better, with a value slightly above 1.9. PROOFS is very close, with a value around 2.15 and Newscast is a bit below 2.6. All these values are very good and thus we need to compute the diameter to discriminate between algorithms. In Figure <ref>, we see that Elevator gives a network with a diameter almost equal to the average path length, with a value almost equal to 2. Again, this value is due to the presence of hubs in the network. The Phenix algorithm yields similar results. This is better than PROOFS and Newscast, which output respectively 3 and 4 for this metric. We also compared the algorithms according to their resilience to crashes, churn, and attacks on hubs, as shown below. Additional results and the accompanying figures are included in the Appendix <ref>. §.§ Resilience to crashes We analyze the performance of the four algorithms when the network suffers crashes. Resilience to sparse crashes. 
In Figure <ref>, we consider the biggest weakly connected cluster of the network after the run of each algorithm; we then remove all nodes inside it one by one and compute the number of nodes outside this cluster. As we can see in Figure <ref>, for all algorithms we do not observe outsider nodes until we remove 80% of the nodes. Resilience to massive crashes. To simulate a brutal failure, we disconnected 50% of the nodes in the middle of the simulation, i.e., in this case, we disconnected 500 nodes at cycle 500 (as there are 1000 nodes in total and 1000 cycles). As we can see in Figures <ref> and <ref>, the performance of Elevator is not affected, as the in-degree distribution is still the same, and we have 10 hubs with an in-degree of 499. The degree distribution is also the same for Newscast and PROOFS. For Phenix, the degree distribution remains the same, with values going up to a maximum of 999, even if there are only 500 nodes in the network. This is because the nodes have kept in their cache the addresses of (old) nodes that are no longer in the network. In Figure <ref>, the clustering coefficient evolution shows that it is not affected by the crashes, as we have almost the same results as those obtained without a crash. The same observation holds for the average path length and the diameter, as we can see in Figures <ref> and <ref>. §.§ Resilience to churn We now analyze the performance of the four algorithms when the network is subject to churn. To simulate churn, we disconnect 10% of the nodes at each cycle and replace them with the same number of new nodes, each connected to 20 nodes chosen uniformly at random. The churn occurs during 500 cycles, between cycle n°250 and cycle n°750. As the Phenix algorithm needs a growing network to work, the way we implement churn for it differs. Following previous work <cit.>, in the case of Phenix we implement churn by keeping the number of removed nodes smaller than the number of added nodes at each cycle, assuming nodes are removed following a normal distribution 𝒩(0,1), for all cycles of the simulation. As we can see in Figures <ref> and <ref>, the in-degree distribution of Elevator remains the same, with 10 hubs. PROOFS seems affected by churn, as the mean degree distribution goes to 10 instead of 20 without churn. In Figure <ref> we can observe that we have almost the same results for the clustering coefficient as those obtained without churn, except for Phenix, which seems affected by churn: its clustering coefficient varies greatly, probably because the coefficient decreases a lot if the nodes affected by churn are the ones with a high in-degree. For the average path length, PROOFS is the most affected, with a value going from 2.25 without churn to 2.5 with churn, as we can see in Figure <ref>. In Figure <ref>, we can see that the diameter varies with churn, with a mean going up to 2.5 instead of 2.0, but the values for Phenix and Elevator remain below those of Newscast and PROOFS. §.§ Resilience to hub-targeted attacks We hereby analyze the performance of the four algorithms after a targeted attack on the hubs during the execution of the simulation. To simulate a hub-targeted attack, we disconnected the 10 nodes that have the highest in-degree in the middle of the simulated scenario. Logically, Newscast and PROOFS are not affected by the attack, as there are no hubs in the networks built by these algorithms.
For Elevator, as we can see in Figure <ref> and <ref>, the in-degree distribution remains similar, with 10 high-in-degree peers that have each an in-degree of 989. We are thus confident in the capacity of our algorithm to promote new nodes to the position of hubs if the previous hubs were disconnected. In Figure <ref> we can see that we have almost the same results as the results obtained without crashes for the clustering coefficient, except for the clustering coefficient dropping from 0.55 to 0.3 in the middle of the simulation for Elevator, which is logical as the 10 hubs are disconnected. The drop is only temporary, as the value goes back to 0.55 almost immediately. For the average path length and the diameter there is no impact, as we can see in Figure <ref> and <ref>. Summary. In Figure <ref> we compare the in-degree distribution of the network after the run of the Elevator algorithm for a various number of hubs <ref>, and also for each context of simulation <ref>. The shape of the degree distribution remains consistent across different hub counts, except for a scenario with 20 hubs where nodes exclusively connect to these hubs (resulting in a multi-star topology). This phenomenon aligns with the prescribed number of preferred connections (h = c = 20), where nodes exclusively link to elevated hub nodes, omitting random connections entirely. The shape of distribution also remains consistent across failure contexts. In Figure <ref>, we compare Elevator across all contexts for the different metrics, and we can see that there are not many variations in values, as expected from the definition of our protocol and as seen in previous comparative analyses presented above. § CONCLUSION We proposed a novel peer sampling algorithm, Elevator, designed for unstructured P2P networks, which facilitates the organic promotion of specific nodes to serve as hubs. Our simulations confirm that the Elevator algorithm successfully maintains network connectivity, constructs networks with low diameters, achieves stability with a defined number of hubs (denoted as h), and demonstrates resilience against crashes, churn, and targeted attacks on hubs. We anticipate that this work will pave the way for a new category of algorithms known as "hub sampling algorithms", which could hold significant relevance for specific decentralized applications. For instance, such algorithms may accelerate the transmission of machine learning models in federated learning scenarios. While our current study does not delve into these specific use cases, we envision exploring federated learning applications within this network paradigm in future investigations. § DESCRIPTION OF PROOFS, NEWSCAST AND PHENIX ALGORITHMS As our goal is to present our new hub sampling algorithm and compare it to previous peer sampling algorithms, we will (briefly) present three peer sampling algorithms (PROOFS, Newscast, and Phenix). We chose to compare our proposed algorithm to these three algorithms as they are widely used in the literature. Newscast is used for gossip learning<cit.>, PROOFS is a foundational algorithm, as Secure Cyclon<cit.>, one of the latest peer sampling algorithm in the literature, is based on Cyclon<cit.>, itself based on PROOFS. Phenix is interesting as it has especially been conceived to be resilient to failures and Byzantine attacks and also to construct networks that have a low diameter. §.§.§ The PROOFS algorithm: The PROOFS algorithm, as presented in <cit.> is a very simple algorithm used to create a peer sampling service. 
At each cycle, each node initiates a neighbor exchange (or shuffling) with another peer q chosen at random. The peer selects a random subset of size l (the shuffle length, a global parameter) and sends this subset to q. Upon reception of the subset, the node q also selects a random subset and sends it to p. When the node receives the subset of q, it replaces the previous entry in its cache, starting with the empty cache slots (if any) and then replacing entries previously sent to q. The parameters of the algorithm are c, the size of the list of outgoing connections, and l, the shuffle length, i.e. the number of outgoing connections exchanged with a peer during a neighbor exchange. The cache list is implemented as an array of size c. The list is initialized with random values (random connections to other nodes of the network). The goal of the algorithm is to produce a network that is “well-mixed”, in the sense that after enough shuffling operations, the node’s neighbors are essentially drawn at random from the set of all peers. §.§.§ The Newscast algorithm: The Newscast algorithm<cit.> is similar to PROOFS but is more generic and adds the idea of "age" for the node descriptors. The age of the node descriptors is incremented at each cycle. The goal is to create a peer sampling service that allows each node of the network to connect to a random subset of the nodes in the network. The parameters of the algorithm are c, the size of the list of outgoing connections, the mode of the peer selection (random or tail, but for simulations we only used random), and the mode of view propagation. The mode of view propagation can be push, pull, or push-pull, as described below: * Push strategy: At each cycle, a node will send its knowledge to the selected node * Pull strategy: At each cycle, a node will ask for knowledge from the selected node and wait for the answer. * Push-Pull strategy: At each cycle, a node will both use a Push and a Pull strategy. As the push-pull mode is the most efficient<cit.>, this is the mode we will present and the one that we used in our simulations. The cache list is implemented as an array of 2-tuple of size c. The two elements of the tuple are the node descriptor and the associated age of the descriptor. The list is initialized with random values for the node descriptors (random connections to other nodes of the network) and with 0s for the age of the node descriptors. At each cycle, each node initiates an exchange of membership information with a neighbor chosen at random. The node sends a buffer that contains c/2-1 random node descriptors from its cache to the other node, with c the parameter representing the size of the cache. The other node replies to the message with a similar message also containing a buffer with c/2-1 nodes descriptors. The node then merges the received buffer with its cache and filters the elements (removing the duplicates, the older elements, and finally removing the sent elements) to achieve a cache of the same size as before. If there are still too many elements, the algorithm removes elements of the cache at random until the length of the cache is c. As with PROOFS, the Newscast algorithm allows the creation of a network that has the same behavior as a random graph. In particular, this allows the network to be very resilient to failures and churn, as explained in <cit.>. 
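Both PROOFS and Newscast revolve around a pairwise exchange of cache entries; below is a minimal sketch of the PROOFS-style shuffle described above (our Python illustration, assuming each node exposes a cache list and a capacity attribute; it is not the original implementation).

```python
import random

def proofs_shuffle(p, q, l=8):
    """One PROOFS shuffle between node p and a randomly chosen peer q.
    l is the shuffle length; p.cache / q.cache are lists of peer references."""
    sent_by_p = random.sample(p.cache, min(l, len(p.cache)))
    sent_by_q = random.sample(q.cache, min(l, len(q.cache)))

    def integrate(node, received, sent):
        for addr in received:
            if addr in node.cache or addr is node:
                continue                                   # skip duplicates and self
            if len(node.cache) < node.capacity:            # fill empty slots first
                node.cache.append(addr)
            elif sent:                                      # then overwrite entries we sent
                node.cache[node.cache.index(sent.pop())] = addr

    integrate(p, sent_by_q, list(sent_by_p))
    integrate(q, sent_by_p, list(sent_by_q))
```

Newscast works similarly but additionally tags each cache entry with an age and keeps the freshest descriptors when merging, as described above.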
§.§.§ The Phenix algorithm: The Phenix algorithm<cit.> is a peer sampling algorithm that differs from PROOFS and Newscast in that its authors' goal is to create an algorithm that is both resilient to failures and produces low-diameter networks. The algorithm is inspired by the concept of preferential attachment<cit.> and constructs a network with a topology that is close to a power law. Contrary to the two previous algorithms, the Phenix algorithm executes only once for each node, when the node enters the network. The node splits its cache into two parts, G_random and G_friends. It then connects directly to the nodes in G_random, asks each node in G_friends for its list of neighbors, and adds them to the list G_candidates. Each node in G_friends also sends a ping message to all its neighbors, and all neighbors add the new node to their Γ list, to prevent crawling by malicious nodes. The new node then sorts this list of distance-two neighbors (G_candidates), selects the s most frequent nodes, and connects to them (G_preferred). When a node receives a connection request from a new node, it increments its internal counter and creates a backward connection with this node if the counter's value is greater than the constant γ. The parameters of the algorithm are c, the number of outgoing connections; τ, the number of cycles a node is kept in the Γ list (fixed at 10); γ, the constant limiting the number of backward connections (fixed at 20, so one backward connection is created for every 20 incoming connections); and s, the number of preferential connections, set to c/2 for our simulations. The cache list is implemented as an array of size c. The list is initialized with random values (random connections to other nodes of the network). The Γ list is implemented as a linked list and initialized as an empty list. For preferential attachment to work, the network needs to be initialized with a small number of nodes (the number is set to 20 in <cit.>, and we have chosen the same value for our simulations) and nodes are added progressively, with the number of nodes added at each cycle drawn from a normal distribution 𝒩(2,1). The constructed network is a scale-free network, with a topology following a power law. § METRICS USED: DEGREE DISTRIBUTION, CLUSTERING, AVERAGE PATH LENGTH AND DIAMETER §.§ Degree distribution The in-degree (resp. out-degree) distribution of a network represents the probability distribution of the in-degrees (resp. out-degrees) over the whole network. * A network that follows a random graph model (Erdős–Rényi model) should have a degree distribution that follows P(k) = \binom{n-1}{k} p^k (1-p)^{n-1-k} * A network that follows a power law (Barabási–Albert model) should have a degree distribution that follows P(k) = C k^{-γ} Indeed, observing the degree distribution should tell us whether our algorithm creates a network with a topology closer to a random graph or closer to a power law. §.§ Clustering coefficient A random graph tends to have a low clustering coefficient, whereas a network with many hubs will have a higher clustering coefficient. The clustering coefficient of a node is the number of edges between the neighbors of the node divided by the number of all possible edges between those neighbors. Intuitively, we can think of this coefficient as a measure of the degree to which nodes in a graph tend to cluster together (neighbors of the node are also neighbors of each other).
C_i = 2e_i / (k_i(k_i - 1)) Where: * e_i is the number of closed triangles containing node i. * k_i is the degree of node i, which is the number of links (edges) connected to that node. C = (1/n) ∑_{i=1}^{n} C_i §.§ Average Path Length As our goal is an algorithm that disseminates information efficiently in the network, it must produce a network topology with a low average path length. The shortest path lengths between all pairs of nodes were computed using the Floyd–Warshall algorithm. The average path length is the average of the shortest path lengths over all pairs of nodes in the graph. a = ∑_{s,t ∈ V, s ≠ t} d(s, t) / (n(n-1)) §.§ Diameter The diameter of a graph is a measure of the longest distance between any two vertices (nodes) in the graph, measured in terms of the number of edges. In other words, the diameter of a graph is the maximum shortest path between any pair of nodes in the network. diam(G) = max_{u,v ∈ V} d(u, v) While the average path length provides a basic measure of information dissemination efficiency in algorithms, it may overlook disparities in dissemination speed across different nodes within the network. An algorithm could potentially have a favorable average path length but still exhibit uneven dissemination speeds among nodes due to varying distances. Calculating the network's diameter, however, offers a more comprehensive assessment. § ADDITIONAL RESULTS FROM SIMULATIONS
http://arxiv.org/abs/2406.08876v1
20240613072302
Heuristics for Influence Maximization with Tiered Influence and Activation thresholds
[ "Rahul Kumar Gautam", "Anjeneya Swami Kare", "Durga Bhavani S" ]
cs.SI
[ "cs.SI" ]
Heuristics for Influence Maximization with Tiered Influence and Activation thresholds Rahul Kumar Gautam Anjeneya Swami Kare Durga Bhavani S June 2024 ============================================================================================ § ABSTRACT Information flows among people as they communicate through social media websites. Owing to this dependency on digital media, a person shares important information and regular updates with friends and family. The set of persons on social media forms a social network. Influence Maximization (IM) is a well-known problem in social networks. In social networks, information flows from one person to another according to an underlying diffusion model. There are two fundamental diffusion models: the Independent Cascade Model (ICM) and the Linear Threshold Model (LTM). In this paper, we study a variant of the IM problem called the Minimum Influential Seeds () problem, proposed by <cit.>. It generalizes the classical IM problem with LTM as the diffusion model. Compared to IM, this variant has additional parameters: the influence threshold for each node and the propagation range. The propagation range is a positive integer that specifies how far the information can propagate from a node. A node in the network is not influenced until it receives the same information from a sufficient number of neighbors (its influence threshold). Similarly, a node does not forward information until it receives the same information from a sufficient number of neighbors (its activation threshold). Once a node becomes activated, it tries to activate or influence its neighbors. The problem aims to select the minimum number of initial spreader nodes such that all nodes of the graph are influenced. In this paper, we extend the study of the problem. We propose heuristics that construct seed sets based on the average degree of non-activated nodes, closest first, and backbone-based heaviest path. We also propose a pruning technique that further reduces the size of the seed sets. We have implemented the existing heuristics and the proposed heuristics and performed extensive experimentation on 18 real-world data sets. The proposed heuristics give improved seed sets compared to the existing heuristics. § INTRODUCTION In this digital world, people get news and information digitally on their gadgets. Due to the advantages of social media, people are rapidly adopting it in their daily lives. However, social media has both pros and cons.
Some disadvantages are rumor-spreading, privacy-related issues, data theft, etc. Nevertheless, getting important information about what is happening in society becomes essential. As we have seen during the COVID-19 pandemic, the government has to make people aware of the pandemic and related safety measures. Almost all countries' governments use social media to run awareness campaigns as it saves the time and effort of the government. On account of the enormous applications of social media, the discussion on how information propagates on social media networks becomes very important. The influence maximization problem is related to information propagation and maximizing the influenced people in social networks. A node in the social network is said to be influenced by a message when it starts believing the message. On the other hand, a node is said to be activated when it starts forwarding (spreading) the message to its neighbors.  <cit.> introduced the Influence Maximization (IM) problem. The IM problem is also called the Target Set Selection (TSS) problem. Using the Linear Threshold Model (LTM) of diffusion, there are primarily two variants of the TSS problem: the maximization version and the minimization version. For the maximization version, input is a graph G=(V, E) and a positive integer k, and the problem asks to compute a target set (seed set) S ⊆ V of size at most k that activates the maximum number of vertices. For the minimization version, input is a graph G and an integer input ℓ, and the problem asks to compute a target set (seed set) S ⊆ V of the minimum size that activates at least ℓ vertices. If ℓ = |V|, then the problem asks to compute a target set S ⊆ V of the minimum size that activates all the vertices of the graph.  <cit.> studied a variant of the TSS problem, which they called the Perfect Evangelizing Set (PES) problem. For the PES problem, input is a graph G, influence and activation (evangelizing ) thresholds t_I, t_A: V →{0, 1, 2, …, } and the problem asks to compute a target set (seed set) S ⊆ V of minimum size that influences all the vertices of the graph.  <cit.> also introduced a problem called the Perfect Awareness (PA) problem, which is a specialization of the PES problem. In the PA problem t_I(v) = 1, ∀ v∈ V.  <cit.> proposed a variant of the TSS problem, which they call the Minimum Influential Seeds () problem, which is a generalization of the PES problem. Compared to the PES problem, the problem has an additional input parameter called the propagation range p∈ℤ^+. The propagation range indicates how far the information propagates from one node to another node. The problem with p = Diameter(G) is equivalent to the PES problem. For the experimentation <cit.> used two input parameters θ and α such that 0 < θ≤α≤ 1. For each vertex u ∈ V, they set the influence threshold t_I(u) = θ.deg(u) and the activation threshold t_A(u) = α.deg(u). In the problem, the information can flow from the sources (initial spreaders) up to p ∈ℤ^+ distance. If a vertex u receives information from at least t_I(u) neighbors, u becomes influenced. Likewise, If a vertex u receives information from at least t_A(u) neighbors, u becomes activated and forwards information to the neighbors. An initial spreader can not activate or contribute to activating vertices at more than p distance. The set of initial spreaders is called a seed set. The objective is to find the set of initial spreaders (seed set) of minimum size, which influences all the vertices of the graph.  
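As a concrete illustration of this diffusion model, the sketch below runs one diffusion from a candidate seed set under the tiered thresholds and the propagation range. It is a simplified rendition for intuition only (the bookkeeping of how far information may still travel is coarser than the formal definition); the graph is assumed to be an undirected adjacency dictionary and the function name is ours.

```python
def diffuse(adj, seeds, p, theta, alpha):
    """Spread from `seeds`; return (influenced, activated) sets.
    adj: dict mapping each node to a list of its neighbors.
    p: propagation range, theta/alpha: influence/activation threshold fractions."""
    activated = set(seeds)
    influenced = set(seeds)
    dist = {s: 0 for s in seeds}          # hops from the nearest seed
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in activated or not adj[v]:
                continue
            # active neighbors that can still forward information (within p hops)
            spreaders = [u for u in adj[v] if u in activated and dist[u] < p]
            deg = len(adj[v])
            if spreaders and len(spreaders) >= theta * deg:
                influenced.add(v)
            if spreaders and len(spreaders) >= alpha * deg:
                activated.add(v)
                dist[v] = 1 + min(dist[u] for u in spreaders)
                changed = True
    return influenced, activated

# Example: a small path graph with a single seed and p = 2
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diffuse(adj, seeds=[1], p=2, theta=0.4, alpha=0.6))
```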
<cit.> proposed minimization and maximization variants of the Influential Seeds problem. They have proposed heuristics for the problem. In this paper, we extend the study of the problem. We propose three heuristics and a pruning strategy to improve the solutions obtained by the heuristics. We have the following results: - The first heuristic picks the average number of highly influential inactive vertices for the seed set in each iteration. It improves the quality of the result and running time compared to existing heuristics for the problem. - The second heuristic finds the closest highly influential seed vertex from the seed set. - The third heuristic, a backbone-based heuristic, finds the dominant path and selects vertices from the dominant path for the seed set. - The proposed pruning technique is applied to solutions returned by the existing and as well as the proposed heuristics. The pruning technique improves the quality of the solutions. The whole paper is organized as follows. In section <ref>, we present related and recent studies on the problem, such as influence maximization, perfect awareness problem, and the target set selection problem. Section <ref> covers our proposed heuristics. In section <ref>, we analyze the performance of algorithms and present results on the real datasets. The final section of the article concludes the paper. § RELATED WORK Influence Maximization (IM) in social networks is an essential area of research due to its applications in business advertisements, viral marketing, and campaigning. The IM problem is also known as the Target Set Selection (TSS) problem. <cit.> introduced the IM problem. They proved that the IM problem is NP-hard and proposed experimental algorithms for the problem. The greedy algorithm by <cit.> guarantees approximation 1-(1/e) (e is logarithmic base). For the TSS problem based on the decreasing cascade model, the 1-(1/e)-ϵ-approximation algorithm was studied by <cit.>. <cit.> studied the hardness of the TSS problem and proved that the problem is hard to approximate within a poly-logarithmic factor. There are two fundamental diffusion models: the Independent Cascade Model (ICM) and the Linear Threshold Model (LTM). A vertex influences its neighbors with some probability in an independent cascade model. In the linear threshold model, each vertex is activated or influenced if the vertex has a number of active neighbor spreaders greater than or equal to the threshold value of the vertex. <cit.> studies evangelism in social networks based on the linear threshold model, in which each vertex has an influence threshold and activation threshold. When a vertex receives information from an influence threshold number of neighbors, it becomes influenced. For a vertex to become activated, the vertex should have at least an activation threshold number of activated neighbors. Later, <cit.> presented a Perfect Awareness (PA) problem on the linear threshold model in which the influential threshold for each vertex is considered one. The heuristics  <cit.> of the PA problem are proposed. The k-center problem <cit.>, PA problem <cit.>, evangelism in social networks <cit.>, graph burning problem <cit.>, opinion maximization <cit.>, target influence maximization in competitive  social networks<cit.> under the independent cascade model, and rumor minimization <cit.> are related problems to the problem. The experimental works on the problems PA, graph burning number, and opinion maximization are proposed in  <cit.>. 
In the real-life scenario, the information does not flow continuously in the social media networks. Over time, the propagation of information or advertisements in social media networks gets exhausted due to people's waning interest. So, the distance traversed by the information in social media networks needs to be addressed. <cit.> study the issue in the diffusion process and introduce a significant constraint as propagation range (information can traverse distance up to the propagation range from the initial spreaders). Due to the importance of propagation range in real scenarios, we study the problem and propose three heuristics for the problem. § PROBLEM DEFINITION The flow chart in Fig. <ref> shows that TSS is evolving to other problems by adding useful constraints. The problem is the generalization of TSS, evangelism in social networks, and perfect awareness problems. In a given graph and a set S ⊆ V, initially, only the vertices of the set S are influenced and activated. The variable p∈ℤ^+ denotes the propagation range. Let A be the set of activated vertices initially A=S. Initially, each activated vertex u ∈ S can send the information up to p distance. In the diffusion process, If |N_A(v)| ≥α * deg(v) where N_A(v) is a set of active neighbors of v, then v is added to A with the condition that v is activated by a smallest subset S'≠ϕ. Similarly, if |N_A(v)| ≥θ*deg(v), then v becomes influenced. We repeat the above steps until all the vertices of the set V are influenced. The objective is to find the minimum set of seed nodes ( seed set S of minimum size). § PROPOSED ALGORITHMS In this section, we propose three heuristics for the problem. The approach constitutes two significant steps. In the first step, the heuristics compute a potential seed set iteratively. In the second step, the potential seed set is pruned to obtain the smallest seed set. The tricky part of solving the problem is to find the smallest seed set S ⊆ V. In this paper, the heuristic algorithms construct the seed set S with criteria such that the algorithm influences the set of vertices V through the diffusion process. The diffusion process is implemented using the Breath First Search approach <cit.>. §.§ Average Degree Heuristic The high-level idea of the proposed method as given in Algorithm <ref> is as follows. The takes input parameters of graph G, propagation range p, influence threshold θ, and activation threshold α. Initially, all the vertices of are inactive, so initialize activated set A=ϕ, influenced set I= ϕ, and initial spreader list = [ ]. contains the potential seed nodes and is treated as a list since the order in which the nodes are added to is important for step. Repeat the following steps (1 and 2) if a non-influenced node exists in the graph G. * Select a potential list of spreaders L by calling the method as given in Algorithm <ref>. * For all vertices of u ∈ L. * Append u to . * Add u to the activation set A. * function updates activation set A and influence set I. As given in Algorithm <ref>, method returns ⌈n”/n'⌉ number of highly influential vertices where n' is the number of inactive vertices in the graph G, and n” is the sum of the inactive-degree of inactive vertices ( inactive-degree d_V\ A(v) means the number of inactive neighbors of v). The number ⌈n”/n'⌉ indicates the average degree of the graph G(V\ A,E). In each iteration (in Algorithm <ref>, lines 6-10), select a vertex w∉ L with maximum d_V\ A(w) value and append w to L. 
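A compact sketch of the spreader-selection step just described is given below, reusing the `diffuse` routine sketched earlier. The helper names and the top-k selection are our own simplification of the iterative max-degree selection; this is not the authors' code, and the pruning step described next is omitted here.

```python
import math

def inactive_degree(adj, v, activated):
    """Number of neighbors of v that are not yet activated."""
    return sum(1 for u in adj[v] if u not in activated)

def get_spreaders(adj, activated):
    """Return roughly ceil(n''/n') inactive vertices with the largest inactive degree,
    where n' is the number of inactive vertices and n'' the sum of their inactive degrees."""
    inactive = [v for v in adj if v not in activated]
    if not inactive:
        return []
    n1 = len(inactive)
    n2 = sum(inactive_degree(adj, v, activated) for v in inactive)
    k = max(1, math.ceil(n2 / n1))
    inactive.sort(key=lambda v: inactive_degree(adj, v, activated), reverse=True)
    return inactive[:k]

def average_degree_heuristic(adj, p, theta, alpha):
    """Grow the spreader list until every vertex is influenced (pruning omitted)."""
    spreaders, activated, influenced = [], set(), set()
    while len(influenced) < len(adj):
        for u in get_spreaders(adj, activated):
            spreaders.append(u)
        influenced, activated = diffuse(adj, spreaders, p, theta, alpha)
    return spreaders
```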
We use to refine the spreader list obtained by the above steps (1 and 2). The method, as given in algorithm <ref>, removes the extra spreaders from the list and returns the final seed set S. The importance of pruning is illustrated in Figure <ref>. The method reverses the list . For each vertex u ∈Ŝ, check whether the diffusion process can influence all vertices of the graph by spreader list Ŝ\{u}. If yes, remove the vertex u from ; otherwise, u must be present in the spreader list . The method removes the extra spreader nodes from and returns the final seed set S. The method gives results based on the order of vertices selected as the initial spreader. So, if we apply on two lists of equal sizes but with different sequences of spreader lists having the same elements then may return two reduced lists of different sizes of initial spreaders in both cases. Therefore, performance depends on how we construct the potential spreader list . §.§ Closest First Heuristic Suppose d_V\ A(u) ( where d_V\ A(u) is the number of inactive neighbors of the vertex u) and d_V\ A(v) are the top two highest values in the graph G(V\ A, E), where d(u,v)≤ 2. Both u and v together can activate and influence more vertices. As given in Fig. <ref>, the vertices 2 and 5 activate and influence 4 immediately. Therefore, the next seed vertex w is selected within two hops from the nodes with a maximum d_V\ A(w) value. Initially, the spreader list is empty. Find a vertex w with the most inactive neighbors within two hops from all vertices in and add w to . If no inactive vertex exists within two hops from the nodes in and the whole graph is not influenced, a vertex with the most inactive neighbors is added to . On each addition of a seed vertex to list , the diffusion process diffuses the information in graph G and updates influenced and activated sets I and A. The process stops when all vertices are influenced. After finding list , the process removes extra spreaders from the list and returns the final seed set S. §.§ Backbone-Based Heuristic As we saw in the previous algorithm, given in the algorithm <ref> where method returns highly influential list L. We add each vertex of list L one by one to the spreader list . Instead of adding all vertices from L to spreader list , append-only a vertex u from list L with the most inactive surrounding vertices. The reason for selecting a vertex u is that u belongs to a dense sub-graph of inactive vertices. As given in algorithm <ref>, the method finds a tree simultaneously from the list of roots L by assuming the weight on the vertices as the number of inactive neighbor vertices in the graph G(V\ A, E). It returns a root w∈ L associated with the heaviest BFS tree. The vertex w is included in and A, and the process marks vertices as activated or influenced. The process stops when all the vertices of become influenced. The method removes the extra spreaders from the list and returns the seed set S. As given in algorithm <ref>, the inputs for the method are , A, and L. The method uses the queue data structure to find the heaviest BFS tree. The initial step is to en-queue each vertex u ∈ L to queue Q and initialize W[v]=0 ∀ v∈ V. Do de-queue u from queue Q. For each unmarked vertex v ∈ N(u) \ A, update W[v] by W[v] + W[u], enqueue v to queue Q, D_b[v]=u ( where D_b array keeps track of the root u that discovers the vertex v. ) and mark v as visited. If W[v]> max, then update w by D_b[v] and max=W[v]. If Q becomes empty, stop; otherwise, repeat. 
In the last step, The method returns the vertex w ∈ L associated with the heaviest BFS tree. §.§ BFS and DFS Greedy Heuristics proposed two heuristics, BFS-GREEDY and DFS-GREEDY. We also apply the technique on the seed set returned by these heuristics to improve the seed set. § RESULT AND DISCUSSION We implemented our algorithms on the Ubuntu Operating System, and the hardware specifications are the processor Intel CoreTM i7-8700CPU@3.2Ghz and 16GB RAM. For comparison purposes, we set the parameters used in the algorithms as the propagation range P=3 and P=diameter(G), the activation threshold α=0.6, and the influence threshold θ = 0.4. The proposed algorithms are compared with recently published efficient algorithms by  <cit.>. The sources of datasets are network repository <cit.>, SNAP dataset<cit.>, social networks <cit.>, and data collected by <cit.>. The results are shown for propagation range p = 3 in Figure <ref> and Table <ref>. The results for p=diameter are shown in Figure<ref> and Table <ref>. In the tables, the algorithms from <cit.> are referred to as DFS-GREEDY (DFS-GD) and BFS-GREEDY (BFS-GD) while the proposed heuristics DFS-GREEDY-PRUNNING as DFS-PRUN, BFS-GREEDY-PRUNNING as BFS-PRUN, BACKBONE-BASED as BBH, AVERAGE-DEGREE as ADH and CLOSEST-FIRST as CFH. Average degree heuristic and backbone heuristics perform well for dense data sets like Karate <cit.>, Reed98 <cit.>, musae-squirrel <cit.>, and Web-pol blogs as both the algorithms find spreader vertices based on the importance of the degree of inactive vertices. Average Degree (ADH) and Closest-First (CFH) heuristics perform well on dense graphs with a high average clustering coefficient and a high average degree. and heuristics work well on sparse graphs. The works efficiently and effectively if the selected spreaders are in the neighborhood of each other. Our algorithm uses Prim's algorithm to find the dominated path based on degree, which can activate more vertices. Therefore, all our proposed four heuristics are improving recent results given by <cit.>. § CONCLUSION Due to the importance of social media networks in daily life, this paper studies the influence maximization problem with propagation range. If a vertex receives the same information sufficient times from its neighbors, the vertex in the network becomes influenced. Similarly, an influenced vertex in the network starts spreading information if the vertex receives the same information from enough neighbors. Indeed, information originating from a source does not flow continuously. So, the influence model includes the propagation range of information from the originating vertex. This paper proposes heuristics based on backbone-based heaviest paths and the average degree of non-activated vertices. The proposed heuristics and the pruning techniques give improved seed sets compared to existing heuristics. Applying genetic algorithms, particle swarm optimization, and other metaheuristic techniques to this problem is an interesting future direction. plainnat
http://arxiv.org/abs/2406.09005v1
20240613111849
Privacy Aware Memory Forensics
[ "Janardhan Kalikiri", "Gaurav Varshney", "Jaswinder Kour", "Tarandeep Singh" ]
cs.CR
[ "cs.CR" ]
PRIVACY AWARE MEMORY FORENSICS 1st Janardhan Kalikiri Indian Institute of Technology, Jammu jana1tech@gmail.com 2nd Gaurav Varshney Indian Institute of Technology, Jammu gaurav.varshney@iitjammu.ac.in 3rd Jaswinder Kour Indian Institute of Technology, Jammu 2021rcs2010@iitjammu.ac.in 4th Tarandeep Singh Indian Institute of Technology, Jammu tarandeep42@gmail.com June 17, 2024 =============================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT In recent years, insider threats and attacks have been increasing in terms of frequency and cost to the corporate business. The utilization of end-to-end encrypted instant messaging applications (WhatsApp, Telegram, VPN) by malicious insiders raised data breach incidents exponentially. The Securities and Exchange Board of India (SEBI) investigated reports on such data leak incidents and reported about twelve companies where earnings data and financial information were leaked using WhatsApp messages. Recent surveys indicate that 60% of data breaches are primarily caused by malicious insider threats. Especially, in the case of the defense environment, information leaks by insiders will jeopardize the country’s national security. Sniffing of network and host-based activities will not work in an insider threat detection environment due to end-to-end encryption. Memory forensics allows access to the messages sent or received over an end-to-end encrypted environment but with a total compromise of the user's privacy. In this research, we present a novel solution to detect data leakages by insiders in an organization. Our approach captures the RAM of the insider’s device and analyses it for sensitive information leaks from a host system while maintaining the user's privacy. Sensitive data leaks are identified with context using a deep learning model. The feasibility and effectiveness of the proposed idea have been demonstrated with the help of a military use case. The proposed architecture can however be used across various use cases with minor modifications. malicious insider, memory forensics, user privacy, privacy-aware forensics, deep learning, data leak detection, sensitive data detection § INTRODUCTION Insider threats come from users with legitimate authorized access to the organization’s assets and who exploit them intentionally or accidentally. The 2021 insider threat report by Cybersecurity Insider states that 98% of organizations feel vulnerable to insider threats. Most organizations (85%) consider unified visibility and access control across all apps, devices, web destinations, on-premises resources and infrastructure as significant to moderately important to prevent insider threats citesurvey.In many insider trading incidents, the violator posted information about the financial performance indicators of companies before their official disclosure through WhatsApp Messenger. This significantly impacted the stock price and gave the dishonest brokers a competitive advantage. The regulatory authorities could not detect these leaks due to the lack of monitoring technologies for these IM applications, which have become a channel for sensitive data leakages. 
SolarWinds Data Loss Prevention with ARM, Trustifi Outbound Shield, and ManageEngine Endpoint DLP Plus are commonly used data loss prevention tools in companies, but none of them addresses the prevention of data leaks through IM apps<cit.>. In this paper, we address this issue for the first time and propose an effective methodology to prevent insider attacks using memory forensics while ensuring user privacy. We demonstrate a way in which live-memory-forensics-based insider threat detection solutions can run on end systems with an assurance that the user's privacy will not be compromised. Memory is the workplace of the processor. Memory forensics is therefore one of the most effective digital forensic disciplines, aiming to extract digital evidence from the volatile data in RAM<cit.>. However, one of the challenges in this process is that the RAM also contains private/personal data of the user/employee, such as chats with family or information related to their social media accounts. A full analysis of RAM by the employer would compromise the privacy of the user. In this research work, we focus on how an employer can access meaningful traces and indicators of data leakage from end systems while assuring a benign user that their personal data privacy is not affected. We propose a novel privacy-aware memory forensics framework that solves this problem. This paper is organized as follows: Section II discusses related work in the field of memory forensics and user privacy preservation and identifies research gaps. Section III provides a detailed explanation of our proposed work. Section IV elucidates the implementation of our approach with experiments, results, and their analysis. Finally, Section V concludes and outlines future work in this direction. § RELATED WORK Memory forensics is a fast-growing field of computer forensics that assists investigators in examining malicious activities <cit.>. Memory forensics evolved in 2004 and was introduced by Michael Ford. Many tools were later developed in this area to help forensic analysis of memory, including Responder PRO, Memoryze, MoonSols Windows Memory Toolkit, winen, Belkasoft Live RAM Capturer, etc. Memory forensics is now actively used in forensic evidence collection and real-time incident response. S. Srinivasan <cit.> proposed a privacy preservation methodology for digital investigations by implementing policies for handling user data, but did not implement the proposed design. Frank Y.W. Law et al.<cit.> proposed a searchable encryption model to provide privacy in digital investigations where the disk image is analyzed; it did not include the contextual aspect of privacy preservation. M. Burmester et al.<cit.> proposed policy-based privacy preservation in disk forensics. The search performed was keyword-based and thus could include the user's private data. Waleed Halboob et al. <cit.> proposed a concept of quaternary privacy levels in computer forensics using the investigators' user data access control rights. The majority of existing published digital forensics investigation models or procedures have not incorporated a strategy for supporting data privacy protection <cit.>, <cit.>, <cit.>, <cit.> and <cit.>.
Also the previous works on applications of memory forensics to solve various problems<cit.>, <cit.> and <cit.> are computationally expensive due to the analysis of the entire RAM for their solution. The comparison of previous research work is tabulated in Table I. The following research gaps are identified: * Most of the schemes have focused on efficient memory forensics practices that require less computation. * None of the solutions are focusing on live memory forensics over instant messaging applications. * It was identified that none of the schemes focus on the user privacy aspects of memory forensics. * Not many schemes have focused on context-based searches for live memory forensics. These gaps have motivated our research on privacy-aware memory forensics. We propose a novel malicious insider detection scheme. We have taken WhatsApp as the IM application for our research and development which uses a BERT pre-trained model over sensitive context for the detection of sensitive messages from the WhatsApp memory dump acquired live from the RAM of a desktop. The proposed method performs a context-based sensitive WhatsApp messages detection on the Windows 10 operating system. The major contributions of this paper are: * Detailed study of Memory Forensics and issues concerning user privacy. * Designing an algorithm that can capture per-process live memory and individual chats of an instant messaging application (WhatsApp). * Proof of concept sensitive message detection tool over Windows. * Military sensitive training data set for BERT model for detecting malicious insiders attack in the military use case. § PROPOSED WORK This paper proposes an NLP-based architecture for Insider's attack to ensure user privacy in memory forensics. We demonstrate a novel approach for detecting sensitive data leaks by an insider (intentional or accidental) using the Windows desktop instant messaging WhatsApp while preserving user privacy in a defense use case. The goal is achieved in two significant steps: firstly the WhatsApp chats matching the desired context are retrieved to identify sensitive messages, and in the second step an alert message is generated and sent to the security center if any sensitive data leak is identified, appending the login user’s mobile number for further action. The proposed architecture is illustrated in Figure 1. §.§ Data Set Generation Due to the non-availability of the military data set consisting of sensitive data for fine-tuning the BERT module, we generated our training data set manually. For pre-training of the BERT model, we can train the model with a manually generated small data set for efficient performance. Post-analysis of recent defense data leakage cases and the following topics are considered while developing the data set. * Modernisation plans of defense equipment * Combat capabilities of the armed forces * Serviceability of weapon and mission-critical equipment * Movement of the fleet, battalion, military forces, and VIPs * Information about special operations §.§ Live Memory Capturing: Using ProcDump In our model, a Microsoft command-line utility called ProcDump <cit.> captures a particular process (WhatsApp) instead of the entire RAM (in GB). The proposed model hence is lightweight and utilizes significantly low computation and storage. ProcDump required administrative-level rights to dump a process memory. In most of the previous work on memory forensics, third-party tools are used to capture volatile data. 
The entire RAM was inspected for conducting the experiment. In our case, we only dump the WhatsApp process memory thereby reducing the dump storage requirement and improving the privacy from the first step of our analysis. With the specifications given in Table II, we implemented our privacy-aware memory forensics architecture on a user machine to detect sensitive data leaks over WhatsApp Desktop app installed on a Windows Machine. Our implementation over WhatsApp is a proof of our concept that privacy-aware memory forensics can be done over IM applications. To implement this our first step was to capture the memory of the WhatsApp desktop application in real-time. We send sample sensitive data on individual and group chats along with personal chat messages from systems/mobiles. WhatsApp desktop application launches multiple (generally 6 to 7) background processes while running on the user machine. All processes do not contain chat messages, thus capturing correct process memory with chat in it is challenging. After several experiments and analyzing the process IDs, the process whose dump eventually ended with chat messages was identified. We concluded that WhatsApp's process with the highest Process ID (PID) is the one that always contains the chat messages. The Algorithm is shown in Algorithm 1. The complexity of Algorithm 1 is of the order of O(n) where n is the number of processes running in the system. We identify the Process ID (PID) of that process to capture memory dump using the ProcDump tool and are able to retrieve all the user's chat messages. §.§ Extracting UNICODE & ASCII Strings: Using Strings Tool We used Microsoft's Sysinternals tool Strings v2.54<cit.> developed by Mark Russinovich to extract UNICODE and ASCII strings from captured memory dumps and saved them as a text file. §.§ Chat Messages Retrieval: Using Python Script After extracting the textual file from the captured memory dump, we retrieved the chat messages involving only the sensitive keywords. We redirected the sentences that involved organizational sensitive keywords (Military data in this use case) for further sensitivity detection based on context to the Sensitive Data Detection Model (SDDM) as shown in Figure 1. The algorithmic implementation for retrieval of sensitive chat messages is described in Algorithm 2. The complexity of Algorithm 2 is of the order of O(s*l*m) where s= number of sensitive words, l = no. of lines in the complete text, and m=no.of extracted messages. §.§ Context-based Sensitive Data Detection Model Our proposed method uses context-based feature extraction for efficient data detection. The user’s private data and the sensitive military data used in this proposed architecture use case are publicly unavailable. We have used a semi-supervised pre-trained module called BERT to identify sensitive data by generating vectors of data on contextual bases rather than keywords. § EXPERIMENTS, RESULTS AND ANALYSIS We designed our experimental setup to test the proposed model. We deployed the proposed architecture on Windows 10 user machine and installed the WhatsApp desktop version. The aim of the experiment is to access the proposed architecture when the malicious insider communicates military sensitive data over Whatsapp. In the following subsections, we describe the experiments conducted to demonstrate the deployment of our proposed model for detecting insider data leakage. 
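For concreteness, the capture and keyword-retrieval steps described above (picking the WhatsApp process with the highest PID, dumping it with ProcDump, extracting strings, and keeping only lines that contain organization-specific keywords) could be scripted roughly as follows. This is a hedged sketch, not the authors' tool: the executable locations, process name, and keyword list are illustrative assumptions, and the ProcDump/Strings utilities are assumed to be on the PATH.

```python
import subprocess
import psutil  # third-party process utility: pip install psutil

SENSITIVE_KEYWORDS = ["battalion", "convoy", "deployment"]   # placeholder keyword list

def whatsapp_pid():
    """Return the highest PID among WhatsApp processes (the one that holds chat text)."""
    pids = [p.info["pid"] for p in psutil.process_iter(["pid", "name"])
            if p.info["name"] and p.info["name"].lower().startswith("whatsapp")]
    return max(pids) if pids else None

def dump_and_extract(pid, dump_path="whatsapp.dmp", text_path="whatsapp.txt"):
    """Capture the process memory with ProcDump, then extract ASCII/Unicode strings."""
    subprocess.run(["procdump.exe", "-accepteula", "-ma", str(pid), dump_path], check=True)
    with open(text_path, "wb") as out:
        subprocess.run(["strings.exe", dump_path], stdout=out, check=True)
    return text_path

def keyword_hits(text_path):
    """Lines containing a sensitive keyword; these are the candidates passed to the SDDM."""
    with open(text_path, encoding="utf-8", errors="ignore") as f:
        return [line.strip() for line in f
                if any(k in line.lower() for k in SENSITIVE_KEYWORDS)]

if __name__ == "__main__":
    pid = whatsapp_pid()
    if pid is not None:
        candidates = keyword_hits(dump_and_extract(pid))
        print(f"{len(candidates)} candidate sensitive lines for context-based scoring")
```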
§.§ Assumptions While conducting experiments, we assume that the user’s machine is not compromised, root access is not available to the user, the process memory dump of the IM application installed on the user’s computer is genuine and not affected by any malicious activity and the analysis doesn’t affect the system working. It is assumed that the insider uses a device where the solution is installed. §.§ Tools Utilised The following tools are utilized while designing the proposed privacy-aware memory forensics model for insider threat detection. * Memory Dumping Tool: Microsoft ProcDump v10.11 * Strings extraction from Memory Image: Microsoft Strings V2.54 * Extraction of chat messages from Strings: Python Script * Implementation of Model: Python Script §.§ Implementation and Testing We implemented our privacy-aware memory forensics architecture on a user machine, with the specifications given in Table II, to detect sensitive data leaks over WhatsApp Desktop app installed on a Windows Machine. Many experiments are conducted on this model to assess the performance by supplying the testing data set. The method proceeded in the following sequence: * Model identified the running WhatsApp process ID from system information. * Memory dump of that particular process captured by using PrcoDump command-line utility. * Converted the memory dump file into readable form by retrieving the UNICODE, ASCII strings using the Strings tool and saved as a text file. * Chat messages containing sensitive military keywords are retrieved along with the user’s mobile number from the text file of the memory dump. * Identified the sensitivity of these retrieved chat messages using SDDM. * Alert Message is initiated consisting of a leaked sensitive message and user mobile number to the security center if any data breach is identified. §.§ Results and Analysis An offline test on the BERT model indicates that the model performed the task with 95% accuracy, and the corresponding confusion matrix is shown in Figure 2. We have used Recevier Operating Characteristic (ROC) for evaluating the performance of our model. In our case, True Positive Rate (TPR) TPR = 138/(138+12)= 0.92 and True Negative Rate (TNR) TNR = 148/(148+2)= 0.98 Thus the accuracy of our model is 95% We tested the trained model by supplying some sensitive and normal messages as input, and the model identified the sensitive messages based on the context and generated the corresponding output values; during the experiments, it was observed that the model provided a sensitivity score above 0.5 for sensitive messages and less than 0.5 for user personal messages. The proposed architecture extracted the chat messages containing sensitive data from the memory dump and identified all the sensitive messages exchanged through WhatsApp desktop application based on the context. It can be clearly seen in Figure 3. The private messages exchanged are not detected, maintaining the user's privacy with memory forensics. This is the scenario where organizations provide IT infrastructure to employees like desktops and laptops in security-sensitive environments. Our tool will run in the background over a host machine, capturing context-sensitive data. During experiments, we observed that deleted messages from the hard disk are also recovered from the RAM. The proposed model is lightweight and thus significantly reduces the computational power. It captures the process memory instead of the entire RAM. 
Hence, the data that needs to be analyzed is in Kilo Bytes instead of some Giga Bytes. The processing time is also reduced accordingly. Our experimental results demonstrate the design feasibility of detecting malicious insider threats by using privacy-aware memory forensics techniques. The performance of the proposed privacy-aware memory forensics architecture while addressing the research gaps is compared with previous research work and tabulated in Table III. It is evident from the table that * only 50% of the schemes work on live memory forensics. The significance of using live memory is that the data is available in raw form i,e it is not encrypted. Thus it gives a lot of scope to analyze the data. * None of the schemes implements user privacy preservation during memory forensics. All the schemes capture the entire RAM where the user's private data is also available. Using Memory Forensics on the entire RAM breaches users' privacy. * None of the schemes works on per process capturing of RAM. Working with the entire captured RAM is computationally expensive compared to per-process captured RAM. * Most of the models perform keyword-based searches instead of context-based ones. Keyword-based searches are recommended for identifying the required data but since no context-based searches are done, it leads to the detection of user's private messages as well. Thus, context-based search is required so that the user's privacy is maintained. Our proposed model only targets specific applications and applies privacy preservation on live memory forensics using the context-based approach, thus addressing all the identified gaps. § CONCLUSIONS, LIMITATIONS AND FUTURE SCOPE We demonstrated optimal data leakage detection application of privacy-aware memory forensics in the military use case. All research gaps identified earlier are addressed by our model. Our model inspects process-based live memory instead of the entire RAM reducing the computations and hence is lightweight. The proposed method works for the new versions of operating systems because of the utilization of Microsoft utilities for acquiring memory (ProcDump) and converting dump files (Strings tool) in contrast to the third-party open-source tools which may not function properly. User privacy is ensured as only the sensitive data is determined from the live memory dump as our model mounts a context-based search for sensitive words which ensures the detection of only sensitive data. The privacy preservation feature of the proposed novel method escalates the memory forensics capabilities in the field of information security. Our model launched on the user machine, functions all actions automatically and initiates an alert message with necessary data to the security center. Since our model is designed for the military use case, one limitation of our scheme was the limited data set for training the model. Interesting future work in this direction is to implement privacy preservation at the kernel API level so that any memory acquisition tool cannot access the user’s private data. 00 idwatch Idwatch, Insider threats and data breaches. Available: https://www.idwatchdog.com/insider-threats-and-data-breaches. Accessed:2022-08-23[online]. comparitech Stephen Cooper, comparitech, Data loss prevention tools software. Available: https://www.comparitech.com/data-privacy-management/data-loss-prevention-tools-software, Accessed:2022-08-23 [online]. survey Cybersecurity Insiders, Insider threat report. 
Available: https://www.cybersecurity-insiders.com/portfolio/2021-insider-threat-report-gurucul, Accessed:2022-08-24[online]. insider Ekran, Insider threat statistics facts and figures. Available: https://www.ekransystem.com/en/blog/insider-threat-statistics-facts-and-figures. Accessed: 2022-06-15[online]. generic Ethar Qawasmeh, Mohammed I. Al-Salehy, and Ziad A. Al-Sharif, Towards a Generic Approach for Memory Forensics P.2019. email R. Padmavathi Iyer, Pradeep K. Atrey, Gaurav Varshney and Manoj Misra, Email Spoofing Detection Using Volatile Memory Forensics P.2017 IEEE Conference on Communications and Network Security (CNS): The Network Forensics Workshop chat Abdullah Kazim, Fadya Almaeeni, Shamsah Al Ali and Farkhund Iqbal, Memory Forensics: Recovering Chat Messages and Encryption Master Key P.2019 10th International Conference on Information and Communication Systems (ICICS) network Mohammed I. Al-Saleh1, Ziad A. Al-Sharif, and Luay Alawneh, "Network Reconnaissance Investigation: A Memory Forensics Approach" P.2019 10th International Conference on Information and Communication Systems (ICICS) doh Gaurav Varshney, Padmavathi Iyer and Pradeep Atrey, Manoj Misra "Evading DoH via Live Memory Forensics for Phishing Detection and Content Filtering" P.2021 13th International Conference on Communication Systems and Networks (COMSNETS) review Arjun Chetry, Uzzal Sharma, Memory Forensics Analysis for Investigation of Online Crime - A Review P.2019 6th International Conference on Computing for Sustainable Global Development (INDIACom) Electronics 2021, 10, 1380. https:// doi.org/10.3390/electronics10121380 Psurvey A. Nieto, R. Rios, and J. Lopez, "Privacy-Aware Digital Forensics", Security and Privacy for Big Data, Cloud Computing and Applications, 2019. NICS Lab. Publications: https://www.nics.uma.es/publications policy S. Srinivasan, "Security and Privacy in the Computer Forensics Context",2006 International Conference on Communication Technology, IEEE Enc Law FY, Chan PP, Yiu SM, et al. "Protecting digital data privacy in computer forensic examination" In: Systematic Approaches to Digital Forensic Engineering (SADFE), 2011 IEEE Sixth International Workshop on. IEEE; 2011. p. 1–6 choose M. Burmester, Y. Desmedt, R. Wright, and A. Yasinsac, "Security or Privacy, Must We Choose?" Symposium on Critical Infrastructure Protection and the Law, 2002 quarter Waleed Halboob, Muhammad Abulaish and Khaled S. Alghathbar, "Quaternary Privacy-Levels Preservation in Computer Forensics Investigation Process" 6th International Conference on Internet Technology and Secured Transactions, 2011 privacy Ali Dehghantanha and Katrin Franke, Privacy-Respecting Digital Investigation P.2014 Twelfth Annual Conference on Privacy, Security and Trust (PST) bert Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding P.2018 Google AI Language DOI: https://arxiv.org/pdf/1810.04805.pdf strings"Strings v2.54", https://docs.microsoft.com/en-us/sysinternals/downloads/strings proc "ProcDump v10.11", https://docs.microsoft.com/en-us/sysinternals/downloads/procdump
http://arxiv.org/abs/2406.09415v1
20240613175958
An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels
[ "Duy-Kien Nguyen", "Mahmoud Assran", "Unnat Jain", "Martin R. Oswald", "Cees G. M. Snoek", "Xinlei Chen" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Exploring Transformers on Individual Pixels D-K. Nguyen et al. 1FAIR, Meta AI 2University of Amsterdam An Image is Worth More Than 16×16 Patches: Exploring Transformers on Individual Pixels Duy-Kien Nguyen2 Mahmoud Assran1 Unnat Jain1 Martin R. Oswald2 Cees G. M. Snoek2 Xinlei Chen1 June 17, 2024 ==================================================================================================== § ABSTRACT This work does not introduce a new method. Instead, we present an interesting finding that questions the necessity of the inductive bias – locality in modern computer vision architectures. Concretely, we find that vanilla Transformers can operate by directly treating each individual pixel as a token and achieve highly performant results. This is substantially different from the popular design in Vision Transformer, which maintains the inductive bias from ConvNets towards local neighborhoods (by treating each 16×16 patch as a token). We mainly showcase the effectiveness of pixels-as-tokens across three well-studied tasks in computer vision: supervised learning for object classification, self-supervised learning via masked autoencoding, and image generation with diffusion models. Although directly operating on individual pixels is less computationally practical, we believe the community must be aware of this surprising piece of knowledge when devising the next generation of neural architectures for computer vision. § INTRODUCTION The deep learning revolution can be characterized as a revolution in inductive biases for computer vision. Learning previously occurred on top of manually crafted features, such as those described in <cit.>, which encoded preconceived notions about useful patterns and structures for specific tasks. In contrast, biases in modern features are no longer predetermined but instead shaped by direct learning from data using predefined model architectures. This paradigm shift's dominance highlights the potential of reducing feature biases to create more versatile and capable systems that excel across a wide range of vision tasks. Beyond features, model architectures also possess inductive biases. Reducing these biases can facilitate greater unification not only across tasks but also across data modalities. The Transformer architecture <cit.> serves as a great example. Initially developed to process natural languages, its effectiveness was subsequently demonstrated for images <cit.>, point clouds <cit.>, codes <cit.>, and many other types of data. Notably, compared to its predecessor in vision – ConvNet <cit.>, Vision Transformer (ViT) <cit.> carries much less image-specific inductive biases. Nonetheless, the initial advantage from such biases is quickly offset by more data (and models that have enough capacity to store patterns within the data), ultimately becoming restrictions preventing ConvNets from scaling further <cit.>. Of course, ViT is not entirely free of inductive bias. It gets rid of the spatial hierarchy in the ConvNet and models multiple scales in a plain architecture. However, for other inductive biases, the removal is merely half-way through: translation equivariance still exists in its patch projection layer and all the intermediate blocks; and locality – the notion that neighboring pixels are more related than pixels that are far apart – still exists in its `patchification' step (that represents an image with 16×16 patches on a 2D grid) and position embeddings (when they are manually designed). 
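To make the contrast concrete, the tokenization difference can be written in a few lines of PyTorch: patchify produces the standard ViT tokens from 16×16 neighborhoods, while pixelify simply treats every pixel as a token. This is our own sketch of the reshaping step, not code released with the paper.

```python
import torch

def patchify(images, patch=16):
    """ViT-style tokens: each patch x patch neighborhood becomes one token."""
    B, C, H, W = images.shape
    t = images.unfold(2, patch, patch).unfold(3, patch, patch)      # B,C,H/p,W/p,p,p
    return t.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)

def pixelify(images):
    """Pixel tokens: every pixel is its own token, with only C channels of content."""
    B, C, H, W = images.shape
    return images.flatten(2).transpose(1, 2)                        # B, H*W, C

x = torch.randn(2, 3, 32, 32)         # a CIFAR-sized batch
print(patchify(x).shape)              # torch.Size([2, 4, 768])  -> 4 patch tokens
print(pixelify(x).shape)              # torch.Size([2, 1024, 3]) -> 1024 pixel tokens
```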
Therefore, a natural question arises: can we completely eliminate either or both of the remaining two inductive biases? Our work aims to answer this question. Surprisingly, we find locality can indeed be removed. We arrive at this conclusion by directly treating each individual pixel as a token for the Transformer and using position embeddings learned from scratch. In this way, we introduce zero priors about the 2D grid structure of images. Interestingly, instead of training divergence or steep performance degeneration, we obtain better results in quality from the resulting architecture. For easier reference, we name this , short for Pixel Transformer. Note that our goal is not to promote as an approach to replace ViT, but the fact that works so well suggests there is more signals Transformers can capture by viewing images as sets of individual pixels, rather than 16×16 patches. This finding challenges the conventional belief that `locality is a fundamental inductive bias for vision tasks' (see <ref>). In the main paper, we showcase the effectiveness of via three different case studies: (i) supervised learning for object classification, where CIFAR-100 <cit.> is used for our main experiments thanks to its 32×32 input size, but the observation also generalizes well to ImageNet <cit.>; (ii) self-supervised learning on CIFAR-100 via standard Masked Autoencoding (MAE) <cit.> for pre-training, and fine-tuning for classification; and (iii) image generation with diffusion models, where we follow the architecture of Diffusion Transformer (DiT) <cit.>, and study its pixel variant on ImageNet using the latent token space provided by VQGAN <cit.>. In all three cases, we find exhibits reasonable behaviors, and achieving results better in quality than baselines equipped with the locality inductive bias. This observation is further generalized to fine-grained classification and depth estimation tasks in the appendix. As a related investigation, we also examine the importance of two locality designs (position embedding and patchification) within the standard ViT architecture on ImageNet. For position embedding, we have three options: sin-cos <cit.>, learned, and none – with sin-cos carrying the locality bias whilst the other two not. To systematically `corrupt' the locality bias in patchification, we perform a pixel permutation before dividing the input into 256-pixel (akin to a 16×16 patch in ViT) tokens. The permutation is fixed across images, and consists of multiple steps that swap a pixel pair within a distance threshold. Our results suggest that patchification imposes a stronger locality prior, and (given ) translation equivariance is still indispensable for network designs. Admittedly, is not as practical as ViT, since treating each pixel as a token will lead to a sequence length much longer than previously adopted for images. This is especially limiting as Self-Attention operations in Transformers demand quadratic computations. In practice, patchification is still arguably the most effective idea that trades quality for efficiency, and locality is still useful. Nevertheless, we believe our investigation delivers a clean, compelling message that locality is not a necessary inductive bias for model design. We believe this finding will be an integral piece of the community knowledge when exploring the next generations of architectures to process images. § RELATED WORK Locality for images. 
To the best of our knowledge, most modern vision architectures <cit.>, including those aimed at simplifications of inductive biases <cit.>, still maintain locality in their design. Manually designed visual features before deep learning are also locally biased. For example, SIFT <cit.> uses a local descriptor to represent each point of interest; HOG <cit.> normalizes the gradient strengths locally to account for changes in illumination and contrast. Interestingly, with these features, bag-of-words models <cit.> were popular – analogous to the set-of-pixels explored in our work.

Locality beyond images. The inductive bias of locality is widely accepted beyond modeling 2D images. For text, a natural language sequence is often pre-processed with `tokenizers' <cit.>, which aggregate dataset statistics to group frequently-occurring adjacent characters into sub-words. Before Transformers, recurrent neural networks <cit.> were the default architecture for such data, exploiting temporal connectivity to process sequences step-by-step. For even less structured data (point clouds <cit.>), modern networks <cit.> resort to various sampling and pooling strategies to increase their sensitivity to the local geometric layout. In graph neural networks <cit.>, nodes with edges are often viewed as being locally connected, and information is propagated through these connections to farther-away nodes. Such a design makes them particularly useful for analyzing social networks, molecular structures, etc.

Other notable efforts. We list four efforts in a rough chronological order, and hope they can provide historical context from multiple perspectives for our work:
* For ConvNets, relevant attempts have been made to remove locality. Notably, <cit.> replaces all the spatial convolutional filters with 1×1 filters in a ResNet <cit.>. It provides more interpretability for understanding the decision-making process of a ConvNet, but without inter-pixel communication, the resulting network is substantially worse in performance. Our work instead uses Transformers, which are inherently built on set operations, with the Self-Attention mechanism handling all-to-all communication; understandably, we attain better results.
* Before ViT gained popularity, iGPT <cit.> was proposed to directly pre-train Transformers on pixels following their success on text <cit.>. In retrospect, iGPT is a locality-free model for self-supervised next (or masked) pixel prediction. But despite the expensive demonstrations, its performance still falls short compared to simple contrastive pre-training <cit.> for ImageNet linear classification. Later, ViT <cit.> re-introduced locality (e.g., via patchification) into the architecture, achieving impressive results on many benchmarks including ImageNet. Since then, the community has moved on with 16×16 patches as the default tokens for images. Even today, it is still unclear whether higher resolution or locality is the key differentiator between the two. Our work closes this understanding gap, pointing – with systematic analyses – to resolution, not locality, as the enabler for ViT.
* Perceiver <cit.> is another series of architectures that operate directly on pixels for images. Aimed at being modality-agnostic, Perceiver designs latent Transformers with cross-attention modules to tackle the efficiency issue when the input is high-dimensional. However, this design is not as widely adopted as plain Transformers, which have consistently demonstrated scalability across multiple domains <cit.>.
Through PiT, we show Transformers can indeed work directly with pixels, and given the rapid development of Self-Attention implementations to handle massive sequence length (up to a million) <cit.>, efficiency may not be a critical bottleneck even when all the pixels are counted. * Our work can also be viewed as exploring sequence length scaling to the extreme. It's to model individual pixels for images, and to model individual characters for text <cit.>. Longer sequence (or higher resolution) is generally beneficial, as evidenced in <cit.>. However, all of them stopped short of reaching the extreme case that completely gets rid of locality. § INDUCTIVE BIAS OF LOCALITY In this section, we provide in-depth discussions about the inductive bias of locality (or locality) in mainstream architectures. To be precise, locality is the inductive bias that neighboring pixels are more related than pixels farther apart. We cover ConvNets and ViTs. §.§ Locality in ConvNets In a ConvNet, the locality bias is reflected in the receptive fields of the features computed in each layer of the network. Intuitively, receptive fields cover the pixels involved in computing a specific feature, and for ConvNets, these fields are local. Specifically, ConvNets consist of several layers, each containing convolutional operations using kernels (, 7×7 or 3×3) or pooling operations – both of which are locally biased. For example, the receptive field in the first layer often corresponds to only a small local window. The field is progressively expanded as the network becomes deeper, but the window is still local and centered at the location of the pixel. §.§ Locality in Vision Transformers At the first glance, Transformers are locality-free. This is because the majority of Transformer operations are either global (, Self-Attention), or purely within each individual token (, MLP). However, a closer look will reveal two designs within ViT <cit.> that can still retain the locality inductive bias: patchification and position embedding. Locality in patchification. In ViT, the tokens fed into the Transformer blocks are patches, not pixels. Each patch consists of 16×16 pixels, and becomes the basic unit of operation after the first projection layer. This means the amount of computation imposed within the patch is drastically different from the amount across patches: the information outside the 16×16 neighborhood can only be propagated with Self-Attention, and the information among the 256 pixels are always processed jointly as token. While the receptive field becomes global after the first Self-Attention block, the bias toward local neighborhood is already inducted in the patchification step. Locality in position embedding. Position embeddings can be learned <cit.>, or manually designed and fixed during training. A natural choice for images is to use a 2D sin-cos embedding <cit.>, which extends from the original 1D one <cit.>. As sin-cos functions are smooth, they tend to introduce locality biases that nearby tokens are more similar in the embedding space.[While sin-cos functions are also cyclic, it's easy to verify that the majority of their periods are longer than the typical sequence lengths encountered by ViTs.] Other designed variants are also possible and have been explored <cit.>, but all of them can carry information about the 2D grid structure of images, unlike learned position embedding which does not make assumptions about the input. 
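To make this contrast concrete, here is a minimal sketch of the fixed 2D sin-cos embedding, written in the common form that splits the channel dimension between the two image axes (the function name and the exact channel split are our own illustration, not necessarily the construction used by any specific ViT implementation):

```python
import numpy as np

def sincos_2d_pos_embed(dim, grid_h, grid_w, temperature=10000.0):
    """Fixed 2D sin-cos position embedding of shape (grid_h*grid_w, dim).

    Half of the channels encode the row index, half the column index,
    each with interleaved sin/cos at geometrically spaced frequencies.
    """
    assert dim % 4 == 0, "dim must be divisible by 4"
    num_freqs = dim // 4
    omega = 1.0 / temperature ** (np.arange(num_freqs) / num_freqs)   # (dim/4,)

    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    ys, xs = ys.reshape(-1), xs.reshape(-1)                           # (L,), (L,)

    out_y = ys[:, None] * omega[None, :]                              # (L, dim/4)
    out_x = xs[:, None] * omega[None, :]
    # The smooth dependence on the (y, x) indices is what ties nearby positions together.
    return np.concatenate(
        [np.sin(out_y), np.cos(out_y), np.sin(out_x), np.cos(out_x)], axis=1
    )                                                                 # (L, dim)
```

Because sin and cos vary smoothly with the (y, x) indices, spatially neighboring positions receive nearly identical embeddings, which is precisely the locality prior; a table of embeddings learned from scratch carries no such structure at initialization.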
The locality bias has also been exploited when the position embeddings are interpolated <cit.>. Through bilinear or bicubic interpolation, spatially close embeddings are used to generate a new embedding of the current position, which also leverages locality as a prior. Compared to ConvNets, ViTs are designed with a much less pronounced bias toward locality. We push this further by completely removing this bias next.

§ TRANSFORMERS ON PIXELS We closely follow the standard Transformer encoder <cit.> which processes a sequence of tokens. In particular, we apply the architecture directly on an unordered set of pixels from the input image with learnable position embeddings. This removes the remaining inductive bias of locality in ViT <cit.>, and for reference, we name it Pixel Transformer (PiT, see <ref>). Conceptually, PiT can be viewed as a simplified version of ViT, with 1×1 patches instead of 16×16. Formally, we denote the input sequence as X = (x_1, ..., x_L)∈ℝ^L× d, where L is the sequence length and d is the hidden dimension. The Transformer maps the input sequence X to a sequence of representations Z = (z_1, ..., z_L)∈ℝ^L× d. The architecture is a stack of N layers, each of which contains two blocks: a multi-headed Self-Attention (MSA) block and an MLP block: Ẑ^k = MSA(norm(Z^k-1)) + Z^k-1, Z^k = MLP(norm(Ẑ^k)) + Ẑ^k, where Z^0 is the input sequence X, k∈{1, ..., N} indicates the k-th layer in the Transformer, and norm(·) is a normalization layer (typically LayerNorm <cit.>). Both blocks use residual connections <cit.>.

Pixels as tokens. The typical input to the network in computer vision is commonly an image of RGB values, I∈ℝ^H× W× 3, where (H, W) is the size of the original image. We follow a simple solution and treat I as an unordered set of pixels (p_l)_l=1^H· W, p_l∈ℝ^3. Thus, PiT simply projects each pixel into a d-dimensional vector via a linear projection layer, f: ℝ^3 →ℝ^d, resulting in the input set of tokens X = (f(p_1), ..., f(p_L)) with L = H · W. We append the sequence with a learnable [cls] token <cit.>. Additionally, we learn a content-agnostic position embedding for each position. The pixel tokens are then fed into the Transformer to produce the set of representations Z: X = [x_[cls], f(p_1), ..., f(p_L)] + PE, where PE ∈ℝ^L× d is the set of learnable position embeddings. PiT removes the locality inductive bias and is permutation equivariant at the pixel level. By treating individual pixels directly as tokens, we assume no spatial relationship in the architecture and let the model learn it from data. This is in contrast to the design of the convolution kernel in ConvNets or the patch-based tokenization in ViT <cit.>, which enforce an inductive bias based on the proximity of pixels. In this regard, PiT is more versatile – it can naturally model arbitrarily sized images (no need to be divisible by the stride or patch size), or even generalize to irregular regions <cit.>. Besides the removal of locality, using each pixel as a separate token has additional benefits. Similar to treating characters as tokens for language, we can greatly reduce the vocabulary size of input tokens to the Transformer. Specifically, given a pixel of three color channels in the range of [0, 255], the maximum size of the vocabulary is 255^3 (as pixels take discrete integer values); a patch token of size p×p in ViT, however, can lead to a vocabulary size of up to 255^3·p·p. If modeled in a non-parametric manner, this would heavily suffer from out-of-vocabulary issues.
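To illustrate the tokenization described above, the following is a minimal PyTorch sketch (module and variable names are ours, and whether the [cls] token also receives a position embedding is an implementation choice we make here for simplicity); the encoder that consumes these tokens is a standard Transformer following the equations above:

```python
import torch
import torch.nn as nn

class PixelTokenizer(nn.Module):
    """Treat every pixel as a token: linear-project RGB to d, prepend a [cls]
    token, and add position embeddings learned from scratch (no 2D prior)."""

    def __init__(self, height, width, dim):
        super().__init__()
        self.seq_len = height * width
        self.proj = nn.Linear(3, dim)                        # f: R^3 -> R^d
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, self.seq_len + 1, dim))
        nn.init.trunc_normal_(self.pos, std=0.02)

    def forward(self, images):                               # (B, 3, H, W)
        pixels = images.flatten(2).transpose(1, 2)           # (B, H*W, 3)
        tokens = self.proj(pixels)                           # (B, H*W, d)
        cls = self.cls.expand(tokens.shape[0], -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos    # (B, H*W+1, d)

# Example: a 32x32 CIFAR image yields 1024 pixel tokens (+1 for [cls]), which a
# standard Transformer encoder (e.g. nn.TransformerEncoder) can then consume.
tok = PixelTokenizer(32, 32, 192)
x = tok(torch.randn(2, 3, 32, 32))
print(x.shape)  # torch.Size([2, 1025, 192])
```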
Of course, PiT also has downsides, with the biggest one being computationally expensive (or even prohibitive) for modeling long sequences. However, given the rapid development of techniques that handle massive sequence lengths for large language models (up to a million) <cit.>, it is entirely possible that soon, we can train PiTs on all the pixels directly (, a standard 224×224 crop on ImageNet `only' contains 50,176 pixels). Therefore, the goal of our paper is to empirically verify the effectiveness and potential of at a smaller scale – which we do next, and leave the engineering effort of practical deployment for the future. § EXPERIMENTS FOR In this section, we verify the effectiveness of with three case studies: supervised learning, self-supervised learning with MAE <cit.>, image generation with DiT <cit.>. We use four variants of : Tiny (T), Small (S), Base (B) and Large (L) with the specifications shown in <ref>. Unless otherwise specified, we use the ViT <cit.> variants of the same configuration as our baselines. In sum, our experiments show that can indeed learn strong vision representations with no inductive bias on locality. §.§ Case Study #1: Supervised Learning In this study, we train and evaluate from scratch without any pre-training <cit.>. Our baselines are ViTs with patch sizes 2×2. Datasets. We use two datasets: CIFAR-100 <cit.> with 100 classes and 60K images combined and ImageNet <cit.> with 1K classes and 1.28M images for training, 50K for evaluation. While CIFAR-100 is suitable for exploring the effectiveness of due to its intrinsic image size of 32×32, ImageNet has many more images which helps us to further confirm our findings. Evaluation metrics. For both datasets, we train our models on the split and report the top-1 (Acc@1) and top-5 (Acc@5) accuracy on the split. Implementation details. For CIFAR-100, due to the lack of optimal settings even for ViT, we search for the recipe and report results using model sizes Tiny and Small. We use the augmentations from the released demo of <cit.> to train from scratch, as we found more advanced augmentations (, AutoAug <cit.>) not helpful in this case. All models are trained using AdamW <cit.> with β_1=0.9, β_2=0.95. We use a batch size of 1024, weight decay of 0.3, drop path <cit.> of 0.1, and initial learning rate of 0.004 which we found to be the best for all the models. We use a linear learning rate warm-up of 20 epochs and cosine learning rate decay to a minimum of 1e-6. Our training lasts for 2400 epochs, compensating for the small size of dataset. On ImageNet, we closely follow the from-scratch training recipe from <cit.> for ViT and report results using -S and -B. Due to the limit in computation, images are crop-and-resized to 28×28 as the low-resolution inputs by default. Globally average pooled outputs are used for classification. The training batch size is 4096, initial learning rate is 1.6×10^-3, weight decay is 0.3, drop path is 0.1, and training length is 300 epochs. MixUp <cit.> (0.8), CutMix <cit.> (1.0), RandAug <cit.> (9, 0.5), and exponential moving average (0.9999) are used. Main comparisons (<ref>). While our baselines for both ViT variants (ViT-T and ViT-S) are well-optimized on CIFAR-100 (, <cit.> reports 72.6% Acc@1 when training from scratch with ViT-B, whilst we achieve 80+% with smaller sized models), -T improves over ViT-T by 1.5% of Acc@1; and when moving to the bigger model (S), shows an improvement of 1.3% of Acc@1 over the small model (T) while ViT seems to be saturated. 
These results suggest compared to the patch-based ViT, is potentially learning new, data-driven patterns directly from pixels. Our observation also transfers to ImageNet – albeit with a significantly lower resolution our results are significantly lower than the state-of-the-art <cit.> (80+%), still outperforms ViT in both settings we have experimented. ViT: a tale of two trends. If position embeddings are learned, PiT is simply a version of ViT with 1×1 patches. Therefore, it is crucial to study the performance trend when varying the patch sizes in ViT. There are three variables in concern: sequence length (L), input size (H×W) and patch size (p). They have a deterministic relationship: L=H×W/(p^2). Thus we have two ways to study the Acc@1 trend patch size p: * Fixed sequence length. We show the trend on ImageNet with a fixed L in <ref>. The model size is ViT-B. In this plot, the input size varies (from 224×224 to 14×14) as we vary the patch size (from 16×16 to 1×1). The last data point is equivalent to -B. If we follow this trend, then is the worst. This means sequence length is not the only deciding factor for Acc@1 even for classification – a task where a single label is assigned to the entire image. Input size, or the amount of information fed into the model is arguably a more important factor, especially when the size is small. It's only when the input size is sufficiently large (, 112×112), the additional benefit of further enlarging the size starts to diminish. This also means when the amount of information from the input is not enough, – or any architecture that follows this design (, iGPT <cit.>) would not work well. * Fixed input size. Our finding resides in the other trend, when we fix the input size (therefore the amount of information), and vary the patch size on ImageNet in <ref>. The model size is ViT-S. Interestingly, we observe an opposite trend here: it is always helpful to decrease the patch size (or increase the sequence length), aligned with the prior studies that claim sequence length is highly important. Note that the trend holds even when it ultimately reaches – a model without any design for locality. So performs the best in accuracy compared to ViTs. With these two trend figures in <ref>, our study augments the observations made from previous studies, as they mainly focused on regimes where the input size is sufficiently large <cit.>, and presents a more complete picture. To see what has learned, we show visualizations of the attention maps, position embeddings from in <ref>. §.§ Case Study #2: Self-Supervised Learning In this subsection, we study with self-supervised pre-training and then fine-tuning for supervised classification. In particular, we choose MAE <cit.> due to its efficiency that only retains 25% of the sequence length for the encoder, and its effectiveness for fine-tuning based evaluation protocols. Datasets. We use CIFAR-100 <cit.> due to its inherent size of 32× 32 for images. This allows us to fully explore the use of pixels as tokens on the original resolution. Evaluation metrics. We first perform pre-training on the split. Then it serves as the initialization in the fine-tuning stage (also trained on ). Again, we use image classification on CIFAR-100 as the downstream task and report the top-1 (Acc@1) and top-5 (Acc@5) accuracy on the split. Implementation details. We follow standard MAE and use a mask ratio of 75% and select tokens randomly. 
Given the remaining 25% visible tokens, the model needs to reconstruct masked regions using pixel regression. Since there is no known default setting for MAE on CIFAR-100 (even for ViT), we search for recipes and report results using -T and -S. The same augmentations as in <cit.> are applied to the images during the pre-training for simplicity. All models are pre-trained using AdamW with β_1=0.9, β_2=0.95. We follow all of the hyper-parameters in <cit.> for the pre-training of 1600 epochs except for the initial learning rate of 0.004 and a learning rate decay of 0.85 <cit.>. Thanks to MAE pre-training, we can fine-tune our model with a higher learning rate of 0.024. We also set weight decay to 0.02, layer-wise rate decay to 0.65, and drop path to 0.3, β_2 to 0.999, and fine-tune for 800 epochs. Other hyper-parameters closely follow the scratch training recipe for supervised learning (see <ref>). The models were not prompted to generate using any of the people images/classes from ImageNet (scuba diver, baseball player, bridegroom). Results. As shown in <ref>, we find that for too, self-supervised pre-training with MAE improves accuracy compared to training from scratch. This is true for both -T and -S that we experimented with. Notably, the gap between ViT and , with pre-training, gets larger when we move from Tiny to Small models. This suggests can potentially scale better than ViT. §.§ Case Study #3: Image Generation We switch to image generation with Diffusion Transformer (DiT) <cit.> in this section, which has a modulation-based architecture different from vanilla ViT, and operates on the latent token space from VQGAN <cit.> that shrinks the input size by 8×. Dataset-wise, we use ImageNet for class-conditional generation, and each image is center-cropped to 256×256, resulting in an input feature map size of 32×32×4 (4 is channel dimension). -L is fed with this feature map, same as its baseline DiT-L/2 <cit.>. Evaluation metrics. The generation quality is measured by standard metrics: Fréchet Inception Distance (FID) <cit.> with 50K samples, sFID <cit.>, Inception Score (IS) <cit.>, and precision/recall <cit.>, using reference batches from the original TensorFlow evaluation suite of <cit.>. Implementation details. We followed the settings for DiT training, with a larger batch size (2048) to make the training faster (the original recipe uses a batch size of 256). To make the training stable, we perform linear learning rate warm up <cit.> for 100 epochs and then keep it constant for a total of 400 epochs. We use a maximum learning rate of 8e-4, with no weight decay applied. Qualitative results. Sampled generations from -L are shown in <ref>. The sampling takes 250 time steps, with the latent diffusion outputs mapped back to the pixel space using the VQGAN decoder. A classifier-free guidance <cit.> scale of 4.0 is used. All generations are detailed and reasonable compared to the DiT models with the locality inductive bias <cit.>. Quantitative comparisons. We summarize qualitative comparisons between DiT-L/2 and -L in <ref>. First, our baseline is strong despite the change of training recipe: compared to the reference 10.67 FID <cit.> with a larger model (DiT-XL/2) and longer training (∼470 epochs), our DiT-L/2 achieves 8.90 without classifier-free guidance. Our main comparison (first two rows) uses a classifier-free guidance of 1.5 with 250 sampling steps. 
With PiT operating on the latent `pixels', it outperforms the baseline on three metrics (FID, sFID and IS), and is on-par on precision/recall. With extended training, the gap is bigger (see <ref>). Our demonstration on the image generation task is an important extension of PiT. Compared to the case studies on discriminative benchmarks from <ref> and <ref>, the task has changed; the model architecture is changed from standard ViT to a conditioned one; the input space is also changed from raw pixels to latent encodings from the VQGAN tokenizer. The fact that PiT works out-of-the-box suggests our observation generalizes well, and a locality-free architecture can be used across different tasks, architectures, and operating representations.

§ LOCALITY DESIGNS IN VIT Finally, we complete the loop of our investigation by revisiting the ViT architecture, and examining the importance of its two locality-related designs: (i) position embedding and (ii) patchification.

Experimental setup. We use ViT-B for ImageNet supervised classification. We adopt the exact same hyper-parameters, augmentations, and other training details from the scratch training recipe of <cit.>. Notably, images are crop-and-resized to 224×224 and divided into 16×16 non-overlapping patches.

Position embedding. Similar to the investigation in <cit.>, we choose from three candidates: sin-cos <cit.>, learned, and none. The first option introduces locality into the model, while the other two do not. The results are summarized below:

           sin-cos   learned   none
  Acc@1     82.7      82.8     81.2

Our conclusion is similar to the one drawn by <cit.> for self-supervised representation evaluation: learnable position embeddings are on-par with fixed sin-cos ones. Interestingly, we observe only a minor drop in performance even if there is no position embedding at all – `none' is only worse by 1.5% compared to sin-cos. Note that without position embedding, the classification model is fully permutation invariant w.r.t. patches, though not pixels – we will show evidence of this next.

Patchification. Next, we use learnable position embeddings and study patchification. To systematically reduce locality from patchification, our key insight is that neighboring pixels should no longer be tied in the same patch. To this end, we perform a pixel-wise permutation before dividing the resulting sequence into separate tokens. Each token contains 256 pixels, the same in number as the pixels in a 16×16 patch. The permutation is shared, i.e., it stays the same for all the images – including the ones for testing. The permutation is performed in T steps; each step swaps a pixel pair within a distance threshold δ∈ [2, inf] (2 means within the 2×2 neighborhood, inf means any pixel pair can be swapped). We use Hamming distance on the 2D image grid. T and δ control how `corrupted' an image is – larger T or δ indicates more damage to the local neighborhood and thus more locality bias is taken away. <ref> illustrates four such permutations. <ref> illustrates the results we have obtained. In the table (left), we vary T with no distance constraint (i.e., δ=inf). As we increase the number of shuffled pixel pairs, the performance degenerates slowly in the beginning (up to 10K). Then it quickly deteriorates as we further increase T. At T=25K, Acc@1 drops to 57.2%, a 25.2% decrease from the intact image. Note that in total there are 224×224/2=25,088 pixel pairs, so T=25K means almost all the pixels have moved away from their original location. <ref> (right) shows the influence of δ given a fixed T (10K or 20K).
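For concreteness, a sketch of the shared pixel permutation described above; the neighborhood sampling below is our own simplification (a square window of half-width δ around the first pixel), so the exact statistics may differ slightly from the procedure used in the experiments:

```python
import numpy as np

def make_fixed_permutation(h, w, num_swaps, max_dist, seed=0):
    """Build one shared pixel permutation by repeatedly swapping a random pixel
    pair whose grid separation is at most `max_dist` (pass np.inf to allow any pair)."""
    rng = np.random.default_rng(seed)
    perm = np.arange(h * w)
    for _ in range(num_swaps):
        i = rng.integers(h * w)
        yi, xi = divmod(int(i), w)
        if np.isinf(max_dist):
            j = rng.integers(h * w)
        else:
            yj = int(np.clip(yi + rng.integers(-max_dist, max_dist + 1), 0, h - 1))
            xj = int(np.clip(xi + rng.integers(-max_dist, max_dist + 1), 0, w - 1))
            j = yj * w + xj
        perm[i], perm[j] = perm[j], perm[i]
    return perm  # apply as image.reshape(h*w, 3)[perm] before grouping into 256-pixel tokens
```

The same permutation is then applied to every training and test image before the sequence is re-grouped into 256-pixel tokens, so only the local structure is destroyed while the mapping itself stays deterministic.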
We can see when farther-away pixels are allowed for swapping (with greater δ), performance gets hurt more. The trend is more salient when more pixel pairs are swapped (T=20K). Overall, pixel permutation imposes a much more significant impact on Acc@1, compared to changing position embeddings, suggesting that patchification is much more crucial for the overall design of ViTs, and underscores the value of our work that removes the patchification altogether. Discussion. As another way to remove locality, pixel permutation is highly destructive. On the other hand, shows successful elimination of locality is possible by treating individual pixels as tokens. We hypothesize this is because permuting pixels not only damages the locality bias, but also hurts the other inductive bias – translation equivariance. In , although locality is removed altogether, the Transformer weights are still shared to preserve translation equivariance; but with shuffling, this inductive bias is also largely removed. The difference suggests that translation equivariance remains important and should not be disregarded, especially after locality is already compromised. § CONCLUSION AND LIMITATIONS Through our explorations, we have demonstrated that Transformers can directly work with individual pixels as tokens. This is surprising, as it allows for a clean, potentially scalable architecture without locality – an inductive bias that was presumably fundamental for vision models. Given the spirit of deep learning that aims to replace manually inducted priors with data-driven, learnable alternatives, we believe our finding is of great value to the community, especially when designing the next-generation of models for the domain of 2D images and beyond. However, the practicality and coverage of our current demonstrations remains limited. Given the quadratic computation complexity, is more of a method for investigation, and less for applications. And even with the additional tasks in <ref>, the study is still not comprehensive. Nonetheless, we believe this work has sent out a clear, unfiltered message that locality is not fundamental, and patchification is simply a useful heuristic that trades-off efficiency accuracy. splncs04 10 Ba2016 Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv:1607.06450 (2016) beyer2022better Beyer, L., Zhai, X., Kolesnikov, A.: Better plain vit baselines for imagenet-1k. arXiv preprint arXiv:2205.01580 (2022) boykov2001fast Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE TPAMI (2001) brendel2019approximating Brendel, W., Bethge, M.: Approximating cnns with bag-of-local-features models works surprisingly well on imagenet. arXiv preprint arXiv:1904.00760 (2019) Brown2020 Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., Amodei, D.: Language models are few-shot learners. In: NeurIPS (2020) chang2015shapenet Chang, A.X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., Savva, M., Song, S., Su, H., et al.: Shapenet: An information-rich 3d model repository. 
arXiv preprint arXiv:1512.03012 (2015) Chen2020c Chen, M., Radford, A., Child, R., Wu, J., Jun, H., Luan, D., Sutskever, I.: Generative pretraining from pixels. In: ICML (2020) chen2021evaluating Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H.P.d.O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al.: Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021) Chen2020b Chen, T., Kornblith, S., Swersky, K., Norouzi, M., Hinton, G.: Big self-supervised models are strong semi-supervised learners. In: NeurIPS (2020) Chen2021a Chen, X., Xie, S., He, K.: An empirical study of training self-supervised Vision Transformers. In: ICCV (2021) Clark2020 Clark, K., Luong, M.T., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. In: ICLR (2020) csurka2004visual Csurka, G., Dance, C., Fan, L., Willamowski, J., Bray, C.: Visual categorization with bags of keypoints. In: ECCVW (2004) cubuk2018autoaugment Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., Le, Q.V.: Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501 (2018) Cubuk2020 Cubuk, E.D., Zoph, B., Shlens, J., Le, Q.V.: Randaugment: Practical automated data augmentation with a reduced search space. In: CVPR Workshops (2020) dai2017scannet Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: Scannet: Richly-annotated 3d reconstructions of indoor scenes. In: CVPR (2017) Dalal2005 Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: CVPR (2005) dao2022flashattention Dao, T., Fu, D.Y., Ermon, S., Rudra, A., Ré, C.: FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In: NeurIPS (2022) dehghani2023scaling Dehghani, M., Djolonga, J., Mustafa, B., Padlewski, P., Heek, J., Gilmer, J., Steiner, A.P., Caron, M., Geirhos, R., Alabdulmohsin, I., et al.: Scaling vision transformers to 22 billion parameters. In: ICML (2023) Deng2009 Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR (2009) Devlin2019 Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: NAACL (2019) Dhariwal2021 Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. In: NeurIPS (2021) Dosovitskiy2021 Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021) Dosovitskiy2014 Dosovitskiy, A., Springenberg, J.T., Riedmiller, M., Brox, T.: Discriminative unsupervised feature learning with convolutional neural networks. In: NeurIPS (2014) Esser2021 Esser, P., Rombach, R., Ommer, B.: Taming transformers for high-resolution image synthesis. In: CVPR (2021) Goyal2017 Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., He, K.: Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv:1706.02677 (2017) He2022 He, K., Chen, X., Xie, S., Li, Y., Dollár, P., Girshick, R.: Masked autoencoders are scalable vision learners. In: CVPR (2022) He2020 He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.: Momentum contrast for unsupervised visual representation learning. In: CVPR (2020) He2016 He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: CVPR (2016) Heusel2017 Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In: NeurIPS (2017) ho2022classifier Ho, J., Salimans, T.: Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598 (2022) Hochreiter1997 Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural computation (1997) hu2022exploring Hu, R., Debnath, S., Xie, S., Chen, X.: Exploring long-sequence masked autoencoders. arXiv preprint arXiv:2210.07224 (2022) Huang2016 Huang, G., Sun, Y., Liu, Z., Sedra, D., Weinberger, K.Q.: Deep networks with stochastic depth. In: ECCV (2016) jaegle2021perceiverio Jaegle, A., Borgeaud, S., Alayrac, J.B., Doersch, C., Ionescu, C., Ding, D., Koppula, S., Zoran, D., Brock, A., Shelhamer, E., et al.: Perceiver io: A general architecture for structured inputs & outputs. arXiv preprint arXiv:2107.14795 (2021) jaegle2021perceiver Jaegle, A., Gimeno, F., Brock, A., Vinyals, O., Zisserman, A., Carreira, J.: Perceiver: General perception with iterative attention. In: ICML (2021) karpathy2015visualizing Karpathy, A., Johnson, J., Fei-Fei, L.: Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078 (2015) ke2022unsupervised Ke, T.W., Hwang, J.J., Guo, Y., Wang, X., Yu, S.X.: Unsupervised hierarchical semantic segmentation with multiview cosegmentation and clustering transformers. In: CVPR (2022) Krizhevsky2009 Krizhevsky, A.: Learning multiple layers of features from tiny images. Tech Report (2009) kudo2018sentencepiece Kudo, T., Richardson, J.: Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226 (2018) kynkaanniemi2019improved Kynkäänniemi, T., Karras, T., Laine, S., Lehtinen, J., Aila, T.: Improved precision and recall metric for assessing generative models. NeurIPS (2019) lazebnik2006beyond Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: CVPR (2006) LeCun1989 LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural computation (1989) Li2021 Li, Y., Xie, S., Chen, X., Dollár, P., He, K., Girshick, R.: Benchmarking detection transfer learning with vision transformers. In preparation (2021) liu2023ring Liu, H., Zaharia, M., Abbeel, P.: Ring attention with blockwise transformers for near-infinite context. arXiv preprint arXiv:2310.01889 (2023) Loshchilov2019 Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: ICLR (2019) Lowe2004 Lowe, D.G.: Distinctive image features from scale-invariant keypoints. IJCV (2004) mikolov2010recurrent Mikolov, T., Karafiát, M., Burget, L., Cernockỳ, J., Khudanpur, S.: Recurrent neural network based language model. In: Interspeech (2010) nash2021generating Nash, C., Menick, J., Dieleman, S., Battaglia, P.W.: Generating images with sparse representations. arXiv preprint arXiv:2103.03841 (2021) nilsback08flowers Nilsback, M.E., Zisserman, A.: Automated flower classification over a large number of classes. In: Indian Conference on Computer Vision, Graphics and Image Processing (2008) parmar2018image Parmar, N., Vaswani, A., Uszkoreit, J., Kaiser, L., Shazeer, N., Ku, A., Tran, D.: Image transformer. In: ICML (2018) Peebles2023 Peebles, W., Xie, S.: Scalable diffusion models with Transformers. 
In: ICCV (2023) qi2017pointnet++ Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space. NeurIPS (2017) Radford2018 Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training (2018) Rombach2022 Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022) salimans2016improved Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. NeurIPS (2016) Scarselli2009 Scarselli, F., Gori, M., Tsoi, A.C., Hagenbuchner, M., Monfardini, G.: The graph neural network model. IEEE Transactions on Neural Networks (2009) sennrich2015neural Sennrich, R., Haddow, B., Birch, A.: Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909 (2015) shen2023asymmetric Shen, C., Chen, J., Wang, S., Kuang, H., Liu, J., Wang, J.: Asymmetric patch sampling for contrastive learning. arXiv preprint arXiv:2306.02854 (2023) silberman2012nyuv2 Silberman, N., Kohli, P., Hoiem, D., Fergus, R.: Indoor segmentation and support inference from rgbd images. In: ECCV (2012) tolstikhin2021mlp Tolstikhin, I.O., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., Yung, J., Steiner, A., Keysers, D., Uszkoreit, J., et al.: Mlp-mixer: An all-mlp architecture for vision. NeurIPS (2021) Touvron2021 Touvron, H., Cord, M., Sablayrolles, A., Synnaeve, G., Jégou, H.: Going deeper with image transformers. In: ICCV (2021) Vaswani2017 Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) Walmer2023 Walmer, M., Suri, S., Gupta, K., Shrivastava, A.: Teaching matters: Investigating the role of supervision in vision transformers. In: CVPR (2023) Yun2019 Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: Cutmix: Regularization strategy to train strong classifiers with localizable features. In: ICCV (2019) Zhang2018a Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: Beyond empirical risk minimization. In: ICLR (2018) zhao2021point Zhao, H., Jiang, L., Jia, J., Torr, P.H., Koltun, V.: Point transformer. In: ICCV (2021) Acknowledgments. We thank Kaiming He, Mike Rabbat and Sho Yaida for helpful discussions. We thank Yann LeCun for feedback on positioning. § VISUALIZATIONS OF To check what has learned, we experimented different ways for visualizations. Unless otherwise specified, we use -B and ViT-B models trained with supervised learning on ImageNet classification, and compare them side by side. Mean attention distances. In <ref>, we present the mean attention distances for and ViT across three categories: late layers (last 4), middle layers (middle 4), and early layers (first 4). Following <cit.>, this metric is computed by aggregating the distances between a query token and all the key tokens in the image space, weighted by their corresponding attention weights. It can be interpreted as the size of the `receptive field' for Transformers. The distance is normalized by the image size, and sorted based on the distance value for different attention heads from left to right. As shown in <ref> and <ref>, both models exhibit similar patterns in the late layers, with the metric increasing from the 8th to the 11th layer. 
In the middle layers, while ViT displays a mixed trend among layers (see <ref>), clearly extract patterns from larger areas in the relatively later layers (see <ref>). Most notably, focuses more on local patterns by paying more attention to small groups of pixels in the early layers, as illustrated in <ref> and <ref>. Mean attention offsets. <ref> shows the mean attention offsets between and ViT as introduced in <cit.>. This metric is calculated by determining the center of the attention map generated by a query and measuring the spatial distance (or offset) from the query's location to this center. Thus, the attention offset refers to the degree of spatial deviation of the `receptive field' – the area of the input that the model focuses on – from the query's original position. Note that different from ConvNets, Self-Attention is a global operation, not a local operation that is always centered on the current pixel (offset always being zero). Interestingly, <ref> suggests that captures long-range relationships in the first layer. Specifically, the attention maps generated by focus on regions far away from the query token – although according to the previous metric (mean attention distance), the overall `size' of the attention can be small and focused in this layer. Figure-ground segmentation in early layers. In <ref>, we observe another interesting behavior of . Here, we use the central pixel in the image space as the query and visualize its attention maps in the early layers. We find that the attention maps in the early layers can already capture the foreground of objects. Figure-ground segmentation <cit.> can be effectively performed with low-level signals (, RGB values) and therefore approaches with a few layers. And this separation prepares the model to potentially capture higher-order relationships in later layers. § EXTENDED RESULTS ON IMAGE GENERATION In the main paper (<ref>), both image generation models, DiT-L/2 and -L, are trained for 400 epochs. To see the trend for longer training, we followed <cit.> and simply continued training them till 1400 epochs while keeping the learning rate constant. The results are summarized in <ref>. Interestingly, longer-training also benefits more than DiT. Note that FID shall be compared in a relative sense – a 0.2 gap around 2 is bigger than 0.2 around 4. § GENERALIZATION TO OTHER TASKS To further examine the generalization of our observation, we tried on two more tasks: (i) fine-grained classification on Oxford-102-Flower <cit.>, which requires nuanced understanding; and (ii) depth estimation on NYU-v2 <cit.>. Given the computation budget, we resize images to either 32×32 (former) or 48×64 (latter), and follow standard protocols to train and evaluate models. The results again shows holds more effectiveness over ViT in quality: 8pt1.1 2c|fine-grained classification depth estimation Acc@1 (↑) Acc@5 (↑) RMSE (↓) ViT-S/2 45.8 68.3 0.80 -S 46.3 68.9 0.72 § TEXTURE SHAPE BIAS ANALYSIS As a final interesting observation, we used an external benchmark[<https://github.com/rgeirhos/texture-vs-shape>] which checks if an ImageNet classifier's decision is based on texture or shape. ConvNets are heavily biased toward texture (∼20 in shape bias). Interestingly, we find relies more on shape than ViT (57.2 56.7), suggesting that even when images are broken down into sets of pixels, Transformers can still sift through potentially abundant texture patterns to identify and rely on the sparse shape signals for tasks like object recognition. 
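For reference, a minimal sketch of how the mean attention distance used above can be computed from a layer's attention weights; the tensor shape, the exclusion of the [cls] token, and the normalization by the grid diagonal are our own assumptions:

```python
import numpy as np

def mean_attention_distance(attn, h, w):
    """attn: (heads, L, L) attention weights over an h*w token grid (rows sum to 1).
    Returns, per head, the mean distance between each query and its attended keys,
    weighted by the attention and normalized by the grid diagonal."""
    ys, xs = np.unravel_index(np.arange(h * w), (h, w))
    coords = np.stack([ys, xs], axis=1).astype(float)                    # (L, 2)
    dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)   # (L, L)
    per_query = (attn * dists[None]).sum(-1)                             # (heads, L)
    return per_query.mean(-1) / np.hypot(h - 1, w - 1)                   # (heads,)
```

The mean attention offset is computed analogously, except that the attention-weighted centroid of the keys is compared against the query's own coordinates instead of averaging pairwise distances.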
§ ADDITIONAL NOTES FOR TRAINING While for some cases (, DiT <cit.>), the training recipe can be directly transferred to ; for some other cases, we do want to note more potential challenges during training. Below we want to especially highlight the effect of reduced learning rates when training a supervised ViT from scratch. We take CIFAR-100 as a representative example. As shown in <ref>, we find for , the training becomes unstable if we maintain the same learning rate from ViT. It is especially vulnerable toward the end of the schedule. When the initial learning rate is reduced from 2e^-3 to 1e^-3, the training is more stable and leads to better accuracy. Similar observations are also made on ImageNet.
http://arxiv.org/abs/2406.08351v1 (12 June 2024) [astro-ph.CO, gr-qc]
Enhancing Cosmological Model Selection with Interpretable Machine Learning
Indira Ocampo, George Alestas, Savvas Nesseris, Domenico Sapone
IFT-UAM/CSIC-24-86 indira.ocampo@csic.es g.alestas@csic.es savvas.nesseris@csic.es Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid, Cantoblanco, 28049 Madrid, Spain. domenico.sapone@uchile.cl Departamento de Física, FCFM, Universidad de Chile, Santiago, Chile. § ABSTRACT We propose a novel approach using neural networks (NNs) to differentiate between cosmological models, especially in the case where they are nested and the additional model parameters are close to zero, making it difficult to discriminate them with traditional approaches. Our method complements Bayesian analyses for cosmological model selection, which heavily depend on the chosen priors and average the unnormalized posterior over potentially large prior volumes. By analyzing simulated realistic data sets of the growth rate of the large scale structure (LSS) of the Universe, based on current galaxy-clustering survey specifications, for the cosmological constant and cold dark matter (ΛCDM) model and the Hu-Sawicki f(R) model, we demonstrate the potential of NNs to enhance the extraction of meaningful information from cosmological LSS data. We find that the NN can successfully distinguish between ΛCDM and the f(R) models, by predicting the correct model with approximately 97% overall accuracy, thus demonstrating that NNs can maximise the potential of current and next generation surveys to probe for deviations from general relativity. Enhancing Cosmological Model Selection with Interpretable Machine Learning Domenico Sapone June 17, 2024 =========================================================================== Introduction. The accelerated expansion of the Universe, which the cosmological constant Λ and Cold Dark Matter (ΛCDM) model successfully describes, remains a significant enigma in cosmology due to several tensions that have appeared recently between low redshift and high redshift probes, see for example <cit.> for a recent review. To account for this phase of accelerated expansion of the Universe, alternative theories of gravity have been proposed, such as various covariant modifications of the Einstein-Hilbert action, from which general relativity (GR) with a cosmological constant can be derived from. The simplest such example is promoting the Lagrangian R-2Λ to a more general function of the Ricci scalar of the form R + f(R) <cit.>. These alternative theories of gravity attempt to change the nature of the gravitational attraction, however they also face challenges in their theoretical justification, as higher-order covariant modifications of GR in general may exhibit ghost-like behavior or the Ostrogradski instability <cit.>. However, in the simple case of f(R) the latter issue is avoided. As a result of the intriguing possibility to probe for deviations from GR, over the past two decades a plethora of analyses has been performed, using data from both early Universe physics (e.g., cosmic microwave background (CMB) photons from the Planck <cit.>, ACT <cit.>, and South Pole Telescope <cit.> experiments) and the distribution of baryonic matter in later times (e.g., BOSS <cit.>, DES <cit.>, DESI <cit.>, Euclid <cit.>). All these efforts have dramatically decreased the uncertainty in the estimation of cosmological parameters; however, tensions still remain in the parameters even for the favored standard ΛCDM model <cit.>. Future surveys such as the Simons Observatory <cit.>, and the Vera C. 
Rubin Observatory's Legacy Survey of Space and Time (LSST) <cit.>, aim to further refine the measurements of these parameters. However, the enhanced precision of these experiments demands highly accurate theoretical modeling and extensive evaluations of the likelihood <cit.>. Thus, there is a pressing need to develop more intricate astrophysical and cosmological models, incorporating numerous nuisance parameters to properly capture astrophysical phenomena <cit.>. Furthermore, it is essential to test various cosmological models that predict the same or very similar expansion histories. Typically, such comparisons rely on a likelihood function and calculations of Bayes factors <cit.> to determine the preferred model. However, this approach has its limitations, mainly due to the very high computational cost, especially when dozens or more nuisance parameters are taken into account (as is the case of the Planck likelihood <cit.>), but also due to the dependency on ad-hoc priors and the averaging of the unnormalized posterior over prior volumes, which can affect each model differently. Moreover, the Bayesian comparison does not outright reject models, but rather evaluates which one is comparatively more supported by the data, based on the Jeffreys' scale <cit.>. To address the aforementioned issues, we turn to machine learning (ML), which has become a cornerstone in the landscape of artificial intelligence (AI) techniques. Its main advantage is in extracting patterns, insights, and knowledge from vast amounts of data without explicit instructions, and can adjust and improve autonomously. In astronomy, ML has recently seen a plethora of practical applications facilitating the automation of the identification of celestial objects and specific patterns of the sources <cit.>, given the volume of data from ongoing surveys, manual analysis has become challenging. Also, in high-energy experiments like the Large Hadron Collider, the volume of data produced is enormous. ML can facilitate the creation of efficient triggers by filtering out irrelevant data, pin-pointing significant events, and providing insights that manual analyses might miss <cit.>. Given the aforementioned challenges in standard analyses, but also the unique advantages of ML, in this work we propose a novel approach to enhance model selection for discriminating between different cosmological models. Specifically, we explore neural networks (NNs) as a tool that can complement traditional Bayesian analysis, offering a new perspective in extracting meaningful information, especially in cases where there are degeneracies in the parameter space, several nuisance parameters or ad-hoc chosen priors, all of which may affect traditional analyses. Furthermore, as ML techniques are gaining a lot of attention and being exploited in almost every field of study nowadays, there is a need to unveil its complexity and understand its decision making process. Thus, in the present work we also study the NN interpretability, so as to understand which are the relevant features that have a more significant impact in the classification. In particular, we perform this analysis using <cit.>. 
To demonstrate the advantage of our approach, we performed analyses using realistic, simulated data sets based on a Stage-IV LSS survey, for two distinct cosmological models: the ΛCDM model and a specific class of the Hu-Sawicki f(R) model <cit.>, where the latter replicates the ΛCDM background expansion precisely, while it differs in the growth of matter distribution, thus guaranteeing that the only information used comes from the growth of structures in our Universe. Our study specifically targets a DESI-like survey to assess these models, which is particularly timely as the recent DESI DR1 data release and the resulting cosmological constraints <cit.>, showed exciting hints for a possible time evolution of the dark energy equation of state parameter w(z), which in the ΛCDM takes the value w(z)=-1. This possible redshift evolution of w(z) seen by DESI, further motivates our work as it could hint to either the presence of a scalar field <cit.> or some deviation from GR <cit.>. However, special care should be taken when perturbations in the dark sector are considered <cit.>. Setting the stage. In order to keep the analysis simple, in this work we will consider two models: the ΛCDM model and the Hu-Sawicki f(R) model (HS), described by the action <cit.> S = ∫^4 x √(-g) {1/2κ^2[R+f(R)]+ℒ_m} , where ℒ_m is the matter Lagrangian, κ^2 = 8π G_ N/c^4, and the f(R) function for |f_R| ≪ 1 is approximately equal to f(R) = - 6 Ω_DE,0H_0^2/c^2 + |f_R0| R̅_0^2/R+… , where f_R0= df(R)/ d R|_z=0. For values of |f_R0|≪ 1, the background expansion history is well approximated by ΛCDM <cit.>, and here we use |f_R0|= 5× 10^-6, which is in agreement with observations and it also allows us to compare the results with Ref. <cit.>. Then, as the background expansion is similar to that of the ΛCDM model, we turn to the matter density perturbations of the large scale structure (LSS) of the Universe, and in particular the growth-rate of the matter density contrast δ_m≡δρ_m/ρ_m. The growth-rate is then defined as f≡lnΔ_m/ ln a, where a is the scale factor that describes the expansion of the Universe in an Friedmann–Lemaître–Robertson–Walker (FLRW) geometry and Δ_m(a)≡δ_m(a)/δ_m(a=1) is the normalized to today growth. However, what is actually measurable by LSS and galaxy-clustering spectroscopic surveys is instead the quantity fσ_8(a)≡ f(a) σ_8(a)=σ_8,0 a Δ_m'(a), where σ_8(a)=σ_8,0 Δ_m(a) is the redshift-dependent root mean square (RMS) fluctuations of the linear density field at R=8 h^-1 Mpc and σ_8,0 is its present value. Then, fσ_8(a) can be measured directly via the monopole to the quadrupole ratio of the redshift-space power spectrum, that depends on the parameter β=f/b, where b is the galaxy bias, thus making fσ_8(a) bias-free, as the bias cancels out from the expression. Finally, fσ_8 was shown in Ref. <cit.> to be an effective discriminator of dark energy (DE) and modified gravity (MG) models, making it ideal for our analysis. Finally, we also need to model the three-dimensional matter power spectrum, which is a measure of the variance of the density contrast, and is the Fourier transform of the 2-point correlation function <cit.>. 
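Before specifying the observed power spectrum, a minimal numerical sketch of how fσ_8(z) follows from the definitions above for a ΛCDM background, assuming the standard scale-independent linear growth equation and the fiducial values quoted later (Ω_m,0 = 0.32, σ_8,0 = 0.85); this is an illustration, not the pipeline used to generate the mock data:

```python
import numpy as np
from scipy.integrate import solve_ivp

Om0, s8_0 = 0.32, 0.85                     # fiducial values used in the text

def E(a):                                   # H(a)/H0 for flat LCDM
    return np.sqrt(Om0 * a**-3 + 1.0 - Om0)

def growth_rhs(a, y):                       # y = [delta, d(delta)/da]
    d, dp = y
    dlnE_da = (-1.5 * Om0 * a**-4) / E(a)**2
    return [dp, -(3.0 / a + dlnE_da) * dp + 1.5 * Om0 * d / (a**5 * E(a)**2)]

a_grid = np.linspace(1e-3, 1.0, 2000)
# Matter-era initial condition: delta ~ a, so delta(a_i) = a_i and delta'(a_i) = 1.
sol = solve_ivp(growth_rhs, (a_grid[0], 1.0), [a_grid[0], 1.0],
                t_eval=a_grid, rtol=1e-8)
delta, ddelta = sol.y
fs8 = s8_0 * a_grid * ddelta / delta[-1]    # f*sigma8(a) = sigma8,0 * a * Delta_m'(a)

z = 1.0 / a_grid - 1.0
for zi in (0.1, 0.5, 1.0, 1.5):
    print(f"z = {zi:.1f}  fsigma8 = {np.interp(zi, z[::-1], fs8[::-1]):.3f}")
```

In the f(R) case, the source term of this equation acquires an effective, scale-dependent gravitational coupling, which is what imprints the differences in fσ_8(z) that the network is trained to detect.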
Then, the observed galaxy power spectrum is modeled as in <cit.>: P_obs(k,μ;z) = H(z) D_A,r(z)/H_r(z) D_A(z) [b(z)σ_8(z)+f(z)σ_8(z)μ^2]^2 ×P_nl(k,μ;z)/σ^2_8(z) e^-k^2 μ^2σ^2_r + P_s(z) , where μ is the angle of the wave-vector k with respect to the line of sight, and the P_nl is the matter power spectrum with non-linear corrections damping the BAO signal, see <cit.>; the first term is the Alcock-Paczynski effect, which takes into account the change on volume via the Hubble parameter H(z) and the angular diameter distance D_A(z), while P_s(z) is the shot-noise term. The galaxy matter power spectrum is further modulated <cit.> by an error in the redshift measurement σ^2_r = c δ z/H(z) with δ z = 0.0005 (1 + z). The linear matter power spectrum for the models used in this work has been obtained using  <cit.>. In order to simulate the data, we will assume a DESI-like survey, covering 14 000 deg^2, whose targets are: Bright Galaxies (BGS) at low redshifts z<0.4, Luminous Red Galaxies (LRGs) at redshifts 0.4<z<1.1, Emission Line Galaxies (ELGs) at redshifts 0.6 < z < 1.6, and quasars at 0.9 < z < 2.1, <cit.>. In this work, we use only BGS to cover low redshifts and ELG to cover high redshifts having in total 16 redshift bins. To evaluate the covariance matrix we use the Fisher matrix approach, the cosmological parameters are the Hubble parameter H(z), D_ A(z), fσ_8(z), bσ_8(z), and P_s(z). While the shape parameters ω_m,0 = Ω_m,0 h^2, h, ω_b,0 = Ω_b,0 h^2, n_s are kept fixed. The latter implies setting priors from CMB experiments <cit.>. The specific Fisher matrix will have dimension 16 bins× 5 parameters per bin= 80 parameters. The bias factor and the shot-noise are considered nuisance parameters, to be marginalised over. Regarding the fσ_8(z), in any modified theory model or if dark energy perturbations are present, the growth rate depends on the scale k, see <cit.>. In this work, in order to constrain fσ_8(z), we relaxed this assumption and took the value of the growth rate at k=0.01 Mpc^-1, since the dependence across the entire k range is less than 0.1%. The final covariance matrix for DESI-like specifications is obtaining from the marginalization of the Fisher matrix. As the dependence of the covariance matrix on the cosmology is weak, especially as the f(R) model is close to ΛCDM, we use the same covariance for both models, based on a fiducial cosmology of Ω_m,0 = 0.32, Ω_b,0 = 0.05, h = 0.67, n_s = 0.96, and σ_8,0 = 0.85. Simulated data. In order to test deviations from ΛCDM with our methodology, we simulated DESI-like datasets for fσ_8 measurements reflecting both cosmological models: ΛCDM and HS. Using the specifications, as discussed earlier, we generate mock datasets using grids within the following intervals: Ω_m,0∈[0.2, 0.4] and σ_8,0∈ [0.7, 0.9] for ΛCDM, while for the HS model, we vary the parameters in the range Ω_m,0∈[0.2, 0.4], σ_8,0∈[0.7, 0.9] and f_R0∈[10^-6, 5× 10^-6], where these values were chosen so that they are in agreement with current observational constraints for the two models. NN architecture. In summary, the architecture implemented, is the one shown in Fig. <ref> created with .[<https://github.com/martinjm97/ENNUI.git>] The input data are fσ_8 values for a DESI-like survey with 16 z-bins, where we took into account the uncertainties from the Fisher matrix approach. We first implemented a feature normalization layer, with a activation function <cit.>, where we set the to 32 (for each one of the 16 features). 
After normalization, we concatenated the features and passed the full data set through a fully connected layer, also with a activation function. Then, we applied a dropout layer, which is a regularization technique for preventing overfitting <cit.>, with a dropout rate of 0.2, and finally the last fully connected layer with a sigmoid activation for the classification task: HS (class 1) or ΛCDM (class 0). We also applied an early stopping callback <cit.> to prevent overfitting, for this we set the “patience" (the number of epochs in which the accuracy and loss do not change significantly) to 50 training epochs, and found that our model reached a high accuracy and low loss in 1400 epochs, see Fig. <ref>. Our model was compiled with a nadam optimizer <cit.>, a binary cross entropy loss function and a learning rate of 0.001. Results. The full dataset has 5000 fσ_8 samples (50% HS and 50% ΛCDM), that we split as 70% for training + validation and 30% for testing. In Fig. <ref> we show the accuracy and loss with respect to the number of epochs optimized by the early stopping callback. We observe that in both cases the NN has demonstrated well-converged loss and accuracy metrics. Specifically, the training loss decreased smoothly from 0.7 to about 0.1, while the validation loss similarly decreased from 0.68 to 0.04, showing minimal fluctuations. The training accuracy improved from 52% to 97%, with the validation accuracy closely following, improving from 58% to 99%. The alignment between training and validation metrics indicates minimal overfitting, and the final plateau in both loss and accuracy suggests that the model has converged to an optimal state. Finally, in Fig. <ref> we show the confusion matrix, where we notice that the NN performs very well, as it can correctly identify the ΛCDM model 100% of the time and the HS model ∼ 95% of the time, while offering false positives only in the rest ∼ 5%, these results in an overall accuracy rate of 97.5% for a correct prediction. Robustness of the NNs. We have performed several tests in order to test the robustness of our pipeline. First, we examined the effect of dataset size on the performance of the NN, as shown in Fig. <ref>. The results indicate that the correct predictions saturates after a few thousand realizations. Considering that the running time of the NN scales approximately linearly with the number of mock datasets, we chose a dataset size of 5000 realizations. This choice represents a compromise between running time and the accuracy of the NN. We also investigated the NN's performance using simulated data sets based on different covariance matrices. One matrix encapsulated the inherent variations within the noise profile of the ΛCDM model, while the other represented the noise profile associated with the f(R) model. We found that the NN can also accurately discriminate 100% both models when utilizing different covariance matrices, albeit at the cost of adding some biases in the results. The reason for this is that the NN in this case also learns to discriminate the covariance matrix, i.e. if it belongs to ΛCDM or f(R), via the noise distribution of the fσ_8 data. NN interpretability. In order to identify which features of the data help the NN discriminate so well between the two models, we next gray investigate the NN interpretability <cit.>. The first test to identify the most relevant features in the NN's decision-making process was carried out by training and testing using only the first eight and last eight fσ_8 values. 
Results. The full dataset has 5000 fσ_8 samples (50% HS and 50% ΛCDM), which we split as 70% for training + validation and 30% for testing. In Fig. <ref> we show the accuracy and loss with respect to the number of epochs, optimized by the early stopping callback. We observe that in both cases the NN has demonstrated well-converged loss and accuracy metrics. Specifically, the training loss decreased smoothly from 0.7 to about 0.1, while the validation loss similarly decreased from 0.68 to 0.04, showing minimal fluctuations. The training accuracy improved from 52% to 97%, with the validation accuracy closely following, improving from 58% to 99%. The alignment between training and validation metrics indicates minimal overfitting, and the final plateau in both loss and accuracy suggests that the model has converged to an optimal state. Finally, in Fig. <ref> we show the confusion matrix, where we notice that the NN performs very well, as it correctly identifies the ΛCDM model 100% of the time and the HS model ∼ 95% of the time, misclassifying only the remaining ∼ 5%; this results in an overall accuracy of 97.5%. Robustness of the NNs. We have performed several tests in order to assess the robustness of our pipeline. First, we examined the effect of dataset size on the performance of the NN, as shown in Fig. <ref>. The results indicate that the fraction of correct predictions saturates after a few thousand realizations. Considering that the running time of the NN scales approximately linearly with the number of mock datasets, we chose a dataset size of 5000 realizations. This choice represents a compromise between running time and the accuracy of the NN. We also investigated the NN's performance using simulated data sets based on different covariance matrices. One matrix encapsulated the inherent variations within the noise profile of the ΛCDM model, while the other represented the noise profile associated with the f(R) model. We found that the NN can also discriminate between the two models with 100% accuracy when utilizing different covariance matrices, albeit at the cost of introducing some biases in the results. The reason for this is that the NN in this case also learns to discriminate the covariance matrix, i.e. whether it belongs to ΛCDM or f(R), via the noise distribution of the fσ_8 data. NN interpretability. In order to identify which features of the data help the NN discriminate so well between the two models, we next investigate the NN interpretability <cit.>. The first test to identify the most relevant features in the NN's decision-making process was carried out by training and testing using only the first eight and, separately, only the last eight fσ_8 values. We found that the performance is predominantly influenced by the first eight features. An interesting approach for this interpretability task is LIME, which stands for Local Interpretable Model-agnostic Explanations. Local interpretability is a less complicated task to tackle than global interpretability; this approach therefore aims to understand the model's decision-making process by generating nearby data points through random perturbations of the features of a given data point and then analyzing how these changes affect the model's predictions. It then learns locally weighted linear models on this neighborhood data to explain each of the classes in an interpretable way <cit.>.
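A minimal sketch of how such explanations can be produced with the LIME library is given below; it assumes the trained classifier from the sketch above and mock matrices x_train, x_test of shape (N, 16), which are illustrative names rather than objects defined in the text.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

feature_names = [f"fs8(z_{i + 1})" for i in range(16)]

# x_train, x_test: (N, 16) arrays of mock fsigma8 vectors; model: trained classifier above.
explainer = LimeTabularExplainer(
    training_data=x_train,
    feature_names=feature_names,
    class_names=["LCDM", "HS"],
    mode="classification",
)

def predict_proba(x):
    """LIME expects per-class probabilities; the sigmoid output gives P(HS)."""
    p_hs = model.predict([x[:, i:i + 1] for i in range(16)]).reshape(-1, 1)
    return np.hstack([1.0 - p_hs, p_hs])

# Explain one correctly classified test sample in terms of its 16 features.
exp = explainer.explain_instance(x_test[0], predict_proba, num_features=16)
print(exp.as_list())      # (feature condition, local weight) pairs
```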
In Figs. <ref> and <ref> we show the feature impact on the model output for our NN. First we picked a correctly classified data sample from the test set belonging to the ΛCDM class, and then one belonging to the HS class. LIME calculates the individual probability of each feature belonging to a given class, given its particular value, and then obtains an overall probability of belonging to HS or ΛCDM. In Fig. <ref>, the overall probability of belonging to class ΛCDM is around 0.99, and the most relevant features are found to be fσ_8(z_1), fσ_8(z_6) and fσ_8(z_7), whereas in Fig. <ref>, the overall probability of belonging to class HS is around 0.9, and the most relevant features are found to be fσ_8(z_6), fσ_8(z_7) and fσ_8(z_8). The positive x-axis in both figures illustrates the local probability of belonging to class HS, and the negative to class ΛCDM. Also, in both cases, on the left panel we show the individual impact of each random perturbation of each feature on whether it is classified locally as ΛCDM (negative values on the x-axis) or HS (positive values on the x-axis). The samples in Fig. <ref> were classified overall as ΛCDM and the ones in Fig. <ref> as HS. The fσ_8 values are also displayed as a heat map, bearing in mind that this analysis was performed for 50 samples of the test set. We can see that in both cases the overall most important features seem to be the first one, fσ_8(z_1), then four in the middle, fσ_8(z_5), fσ_8(z_6), fσ_8(z_7), fσ_8(z_8), and two at the end, fσ_8(z_14), fσ_8(z_15). Finally, in Fig. <ref> we show the values of one realization of the fσ_8 data with respect to the redshift z. The color shading corresponds to the feature importance of each redshift bin (light to dark green implying low to high importance), according to the LIME tests discussed earlier. As can be seen, the most important redshift bins are at low redshift (z<0.2), albeit with mid (0.5<z<0.8) and high (z>1.4) redshifts also carrying strong weight (importance ∼0.5). On the other hand, the intermediate redshifts (0.2<z<0.5 and 0.8<z<1.4) have particularly low feature importance (below 0.3). This behavior is as expected, since the largest differences in the growth rate between the ΛCDM and MG models occur at low redshift. The impact at mid redshifts (z∼0.7) is also clear, because this is when the effective DE begins to dominate, and it is where the largest variation, hence the largest derivative, of the growth factor is observed. Conclusions. In this work we proposed a NN pipeline that can successfully discriminate between the standard cosmological constant ΛCDM model, which is based on GR, and the most commonly used extension of GR based on the popular HS f(R) model, using LSS growth rate data. While the latter model is still viable, i.e. it has not been ruled out yet, its best-fit is so close to ΛCDM that traditional analyses using galaxy-clustering data cannot tell the two models apart. Creating an optimized pipeline to do this is a major goal for current surveys, especially since traditional model-selection methods, such as Bayesian analyses via the evidence, naturally penalize models with more parameters. In this sense, our analysis and pipeline are complementary to the traditional approaches, especially in the case considered in this work, where, due to degeneracies in the data and the fact that the extra parameter of the model is close to zero, it is difficult for the traditional analyses to discriminate the two models. Our pipeline performs well in discriminating the two models, reaching approximately 97% accuracy, because the NN has access to more information during training than the standard analyses, which only see one realization at a time. This can be seen particularly clearly in Fig. <ref>, where we vary the number of mocks used by the NN. When the number of mocks is low, e.g. ∼ 50, the accuracy is as low as 50%, i.e. random chance, and the NN cannot discriminate the models, but the accuracy then saturates above 90% for a few thousand mocks. Our work focused exclusively on the galaxy-clustering fσ_8 data as a proof of concept, so as to establish the method and demonstrate its strengths. However, our pipeline can easily be extended to use directly the multipoles of the redshift-space power spectrum and other related observables. Furthermore, we demonstrated the robustness of our approach by studying several aspects, such as the number of training samples and the use of different covariance matrices for the creation of the mock data; more importantly, we also focused on the interpretability of the results, where we identified the aspects of the data that particularly help the NN discriminate the two models. To our knowledge, such a combination of observables used to directly test GR, together with a dedicated NN pipeline, has not been considered before in the literature, while the much simpler task of using NNs to speed up numerical calculations of the evidence is now more common, see for example <cit.>. Also, the potential to extend our pipeline is significant, especially when used with current LSS surveys and their observables, as it can help discriminate models which are otherwise difficult to tell apart using traditional methods, thus opening a new avenue to probe for deviations from GR with current and next generation surveys.   Code availability. The numerical codes will be made publicly available on GitHub[<https://github.com/IndiraOcampo/NN-HS_vs_LCDM>] upon publication of the paper.   Acknowledgements. We would like to thank G. Cañas, S. Casas, and V. Pettorino for useful discussions. IO thanks ESTEC/ESA for the warm hospitality during the execution of this project, and for support from the ESA Archival Research Visitor Programme. IO, GA and SN acknowledge support from the research project PID2021-123012NB-C43 and the Spanish Research Agency (Agencia Estatal de Investigación) through the Grant IFT Centro de Excelencia Severo Ochoa No CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033. GA's research is supported by the Spanish Attraccion de Talento contract no. 2019-T1/TIC-13177 granted by the Comunidad de Madrid. IO is also supported by the fellowship LCF/BQ/DI22/11940033 from “la Caixa” Foundation (ID 100010434).
DS acknowledges financial support from the Fondecyt Regular project number 1200171.
http://arxiv.org/abs/2406.08294v1
20240612145737
Vessel Re-identification and Activity Detection in Thermal Domain for Maritime Surveillance
[ "Yasod Ginige", "Ransika Gunasekara", "Darsha Hewavitharana", "Manjula Ariyarathne", "Ranga Rodrigo", "Peshala Jayasekara" ]
cs.CV
[ "cs.CV", "cs.LG" ]
empty 1]Yasod Ginige 1]Ransika Gunasekara 1]Darsha Hewavitharana 1]Manjula Ariyarathne 1]Ranga Rodrigo 1]Peshala Jayasekaracor1 [cor1]Corresponding author: Tel.: +94-711273844; peshala@uom.lk [1]University of Moratuwa, Katubedda, Colombo (10400), Sri Lanka 1 May 2013 10 May 2013 13 May 2013 15 May 2013 S. Sarkar § ABSTRACT Maritime surveillance is vital to mitigate illegal activities such as drug smuggling, illegal fishing, and human trafficking. Vision-based maritime surveillance is challenging mainly due to visibility issues at night which results in failures in re-identifying vessels and detecting suspicious activities. In this paper, we introduce a thermal, vision-based approach for maritime surveillance with object tracking, vessel re-identification, and suspicious activity detection capabilities. For vessel re-identification, we propose a novel viewpoint-independent algorithm which compares features of the sides of the vessel separately (separate side-spaces) leveraging shape information in the absence of color features. We propose techniques to adapt tracking and activity detection algorithms for the thermal domain and train them using a thermal dataset we created. This dataset will be the first publicly available benchmark dataset for thermal maritime surveillance. Our system is capable of re-identifying vessels with an 81.8% Top1 score and identifying suspicious activities with a 72.4% frame mAP score; a new benchmark for each task in the thermal domain. 41A0541A1065D0565D17 Maritime surveillanceThermal visionRe-identificationActivity detection § INTRODUCTION Illicit maritime activities such as smuggling, illegal fishing, and human trafficking are major threats, especially during the night. Maritime surveillance, a preventive measure and a deterrent in the face of this threat, involves detection, tracking, re-identification, and blacklisting vessels. Vision based maritime surveillance is challenging due to visibility issues at night, water body reflections, and extreme weather conditions. In non-maritime RGB domains, however, video-based surveillance is well established, especially in building and road traffic surveillance systems. Human tracking, counting, identification, authorization, and hazard detection are common use cases of surveillance systems [<cit.>]. In road traffic surveillance, cameras detect speeding and other rule violations, monitor traffic conditions, and provide parking assistance [<cit.>]. In face recognition and person re-identification, existing methods lock on to facial landmarks, body structure, and clothing [<cit.>]. Many methods in face recognition, person re-identification, and traffic detection domains use fine-grained RGB structures such as specific features of the human face, patterns and color in clothing, and number plates of vehicles. These methods that work in the RGB domain fail in gray-scale maritime thermal videos, particularly, as thermal images capture a different set of features and due to the absence of distinctive features as mentioned above. In maritime environments, vessels can appear in entirely different views, unlike in face re-identification. The features of each side of these vessels are vastly different from the other sides. Therefore, the algorithms should be able to build a feature vector paying attention to vessel orientation and comparing related features with other vessels. 
Furthermore, checking which sides are visible in a vessel image is itself a challenging problem that should be solved prior to the orientation-wise feature extraction. In this paper, we aim at creating a maritime surveillance system mainly focusing on robust vessel re-identification that benefits from the nature of features visible in maritime thermal images. Our system (Fig. <ref>) comprises three main subsystems, namely, maritime vessel tracking, vessel re-identification, and maritime activity detection. In the tracking subsystem, we train an algorithm to track maritime objects in the thermal domain to withstand night and extreme weather conditions. The re-identification subsystem builds a view-weighted feature vector that captures all visible sides of the vessel using an encoder-decoder foreground segmenter and a ViT-based part (view) attention network similar to SPAN [<cit.>]. This feature vector, using an ArcFace loss [<cit.>], identifies vessels using a dynamic database, irrespective of the orientation (achieving viewpoint independence) of the query image by focusing on distinct shapes on each visual side of the vessel. This compensates for the absence of color and fine features in the thermal domain. The maritime activity detection subsystem incorporates both spatial and temporal action localization trained on maritime thermal images. We adopt the YOWO [<cit.>] algorithm in this work, which has been exclusively used in RGB domain, to detect two representative maritime activities. Our main contributions are as follows: * Viewpoint-independent novel re-identification algorithm: The algorithm focuses on the shape of vessels in the absence of color and intricate features in the thermal domain and compares feature vectors in each side-space, separately. The algorithm outperforms the mAP score of the SPAN model [<cit.>] by 32% in the thermal domain. * Creation of a thermal maritime dataset: The annotated dataset contains video footage of maritime vessels, jet-skies, and human activities with COCO annotations [<cit.>]. It also contains images of 40 small vessels and 32 large vessels from different viewpoints. The dataset can be used for detection and tracking, re-identification, and activity detection tasks. To the best of our knowledge, this is the first public dataset created for maritime surveillance in the thermal domain. Link: https://hevidra.github.io/https://hevidra.github.io/ * Detection and tracking in the thermal domain: We adapted the TraDes [<cit.>] algorithm, which is originally trained on RGB data, to track maritime objects such as vessels, ships, jet-skies, and humans in the thermal domain. The algorithm was fine-tuned using the COCO and the Singapore Maritime Dataset (SMD) [<cit.>]. We tuned detection and tracking thresholds and achieved a 61.2% MOTA score. * Activity detection in the thermal domain: We adapted the YOWO [<cit.>] algorithm, which was originally trained in the RGB domain, to detect activities such as possible human trafficking and swimming in the thermal domain. We re-trained the algorithm on our dataset and tuned hyper-parameters to obtain a frame mAP score of 62.45%, demonstrating promising detection of target activities. § RELATED WORK In this section, we explore literature under three main areas: vessel re-identification, object tracking and activity detection. 
§.§ Re-identification Re-identification is the process of identifying the same object or individual across different scenes, which typically involves matching features across images or video frames captured at different times and locations. It extends into different subdomains including face, human, objects, and vehicle re-identification. Face re-identification methods pay attention to specific features such as the iris, dimension of the face, nose, lips, and the color of the eyeball [<cit.>]. Due to genetic organizations, each human has a unique combination of these features which makes the face re-identification possible. However, it doesn't facilitate re-identification from different angles of the face as the algorithm expects the full frontal view of the face. In human re-identification tasks, algorithms pay attention to the whole body in addition to the face. Thus, there are other features such as height, shape, body language, and colors of the clothes taken into consideration [<cit.>]. More recent algorithms are capable of re-identifying despite different orientations [<cit.>]. However, in the thermal domain, human re-identification has been a challenging task; some approaches get the guidance from a visible model to train the thermal model [<cit.>], while fully thermal approaches suffer from low accuracy [<cit.>]. In vehicle re-identification, the main challenge is the variability in viewpoint and the high similarity among vehicles of the same category. Recent work has introduced several techniques that address the viewpoint variation by considering the camera's perspective [<cit.>]. These methods aim to learn the similarities and differences between images captured from different viewpoints by using triplet loss across extracted feature mappings. It enables accurate re-identification across various camera angles. However, these methods have the advantage of color features markedly absent in thermal domain. Furthermore, they have not been tested under poor visibility conditions, such as night-time and bad weather. Thermal domain vehicle re-identification is not well explored. Eleni et al. [<cit.>] have proposed a cross-domain model and tried to learn sharable features in both visible and IR domains. The model contains a shareable network followed by two separate streams for two domains increasing the computational complexity. The same authors propose a domain generalization approach for multi-modal vehicle re-identification based on meta-learning [<cit.>] using RGB, near-IR, and IR domains. However, both methods share visible domain features when training, possibly paying less attention to shapes than color features. Furthermore, they expect the model to see images of query vehicles from a similar orientation in the inference, i.e., the method does not force the model to learn orientation based feature extraction and identity classification. Nevertheless, for a maritime surveillance system, we cannot guarantee that the gallery contains images of a vessel from all the orientations (front, side, rear, front-and-side,...). Hence, the re-identification algorithm should be robust for appearances from different orientations. Chen et al. have proposed a viewpoint aware re-identification algorithm, SPAN [<cit.>] for RGB domain, which has the additional advantage of color features compared to the thermal domain. It pays less attention to the unique shapes of the vehicle (due to minor modifications) when comparing two vehicles of the same model. 
In our case, the domain differs from RGB to thermal, and minor changes in the shape of vessels are significant when identifying vessels. To the best of our knowledge, there is no work done in the thermal domain for maritime vessel re-identification. In this paper, we combine thermal images with orientation based feature extraction and identity classification, solving both visibility issues and the orientation issues. §.§ Object Tracking Object tracking is the automated process of locating and following objects of interest in images or videos. Conventional approaches such as [<cit.>] use two stages for detection and tracking, consuming more computational power and time. In these algorithms, a backbone model is used to detect objects, and then, a separate association algorithm builds tracklets between adjacent frames using those detections. Therefore, they cannot usually be used for real time object tracking due to heavy processing. To overcome these challenges, recent work has moved towards joint detection and tracking approaches, where we detect and track using a single backbone model. Other than real time processing, we need to keep a clean tracklet for each detected object throughout the frames as we feed these objects to the re-identification algorithm at predefined time steps. If the tacking algorithm cannot maintain a consistent identification (ID) for an object, the re-identification algorithm will be triggered upon every new object ID, causing redundant computations. Simple Online Realtime Tracker (SORT) [<cit.>] uses a Kalman filter to estimate the object's location from the previous frame and leverages measurements with uncertainty to estimate the current states. Deep SORT [<cit.>], an extension of SORT, compares the appearance of new detections with previously tracked objects within each track to assist data association using a re-identification based approach. However, in these methods, detection is independently predicted without tracking assistance that prevents a possible accuracy increment. This leads to frequent possible ID updates of detected objects in occluded or unclear scenarios. TraDeS, introduced by Jialian Wu et al. [<cit.>], presents an online multi-object tracking algorithm that integrates object detection and tracking to achieve robust and accurate tracking performance using CenterNet [<cit.>] as the backbone. It uses a peer supporting technique where features extracted from detection helps the tracking objective, while tracking offsets predicted in the detection stage and feature of previous frames enhance features of the current frame to help the detection objective. Although it can be used for real-time, accurate tracking tasks, it has not been tested in the thermal domain. In this paper, we train and evaluate the TraDeS algorithm on thermal data and provide a comprehensive results analysis to prove the validity of the algorithm in thermal domain object tracking. §.§ Activity Detection and Localization Vision based activity detection uses cameras to capture a video feed and processes it sequentially to identify activities  [<cit.>]. Recent work has paid attention towards two stream localization which combines both spatial and temporal streams, thereby improving the detection and classification of actions within a video [<cit.>]. However, only a few methods provide both online and realtime activity detection while maintaining a higher accuracy. Singh et al. [<cit.>] focus on online real-time action localization and prediction in real-time. 
This method has the capability to localize actions and predict upcoming actions, demonstrating the potential of predictive modeling. However, YOWO [<cit.>], proposed by Okan et al., is a comparatively low weight method with both online and real time processing capabilities and higher accuracy. It uses a unified CNN architecture for real-time spatiotemporal action localization using only a single pass through the network. This allows to process the video with a higher frames-per-second (fps) which is a considerable improvement over previous methods that require multiple iterations or separate processes for different tasks. Nevertheless, YOWO is utilized only in the RGB domain and is not tested for detecting suspicious activities such as possible human trafficking. In our work, we show that YOWO can be adapted for the thermal domain by retraining and adjusting hyper-parameters and sets a new benchmark for thermal activity detection. § METHODOLOGY The framework proposed in this study comprises three primary subsystems: object tracking, vessel re-identification, and activity detection, as depicted in Fig. <ref>. The thermal video feed captured by the camera is directed towards the object tracking and activity detection subsystems. Subsequently, the tracking subsystem outputs identified objects, which are then forwarded to the re-identification subsystem. The outputs generated by all three subsystems are integrated into a user interface, facilitating the visualization of detected marine vessels, associated activities, and the corresponding re-identification results. The following sections explain each subsystem in detail. §.§ Object Tracking for Bounding Box Extraction For object tracking, we adapted TraDeS [<cit.>] algorithm for the thermal domain. While many existing detection and tracking approaches conduct independent detection without incorporating tracking input, this method integrates tracking cues into the detection process to enhance performance in challenging scenarios, and thereby improving tracking outcomes. First, we use the DLA-34 model [<cit.>] as the backbone for the feature extraction of input frames. Next, we use two modules to optimize object detection and tracking using the outcomes of each other (Fig. <ref>). The Cost Volume based Association (CVA) module is used to generate embeddings and derive object motions to improve object tracking accuracy. Then, we use the Motion-guided Feature Warper (MFW) module to enhance the object features in the next frame based on the CVA outcomes. More specifically, MFW enhances the feature vector of the current frame based on the tracking history of past frames. This improves the performance of the algorithms, especially when the current frame is occluded. It utilizes tracking cues obtained from the CVA, and propagates them to enhance object features to improve the detection accuracy. Next, object detection is done using CenterNet [<cit.>] in the current frame and is associated using a two-round data association technique. In the first round, objects are mapped to the closest tracklet. If it fails, cosine similarity between unmatched tracklet embeddings and the object feature embedding is considered. This method was chosen because of the model’s ability to integrate detecting, segmenting, and tracking in a single network, which reduces the processing time and improves overall accuracy and efficiency. 
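A simplified sketch of the two-round data association just described is given below; the distance and similarity thresholds, the dictionary-based bookkeeping and the greedy matching order are illustrative assumptions rather than the actual TraDeS implementation.

```python
import numpy as np

def associate(detections, tracklets, dist_thresh=50.0, sim_thresh=0.5):
    """Two-round data association sketch.

    detections: list of dicts with 'center' (x, y) and 'embedding' (1D array)
    tracklets : list of dicts with 'center', 'embedding' and 'id'
    Returns a list of (detection index, tracklet id or None) pairs.
    """
    assignments, unmatched, used = [], [], set()
    # Round 1: match each detection to the spatially closest free tracklet.
    for i, det in enumerate(detections):
        dists = [np.linalg.norm(np.subtract(det["center"], t["center"]))
                 if j not in used else np.inf for j, t in enumerate(tracklets)]
        j = int(np.argmin(dists)) if dists else -1
        if j >= 0 and dists[j] < dist_thresh:
            assignments.append((i, tracklets[j]["id"])); used.add(j)
        else:
            unmatched.append(i)
    # Round 2: fall back to cosine similarity of appearance embeddings.
    for i in unmatched:
        e = detections[i]["embedding"]
        sims = [np.dot(e, t["embedding"]) /
                (np.linalg.norm(e) * np.linalg.norm(t["embedding"]) + 1e-8)
                if j not in used else -np.inf for j, t in enumerate(tracklets)]
        j = int(np.argmax(sims)) if sims else -1
        if j >= 0 and sims[j] > sim_thresh:
            assignments.append((i, tracklets[j]["id"])); used.add(j)
        else:
            assignments.append((i, None))   # start a new tracklet
    return assignments
```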
In addition, the model has better real-time tracking of multiple objects compared to previous models discussed in section <ref>, and it is robust to occlusions and appearance changes. We adapt the TraDeS algorithm for the thermal domain by retraining it using thermal data. We replicate the thermal channel into 3 channels (RGB) when feeding it to the algorithm. The training process is further discussed in section <ref>. §.§ Vessel Re-identification In vessel re-identification, our target is to extract the identity of a given query image using a set of gallery images (the database). Here, the main challenges are the lack of available data in thermal domain for marine vessels and the change of features for each vessel with different camera viewpoint. To tackle these issues, we used a model robust to the viewpoint, which can generalize well with a small amount of data. As the first step, we mask out the foreground (the vessel) from a given frame. Since thermal images do not contain color features and the intensity distribution of the foreground and the background are similar, conventional algorithms such as GrabCut [<cit.>] do not perform well in this task. As a solution, we used an encoder-decoder architecture (Appendix: Table <ref>) with residual connections to build the foreground mask of a given frame. We annotated foreground masks of 300 images as ground truth labels, and trained the model using those frames as input. Then, as shown in Fig. <ref>, the trained model was tested on previously unseen data, demonstrating its capability to accurately mask out foreground elements with complex viewpoints and intensity variations. Therefore, we propose this encoder-decoder architecture as a foreground extractor, specifically for colorless images, given that it can be trained on relevant data. Next, the extracted foreground is fed to the identification model, where we use an architecture as shown in Fig. <ref>.(a) that extracts the features of a given vessel. As the extractor, we used the pre-trained Dino ViT transformer model presented by Mathilde Caron et al. [<cit.>]. As shown in the Fig. <ref>.(c), we use four parallel linear layers to map the extracted feature vector to four latent spaces, namely global, front, rear, and side. We use these latent spaces to train the model to recognize vessel identities in different viewpoints since the vessel's features drastically change with the viewpoint. To get a better feature distribution, an ArcFace [<cit.>] mapping is used in each space. ArcFace is a feature analyzing technique that maps feature vectors onto a hypersphere, enhancing discrimination between different identities by maximizing inter-class variance while minimizing intra-class variance. It achieves state-of-the-art performance in face recognition tasks by embedding faces into a compact feature space. Next, we calculate L2 distances to each vessel in the database, in each space. These distances are multiplied by area ratios to embed the viewpoint information to the result and suppress erroneous information given from spaces corresponding to self-occluded views (Fig. <ref>) as given in eq. (<ref>). Finally, we sort the total distances in ascending order and select the first identity as the match for the query image. Distance_total(ID,Image) = {Distance_global(ID,Image). + Distance_front(ID,Image) ·AR_front + Distance_side(ID,Image)·AR_side + Distance_rear(ID,Image)·.AR_rear}/2 Inspired by the SPAN model, we calculate area ratios to generate masks of different viewpoints as shown in Fig. 
<ref>.(b). The ratio of each side is calculated using eq. (<ref>) and the qualitative evidence are shown in Fig. <ref>. AR_side X = Area of the sideX view mask/Area of the foreground mask, where sideX∈{front, side, rear}. When training the re-identification model, we freeze the area ratio calculation and fine-tune the linear layers to map the extracted features to the four spaces. We use identity classification and triplet loss in the training process. §.§.§ Identity Classification Loss In the identity classification, after the features are mapped to the four spaces, we use ArcFace mapping to get the cosine distance of each viewpoint space to calculate the confidence of each space and calculate the confidence as given in eq. (<ref>). Then we use the cross-entropy loss as the identity classification loss. §.§.§ Triplet Loss To promote discrimination and effective feature learning, we use triplet loss with Euclidean distance on the features mapped to the four spaces after the primary feature extraction. For the negative and positive samples, we use sample thermal images for each vessel identity manually, to make sure the model is robust to different viewpoints. The total loss is calculated as in eq. (<ref>), L_Total = λ_ID· L_ID + λ_Triplet· L_Triplet where λ_ID and λ_Triplet are hyper parameters. §.§ Activity Detection Activity detection can be done in both spatial and temporal domains, yet better results yield when both dimensions are considered together. Spatio-temporal action localization is approached in both supervised and semi-supervised techniques. We adapted the YOWO [<cit.>] algorithm which combines the spatial and temporal domain action localization. By integrating both information, YOWO effectively captures the dynamics and context of actions in videos. It has been originally trained to detect 200 human activities such as walking, talking, running, and cycling in the RGB domain. As shown in Fig. <ref>, the algorithm contains two main sections. It captures spatial information and spatiotemporal information separately, and it combines them to do the final classification using channel fusion and an attention module. When reorganizing, we changed the last layer of the model to detect two target activities, swimming and possible human trafficking footage, in our dataset. Then, we retrained the model to detect these two activities in thermal domain using our dataset. To obtain better results, we did a parameter fine tuning by monitoring the frame mAP and video mAP scores. The final values are set as IoU 0.6 and clip length 16. Furthermore, the feeding frame rate for the model is increased from 1 fps to 5 fps. §.§ System integration In order to facilitate real-time processing, the implementation of two pipelines was deemed necessary as shown in Fig. <ref>. These pipelines operate concurrently, receiving the video feed as input. The primary objective of the first pipeline (A and B) is to track and re-identify maritime vessels, while the second pipeline (C) detects suspicious activities within the video stream. The objects identified by the tracking algorithm are subsequently passed on to the re-identification algorithm to identify detected vessels compared with the database. The outcome of these pipelines, consisting of processed video feeds, is then channeled into a Graphical User Interface (GUI). This GUI serves as a centralized control interface, enabling users to oversee and manage the entire system. 
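Returning to the re-identification subsystem, the view-weighted scoring defined by the two equations above can be summarised in a short sketch; the per-view feature layout and the gallery structure are assumptions made purely for illustration.

```python
import numpy as np

def total_distance(query_feats, query_ars, gallery_feats):
    """View-weighted distance of the Distance_total equation above.

    query_feats  : dict 'global'/'front'/'side'/'rear' -> 1D feature vectors
    query_ars    : dict 'front'/'side'/'rear' -> area ratios AR of the query image
    gallery_feats: same per-view layout for one gallery identity
    """
    d = np.linalg.norm(query_feats["global"] - gallery_feats["global"])
    for view in ("front", "side", "rear"):
        d_view = np.linalg.norm(query_feats[view] - gallery_feats[view])
        d += query_ars[view] * d_view    # suppress self-occluded views
    return d / 2.0

def re_identify(query_feats, query_ars, gallery):
    """Return gallery identities sorted by total distance (best match first)."""
    scores = {vid: total_distance(query_feats, query_ars, feats)
              for vid, feats in gallery.items()}
    return sorted(scores, key=scores.get)
```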
Through the GUI, users can effectively monitor the detected activities and track the identified objects within the video stream in real-time. § EXPERIMENTS In this section, we describe datasets used, experiment methods and the procedure followed. We used two GeForce RTX 2080 GPUs to infer the system and report the performance indicators mentioned in Table <ref>. We conducted extensive experiments on multiple datasets using several state-of-the-art methods along with our method and the dataset. §.§ Datasets Our dataset: Our maritime dataset, captured using a FLIR M232 marine thermal camera, contains videos of moving vessels and maritime objects which are suitable for testing detection and tracking algorithms in the thermal domain. Bounding boxes are drawn for 4 classes including vessels, ships, humans, and jet skies. Furthermore, the dataset contains images of 40 small vessels and 32 large vessels from different viewpoints that can be used to train re-identification algorithms. It contains annotated video feeds of swimming and possible human trafficking activities that can be modeled as suspicious activities. VeRi776 [<cit.>]: This RGB dataset is a comprehensive collection of vehicle re-identification data, comprising a total of 49,357 images that feature 776 distinct vehicles captured by 20 different cameras. Furthermore, it contains bounding boxes and information regarding vehicle types, colors, and brands. VesselID-539 [<cit.>]: This RGB dataset is a collection of marine vessel images that were sourced from the website Marine Traffic (www.marinetraffic.com). The raw vessel image dataset encompasses a substantial quantity of data, comprising over 149,465 images representing 539 distinct vessels. On average, each vessel in the dataset is represented by approximately 277 images. VehicleID [<cit.>]: VehicleID dataset contains 26,267 RGB images of vehicles captured from different viewpoints in daytime. For our experiments, we used 500 identities as the training and validation dataset, and another 250 identities as the query and gallery images. Singapore Maritime Dataset (SMD) [<cit.>]: The Singapore Maritime Dataset consists of meticulously curated high-definition near-IR videos captured using strategically positioned Canon 70D cameras around the waters of Singapore. It encompasses on-shore videos acquired from fixed platforms along the shoreline, as well as on-board videos captured from moving vessels, providing diverse perspectives of the maritime environment. This division ensures comprehensive coverage and enables analysis across various viewpoints and scenarios. JHMDB-21 [<cit.>]: JHMDB is a collection of 960 RGB video sequences featuring 21 different actions for action recognition. It includes video data and annotations for puppet flow, puppet mask, joint positions per frame, action labels per clip, and meta labels per clip. UCF101-24 [<cit.>]: UCF101 is a dataset for recognizing actions in real-life RGB videos sourced from YouTube, encompassing 101 action categories. It builds upon the UCF50 dataset [<cit.>] and contains 13,320 videos spanning the expanded 101 action categories. Table <ref> summarizes the datasets and methods used for evaluation purposes in the results section. §.§ Training For object detection and tracking, we trained TraDeS for 4 classes, including vessels, ships, humans, and jet skies. The training was done in two phases. In the first phase, we trained on a subset of classes in the COCO dataset (RGB), relevant to the specific use case, such as vessels and humans. 
We converted RGB data to grayscale to make COCO images more similar to thermal images. In the second phase, we completely moved to the thermal domain by tuning the model using SMD dataset [<cit.>], along with our thermal data. Subsequently, we fine-tuned the algorithm by adjusting hyperparameters, the learning rate and detection threshold. In the re-identification module as shown in Fig. <ref>(b), we trained area ratio calculation and the feature mapping parts, separately. We trained area ratio calculation using the thermal data of vessels taken from different viewpoints. We used an encoder-decoder model as mentioned in Section <ref> for foreground masks extraction. Then, we trained the model responsible for part attention in SPAN to generate masks for viewpoints using our thermal dataset. In feature mapping part, we use Dino-ViT, which is pre-trained on the ImageNet dataset, for the initial feature extraction. Then, we use transfer learning to train the linear layers in Fig. <ref> Part (a). In this stage we use our thermal image dataset while keeping the area ratio calculation in the inference as it is already trained. § RESULTS AND DISCUSSION In this section, we first evaluate the performance of the TraDes algorithm in thermal domain for maritime object tracking to show that it can obtain similar results as in the RGB domain after the adaptation that we introduced. Second, we show that the view-weighted re-identification approach used in this paper outperforms SPAN method in both RGB and IR domains obtaining higher mAP values. Then, as ablations, we present results with and without view-weighted feature comparison, and effects of CNN and ViT based feature extractions. Furthermore, we evaluate the effect of the number of viewpoints in the feature comparison for the final outcome. Finally, in thermal activity detection, we show that YOWO algorithm performs on par with RGB domain results by evaluating it on JHMDB-21, UCF101-24, and our dataset. Evaluation of our tracking algorithm: We evaluated the performance of the TraDes algorithm in the near-IR and IR domains using SMD and our dataset. As shown in Table <ref>, higher MOTA and MAP scores in our dataset clearly indicate that the algorithm has successfully adapted to the specified classes (vessels, ships, and humans) in the IR domain. The algorithm has obtained a 61.2% MOTA score in the IR domain which is almost the same as the RGB domain performance. It indicates that we can track objects without color features with only a minor drop in the performance indicators. Also, we could maintain a 15 fps processing speed which is suitable for real-time online tracking. We noticed a considerable drop in the MOTA score for the MOT17 dataset when converted to the black-and-white (B&W) domain, which can be due to complex and highly dynamic environments with occlusions in the dataset, which is usually not the case for maritime environments. Therefore, the domain adaptation has been successfully achieved while conserving the performance of the algorithm. Evaluation of our re-identification algorithm: Our re-identification algorithm convincingly surpasses SPAN and ViT baselines in the IR domain while showing better performance even in the RGB domain with a higher mAP score. 
Specifically, in our IR dataset, our algorithm achieved a Top1 accuracy of 81.82% and a mAP score of 74.26% compared to SPAN's Top1 accuracy of 78.37% and mAP score of 73.62%, indicating the effectiveness of our method in handling infrared images with multiple vessel viewpoints (4.5% increment in the Top1 score in Table <ref>). Since there are no publicly available thermal vehicle/vessel datasets for re-identification, we conducted experiments with above mentioned RGB datasets to show that our method works competitively in the RGB domain, as well. In the VesselID-539 dataset, our algorithm achieved a Top1 accuracy of 82.60% (compared to SPAN's 82.43%), indicating that the proposed method, which is specified for thermal domain performance, is robust in the RGB domain as well. Moreover, across all datasets, our method shows considerably higher mAP scores, outperforming SPAN by 26.7% on average. We explain it using two concepts: (1) The ViT can extract complex features using its attention mechanism which enables paying more attention to specific shapes and masks of vehicles/vessels (Fig. <ref>). It puts more weight on those features in the feature vector, eventually pushing similar features closer in the feature space. As a result, the mAP score increases as shown in the third main column (ViT Base) of Table <ref>. However, the ViT only cannot maintain a good Top1 score due to the vast variations of the orientation. (2) Secondly, the ArcFace mapping further organizes features in 3 separate spaces (one for each side). It increases the inter-class distances in each side-space and enables side-wise feature comparison, increasing the Top1 and Top5 scores of the algorithm, as shown in the third (ViT Base) and fourth (ViT Base + VW) columns of Table <ref>. SPAN, in contrast, loses accuracy with increasing orientation changes, resulting in low mAP scores. However, our algorithm is capable of finding matches from the gallery, even with different orientations compared to the query image (Fig. <ref>), resulting in higher mAP scores. Moreover, we did another ablation study on the number of viewpoints considered in the feature comparison. Here, we considered the side with the highest area ratio as the largest view and did the feature comparison only for that side. As shown in Table <ref>, feature comparison in multiple viewpoints (typically 2 views appear in an image) increases the Top1 score by 15%. Evaluation of our activity detection algorithm: We evaluated the performance of the YOWO algorithm in the B&W domain by converting UCF101-24 and JHMDB-21 datasets. The algorithm performed well with only minor drops (8.5%) in performance indicators as shown in Table <ref>. Finally, we evaluated the algorithms for our dataset in the IR domain and obtained a frame mAP score of 72.4% and a video mAP score of 78.9%. § CONCLUSION In this paper, we have proposed a thermal vision based approach for maritime surveillance with main contributions in robust vessel re-identification and suspicious activity detection. To the best of our knowledge, this is the first time to address maritime vessel re-identification in the thermal domain. The adaptation of the TraDeS and the YOWO algorithms for object tracking and activity detection, respectively, was successful in obtaining competitive results as in the RGB domain. In the re-identification algorithm, our novel approach of mapping and comparing features in side-based separate spaces enabled viewpoint independency in re-identification. 
Our method is proven to be robust for the viewpoint variance in the thermal domain while providing consistent accuracy even with a higher number of classes. It outperformed SPAN algorithm in both thermal and RGB domains. Furthermore, the dataset we created contains images and videos of vessels and suspicious activities that can be used in tracking, activity detection, and vessel re-identification tasks. Currently, the integrated system works at 2 fps while independent subsystems work at 30 fps for tracking and 5 fps for activity detection. As further developments, a customized hardware setup can be developed for the system for a higher frame rate. Algorithms can be optimized further using parallel computing concepts and obtain a higher throughput. Furthermore, there is a wide research gap in the thermal maritime surveillance domain which should be explored in the future. § ACKNOWLEDGMENT Funding for the FLIR thermal camera was provided by the Senate Research Committee Capital Grant: SRC/CAP/2018/02. Computational resources were provided by the Creative Software Pvt. Ltd. model2-names § APPENDIX
http://arxiv.org/abs/2406.09119v1
20240613134815
On Modulation and Translation Invariant Operators and the Heisenberg Module
[ "Arvin Lamando", "Henry McNulty" ]
math.FA
[ "math.FA", "math.OA" ]
§ ABSTRACT We investigate spaces of operators which are invariant under translations or modulations by lattices in phase space. The natural connection to the Heisenberg module is considered, giving results on the characterisation of such operators as limits of finite–rank operators. Discrete representations of these operators in terms of elementary objects and the composition calculus are given. Different quantisation schemes are discussed with respect to the results. § INTRODUCTION Quantum Harmonic Analysis (QHA), introduced by Werner in <cit.>, extends operations of classical harmonic analysis, the convolution and Fourier transform, to operators, such that QHA and the classical operations interact as one would expect. Central to QHA is the Weyl quantisation, associating to each function (called the Weyl symbol) a corresponding operator such that the translations of operators introduced in <cit.> correspond to a translation of the Weyl symbol, and the Fourier transform of an operator corresponds to the Fourier transform of its Weyl symbol. By considering the operations of QHA on rank one operators, many fundamental objects of Time–Frequency Analysis (TFA) can be retrieved <cit.> <cit.>, which has led to efforts examining further the connections between QHA and TFA <cit.> <cit.>, as well as QHA of other group representations <cit.> <cit.> <cit.>. In <cit.>, the concept of modulation of an operator was considered to extend QHA to Quantum Time–Frequency Analysis, corresponding to a modulation of the Weyl symbol of the operator. In <cit.>, operators which were invariant under translations by some lattice Λ were considered. The canonical Λ-translation invariant operator is the Gabor frame operator, defined in <ref>, which is ubiquitous in the field of Gabor analysis, wherein one aims to discretise many of the operations from time–frequency analysis. As the Weyl symbols of such operators are Λ-periodic, the tools of Fourier analysis can be used to discretise the spreading function of such operators. In terms of QHA, this amounts to considering the Fourier series arising from the Fourier transform for operators <cit.>. Boundedness properties for Λ-translation invariant operators are studied in <cit.>. The Heisenberg modules, originally constructed by Rieffel in <cit.> as a tool to study the noncommutative tori, are well known to be connected to Gabor analysis <cit.>. Using the general technique of localization of Hilbert C^*-modules, as was done in <cit.>, any Heisenberg module can be continuously and densely embedded inside the space of square-integrable functions. Consequently, Gabor atoms coming from the Heisenberg modules generate continuous Gabor frame operators. In this work, we introduce the connection between the Heisenberg modules and QHA. We start with the simple observation that all adjointable maps of the Heisenberg module associated to the lattice Λ are exactly finite sums of Gabor frame operators on Λ (Lemma <ref>). It then follows that adjointable maps of the Heisenberg module associated to the lattice Λ are Λ-translation invariant operators.
Conversely, we shall use techniques from operator algebras to show that we can characterise Λ-translation invariant operators as limits (under different operator topologies) of periodisations of finite rank operators, generated by functions from the Heisenberg module. Our operator algebraic techniques necessitate that we go beyond the usual Banach Gelfand triple framework for sequences, and in turn, we also obtain characterisation results beyond the usual setting of ℳ^∞ for Λ-translation invariant operators. We then introduce the notion of Λ-modulation invariant operators, using the concept of operator modulation analogously to the translation invariant setting. In examining such operators the parity operator naturally arises as the only ℝ^2d-modulation invariant operator, as the identity is in the translation case. This reflects the fundamental nature of the Weyl transform, and its relation to the spreading representation of an operator. We consider how Λ-modulation invariant operators can be represented and recovered in discrete settings, in particular finding that they can be identified from discretisations of the convolution of QHA. We show that by choosing an appropriate lattice, Λ-modulation invariant operators are commutative and find the explicit form for their composition. This allows us to identify Λ-modulation invariant operators with Λ/2-translation invariant operators using the parity operator, and we discuss the equivalent statements for the Heisenberg module in terms of Λ-modulation invariant operators. We shall also consider the discrepancy between the density of the lattices Λ/2 and Λ, along with the implications on the minimum number of generators required to successfully characterise Λ-modulation invariant operators in terms of a new kind of operator-periodisation based on operator-modulation, called the “Fourier-Wigner operator-periodisation”. The relation between translation and modulation of operators depends fundamentally on the quantisation scheme used, and we discuss how Weyl quantisation is the natural setting for Quantum Time–Frequency Analysis. § PRELIMINARIES §.§ Time–Frequency Analysis We begin by introducing the basic objects in time–frequency analysis. The translation and modulation operators are defined as T_x f(t) : = f(t-x) M_ωψ(t) : = e^2π i ω· tf(t), for x,ω∈ℝ^d and f∈ L^2(ℝ^d), and can be extended by duality to tempered distributions. The composition of the two gives a time–frequency shift; π(z) := M_ω T_x where z=(x,ω), which are unitary on L^2(ℝ^d) with adjoint π(z)^*=e^-2π i x·ωπ(-z). The Short Time Fourier Transform (STFT) of f∈ L^2(ℝ^d) with respect to the window g∈ L^2(R^d) can then be defined as V_g f(z) = ⟨ f, π(z) g⟩_L^2. The STFT is an isometry from L^2(ℝ^d) to L^2(ℝ^2d) when g_L^2=1, and satisfies Moyal's orthogonality relation ⟨ V_g_1 f_1, V_g_2 f_2 ⟩_L^2 = ⟨ f_1, f_2⟩·⟨ g_1, g_2⟩_L^2, where the left hand side inner product is on L^2(ℝ^2d), while those on the right hand side are on L^2(ℝ^d). As a consequence, one finds the reconstruction formula f = 1/L^2hg∫_ℝ^2d V_g f(z) π(z)h dz, for ⟨ g, h⟩≠ 0. The integral can be understood weakly. Closely related to the STFT are the ambiguity function A(f,g)(x,ω) = e^iπ x·ωV_g f(x,ω), and the (cross)-Wigner distribution W(f,g)(x,ω) : = ∫_ℝ^2d f(x+t2)g(x-t2)e^-2π i ω tdz. The Weyl symbol σ_S of an operator S∈ℒ(𝒮(ℝ^d),𝒮^'(ℝ^d)), where 𝒮 is the Schwartz space, can be defined weakly as the unique distribution such that ⟨ Sf,g⟩_𝒮^',𝒮 = ⟨σ_S, W(g,f)⟩_𝒮^',𝒮. 
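On sampled signals these objects have direct finite-dimensional analogues; the numpy sketch below, with cyclic translations and an illustrative normalisation, is only meant to make the definitions of π(z) and the STFT concrete and is not a construction from the text.

```python
import numpy as np

def tf_shift(f, x, w, T=1.0):
    """Discrete analogue of pi(z) = M_w T_x acting on a signal sampled on [0, T)."""
    n = len(f)
    t = np.arange(n) * T / n
    shifted = np.roll(f, int(round(x * n / T)))     # translation T_x (cyclic)
    return np.exp(2j * np.pi * w * t) * shifted     # modulation M_w

def stft(f, g, xs, ws):
    """V_g f(x, w) = <f, pi(x, w) g> evaluated on a grid of time-frequency points."""
    n = len(f)
    V = np.zeros((len(xs), len(ws)), dtype=complex)
    for i, x in enumerate(xs):
        for j, w in enumerate(ws):
            V[i, j] = np.vdot(tf_shift(g, x, w), f) / n   # <f, pi(z) g>
    return V
```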
The mapping S↦σ_S is a unitary map from the space of Hilbert-Schmidt operators ℋ𝒮 to L^2(ℝ^2d). By taking a Schwartz function window g∈𝒮(ℝ^d), the STFT can be extended to the space of tempered distributions. By taking the L^2(ℝ^d) normalised Gaussian φ_0, the modulation spaces can then be defined as M^p,q_m(ℝ^d) = { f∈𝒮^'(ℝ^d): f_M^p,q_m := V_φ_0 f_L^p,q_m(ℝ^2d) < ∞} where L^p,q_m(ℝ^2d) is the weighted mixed–norm Lebesgue space (cf. Chapter 11 <cit.>). All Schwartz functions can be used as windows to define the same spaces with equivalent norms, and the dual of M^p,q_m(ℝ^d) is M^p',q'_1/m(ℝ^d) for 1≤ p,q < ∞. Furthermore, for 1≤ p_1 ≤ p_2 ≤∞ and 1≤ q_1 ≤ q_2 ≤∞, we have the continuous inclusions <cit.>: M^p_1,q_1(ℝ^d) ↪ M^p_2,q_2(ℝ^d). Of particular interest is the so-called Feichtinger's algebra M^1(ℝ^d) := M^1,1_1(ℝ^2d) with dual M^∞(ℝ^d), which generates the Banach Gelfand triple (M^1(ℝ^d),L^2(ℝ^d), M^∞(ℝ^d)), where M^∞(ℝ^d) is equipped with the weak-* topology. §.§ Harmonic Analysis on Phase Space This work is primarily concerned with phase space ℝ^d ×ℝ^d, where ℝ^d denotes the Pontryagin dual of ℝ^d. We will usually abbreviate and identify the phase space ℝ^d×ℝ^d with ℝ^2d, given the fact that ℝ^d≅ℝ^d. An element z∈ℝ^2d of the phase space will typically be denoted as a pair z=(x,ω) where x,ω∈ℝ^d. The symplectic form on the phase space is the mapping Ω: ℝ^2d×ℝ^2d→ℝ, defined via Ω(z,z') := x'·ω - x·ω', for z,z'∈ℝ^2d. The symplectic form Ω plays a central role in the Harmonic analysis on ℝ^2d since it appears in the the appropriate notion of a Fourier transform on the phase space. The symplectic Fourier transform is defined: ℱ_Ω f(z) := ∫_ℝ^2d f(z')e^-2π i Ω(z,z') dz', which shares the relevant properties with the standard Euclidean Fourier transform, although the symplectic Fourier transform is also its own inverse. Note now that the ambiguity function and the cross-Wigner distribution are connected via the symplectic Fourier transform: W(f,g)(x,ω)= ℱ_Ω(A(f,g))(x,ω). A lattice in ℝ^2d will always refer to a full–rank lattice Λ = Aℤ^2d for some A∈GL(2d,ℝ), and the lattice volume is given by |Λ|:=|(A)|. The adjoint lattice Λ^∘ is then the annihilator of Λ, and can be defined as Λ^∘ := {z∈ℝ^2d: π(z)π(λ)=π(λ)π(z), ∀λ∈Λ} = {z∈ℝ^2d: e^2π i Ω(z,λ)=1, ∀λ∈Λ}. Λ^∘ is then itself a lattice in ℝ^2d, with volume |Λ^∘|=1|Λ|. Importantly, Λ^∘ can be identified with the dual group of ℝ^2d/Λ, and as such any reasonably behaved Λ-periodic function f on ℝ^2d has a symplectic Fourier Series f(z) = ∑_λ^∘∈Λ c_λ^∘ e^-2π i Ω(z,λ^∘), and the coefficients {c_λ^∘}_λ^∘∈Λ are the symplectic Fourier coefficients of f. We define the space 𝒜(ℝ^2d/Λ) as the space of Λ-periodic functions with absolutely convergent symplectic Fourier coefficients, and 𝒜^'(ℝ^2d/Λ) as its dual, those functions with symplectic Fourier coefficients in ℓ^∞(Λ^∘). Given an M^1(ℝ^2d) function f, the periodisation P_Λ f := ∑_λ∈Λ T_λ f is an element of 𝒜(ℝ^2d/Λ), and moreover the map P_Λ: M^1(ℝ^2d)→𝒜(ℝ^2d/Λ) is bounded and surjective <cit.>. As a result of this, we can consider the symplectic Poisson summation formula for f∈ M^1(ℝ^2d): (P_Λ f)(z) = 1/|Λ|∑_λ^∘∈Λ^∘ℱ_Ω (f)(λ^∘)e^2π i Ω(z,λ^∘), ∀ z∈ℝ^2d, where the sum is absolutely convergent. The norm on Λ^∘ is given by the dual measure of ℝ^2d/Λ, and so the ℓ^p(Λ^∘)-norm is given by 𝐤_ℓ^p(Λ^∘):= 1/|Λ|^1/p(∑_λ^∘∈Λ^∘|k_λ^∘|^p)^1/p for a sequence 𝐤 = {k_λ^∘}_λ^∘∈Λ^∘. 
§.§ Gabor Analysis In time-frequency analysis, one is often concerned with analysis and synthesis on a lattice Λ of phase space (or the time-frequency plane) via the STFT and the time-frequency shifts. We introduce to the Gabor frame operator S_g,h,Λ for g,h∈ L^2(ℝ^d); S_g,h,Λf = ∑_λ∈ΛV_gf(λ)π(λ)h. Comparing (<ref>) with <ref>, the Gabor frame operator can be seen as a discretisation of the continuous reconstruction formula, S_g,hf analyses f by the samples of its STFT {V_gf(λ)}_λ∈Λ with respect to the analysing window g, and then synthesises using the sampled time–frequency shifts of the synthesising window h. A classical problem of Gabor analysis is to find a single Gabor atom g ∈ L^2(ℝ^d), in a given lattice Λ, such that S_g,g,Λ =: S_g,Λ is an invertible bounded linear operator in L^2(ℝ^d). This is equivalent to finding an h∈ L^2(ℝ^d), called a dual Gabor atom (with respect to g), where S_g,h,Λ=Id_L^2(ℝ^d). Finding such a g is tantamount to solving the so-called Gabor frame inequalities, and would give us perfect reconstruction of any f∈ L^2(ℝ^d) via: f = S_g,Λ^-1S_g,Λf = ∑_λ∈ΛV_gf(λ)π(λ)S^-1_g,Λg =S_g,ΛS_g,Λ^-1f = ∑_λ∈ΛV_g(S_g,Λ^-1f)π(λ)g. It is well-known that the Feichtinger's algebra M^1(ℝ^d)⊆ L^2(ℝ^d) is a particularly useful class of function space for Gabor atoms since they always give us continuous Gabor frame operators <cit.>. For our later reference, we include this as a proposition: For all lattices Λ⊆ℝ^2d and any g,h∈ M^1(ℝ^d), we have that S_g,h,Λ∈ℒ(L^2(ℝ^d)). §.§ Heisenberg Modules The Heisenberg modules were originally constructed by Rieffel in <cit.> to study the finitely generated projective modules of the higher dimensional noncommutative tori. Their connections to time–frequency analysis was first highlighted by Luef in <cit.>. We shall use the concrete description of the Heisenberg modules using the results of Austad and Enstad in <cit.>, obtained through localisation. To start, we need to define our relevant C^*-algebras. We introduce the Heisenberg 2-cocycle c: ℝ^2d×ℝ^2d→𝕋 via c(z_1,z_2) = e^-2π i x_1·ω_2 for z_1 = (x_1,ω_1), z_2 = (x_2,ω_2)∈ℝ^2d. Fixing z_3=(x_3,ω_3)∈ℝ^2d, the Heisenberg 2-cocycle satisfies the usual normalized 2-cocycle conditions: c(z_1,z_2)c(z_1+z_2,z_3) =c(z_1,z_2+z_3)c(z_2,z_3) c(z_1,0)=c(0,0) =c(0,z_2)=1. It is related to the time-frequency shifts via π(z_1)π(z_2) =c(z_1,z_2)π(z_1+z_2) = c(z_1,z_2)c(z_2,z_1)π(z_2)π(z_1). We see that π: ℝ^2d→ℒ(L^2(ℝ^d)) is not a unitary representation of the phase space ℝ^2d=ℝ^d×ℝ^d, but instead Equation (<ref>) makes π a 𝑐-projective representation of ℝ^2d on L^2(ℝ^d). Such representations can be seen as a particular instance of so-called covariant representations of twisted C^*-dynamical systems <cit.>. Let us now fix a lattice Λ of ℝ^2d, and we consider the sequence space ℓ^1(Λ) equipped with the following c-twisted convolution and involution respectively: (𝐚_1 ♮_c 𝐚_2)(λ) = ∑_μ∈μa_1(μ)a_2(λ-μ)c(μ,λ-μ) (𝐚_1)^*_c(λ) = c(λ,λ)a_1(-λ) for 𝐚_1,𝐚_2∈ℓ^1(Λ). It can be shown that ℓ^1(Λ) with the said structure along with its original norm gives us a Banach *-algebra, which we denote by ℓ^1(Λ,c). The C^*-completion (or the C^*-enveloping algebra <cit.>) of ℓ^1(Λ,c) will be hereafter denoted by A. 
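Before continuing with the C^*-algebraic setup, the Gabor frame operator S_{g,h,Λ} of the previous subsection admits a simple finite-dimensional sketch, reusing the tf_shift helper from the earlier illustration; the lattice, window and normalisation below are illustrative choices, not data from the text.

```python
import numpy as np

def frame_operator_apply(f, g, h, lattice, T=1.0):
    """S_{g,h,Lambda} f = sum_{lambda in Lambda} V_g f(lambda) pi(lambda) h (discrete sketch)."""
    n = len(f)
    out = np.zeros(n, dtype=complex)
    for (x, w) in lattice:
        coeff = np.vdot(tf_shift(g, x, w, T), f) / n   # V_g f(lambda) = <f, pi(lambda) g>
        out += coeff * tf_shift(h, x, w, T)            # synthesis with pi(lambda) h
    return out

# Example: a Gaussian atom on the rectangular lattice a Z x b Z, truncated to one period.
n, T = 128, 1.0
t = np.arange(n) * T / n
g = np.exp(-np.pi * ((t - T / 2) / 0.1) ** 2)
a, b = T / 16, 8.0                                     # a * b = 0.5 < 1: oversampled
lattice = [(i * a, j * b) for i in range(16) for j in range(-8, 8)]
f = np.random.randn(n)
Sf = frame_operator_apply(f, g, g, lattice)            # S_{g,Lambda} f
```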
We similarly obtain another C^*-algebra, which we denote by B, by taking the C^*-completion of the Banach *-algebra ℓ^1(Λ^∘,c) constructed by equipping ℓ^1(Λ^∘) with the following c-twisted structure: (𝐛_1 ♮_c𝐛_2)(λ^∘) = 1/|Λ|∑_μ^∘∈μ^∘b_1(μ^∘)b_2(λ^∘-μ^∘)c(μ^∘,λ-μ^∘) (𝐛_1)^*_c (λ^∘) =c(λ^∘,λ^∘)b_1(-λ^∘) for 𝐛_1,𝐛_2∈ℓ^1(Λ^∘). For the rest of this article we will interpret A and B as sequence spaces, as it was shown in <cit.> that we have the following continuous dense embeddings: A ↪ℓ^2(Λ) and B↪ℓ^2(Λ^∘). Finally, we note that both ℓ^1(Λ) and ℓ^1(Λ^∘) are unital Banach *-algebras with the twisted structure given above, therefore we conclude that their C^*-completions, A and B respectively, must both be unital as well. It is important to note that in the general case, for any lattice Γ⊆ℝ^2d, and any 2-cocyle θ: Γ×Γ→𝕋, the twisted Banach *-algebra ℓ^1(Γ,θ) is a *-semisimple *-algebra (see for example <cit.> and <cit.>) so that ℓ^1(Γ,θ) is densely embedded in its C^*-enveloping algebra. The integrated forms of the time-frequency shifts, given by π_A:A→ℒ(L^2(ℝ^d)) and π^*_B:B→ℒ(L^2(ℝ^d)), are densely defined via π_A(𝐚) =∑_λ∈Λa(λ)π(λ) π^*_B(𝐛) =1/|Λ|∑_λ^∘∈Λ b(λ^∘)π^*(λ^∘) for 𝐚∈ A, and 𝐛∈ B. These are actual C^*-representations, which are well known to be faithful <cit.>, and hence are isometries. The Heisenberg module ℰ_Λ(ℝ^d) is both a Hilbert left A-module and a Hilbert right B-module <cit.>, and in particular it is an A-B-equivalence bimodule <cit.> <cit.>. We give, through a theorem below, a concrete description of the Heisenberg modules <cit.>. Given a lattice Λ⊆ℝ^2d, we denote the associated Heisenberg module by ℰ_Λ(ℝ^d). It can be obtained by completing the Feichtinger's algebra M^1(ℝ^d) with respect to the norm g_ℰ_Λ(ℝ^d):= S_g,Λ_ℒ(L^2)^1/2. The Heisenberg module is continuously densely embedded in L^2(ℝ^d), with the following estimate: f_L^2≤√(|Λ|)f_ℰ_Λ(ℝ^d), ∀ f∈ℰ_Λ(ℝ^d). Furthermore, ℰ_Λ(ℝ^d) is an A-B equivalence bimodule with the following Hilbert left A-module and Hilbert right B-module structure, for λ∈Λ, λ^∘∈Λ^∘, 𝐚∈ A, 𝐛∈ B, and f,g∈ℰ_Λ(ℝ^d): 2 * Λfg(λ):=L^2fπ(λ)g * Λ^∘fg(λ^∘):= L^2gπ^*(λ^∘)f * 𝐚 f := π_A(𝐚)f * f 𝐛 := π^*_B(𝐛)f. The norm defined in (<ref>) is well-defined due to <ref>. It follows from <ref> that {V_gf(λ)}_λ∈Λ = Λfg∈ A whenever f,g∈ℰ_Λ(ℝ^d) and that π(𝐚)∈ℒ(ℰ_Λ(ℝ^d)) whenever 𝐚∈ A. Note as well that the structure respects the dense inclusions ℓ^1(Λ)↪ A and M^1(ℝ^d)↪ℰ_Λ(ℝ^d), in the sense that Λfg∈ℓ^1(Λ) whenever f,g∈ M^1(ℝ^d) and that 𝐚f ∈ M^1(ℝ^d) whenever 𝐚∈ℓ^1(Λ) and f∈ M^1(ℝ^d). Analogous observations hold for the Hilbert right B-module structure. There are two structure preserving maps for A-B equivalence bimodules, the so-called A and B adjointable maps. We say that T:ℰ_Λ(ℝ^d)→ℰ_Λ(ℝ^d) is an A-adjointable map if is A-linear i.e. T(𝐚f)=𝐚(Tf) for all 𝐚∈ A and f∈ℰ_Λ(ℝ^d). Furthermore, there must exist an adjoint T^*:ℰ_Λ(ℝ^d)→ℰ_Λ(ℝ^d) such that ΛTfg=ΛfT^*g for all f,g∈ℰ_Λ(ℝ^d). The definition for B-adjointable maps is similar. It can be shown that adjointability of a map implies that it has a unique adjoint, and that it is a bounded linear operator on the Heisenberg module, but the converse in not true <cit.>. For the rest of the article, we shall denote A-adjointable and B-adjointable maps by ℒ_A(ℰ_Λ(ℝ^d)) and ℒ_B(ℰ_Λ(ℝ^d)) respectively. The next result from <cit.> motivates the study of Heisenberg modules as a viable space of functions for Gabor analysis since it turns out that ℰ_Λ(ℝ^d) inherits the crucial property <ref> of M^1(ℝ^d). 
If g,h∈ℰ_Λ(ℝ^d), then S_g,h,Λ∈ℒ(L^2(ℝ^d)). Furthermore, the restriction (S_g,h,Λ)_|ℰ_Λ(ℝ^d) is an A-adjointable map in ℰ_Λ(ℝ^d). Since ℰ_Λ(ℝ^d) is an equivalence bimodule, the associated inner-products satisfy the associative formula, for each f,g,h ∈ℰ_Λ(ℝ^d); Λfgh = fΛ^∘gh. When written explicitly, we have ∑_λ∈ΛL^2fπ(λ)gπ(λ)h = 1/|Λ|∑_λ^∘∈Λ^∘L^2hπ^*(λ^∘)gπ^*(λ^∘)f. From which we can deduce that S_g,h,Λ = π^*_B(Bgh) = 1/|Λ|∑_λ∈Λ^∘L^2hπ(λ^∘)gπ(λ^∘), which is the well-known Janssen's representation <cit.> extended to functions coming from the Heisenberg modules <cit.>. §.§ Quantum Harmonic Analysis We define the Schatten classes 𝒮^p as the spaces with p-summable singular values. In particular, 𝒮^1 corresponds to the trace class operators where S_𝒮^1 := tr(|S|) = ∑_n ⟨ |S| e_n, e_n ⟩_L^2 is finite for any orthonormal basis {e_n}_n∈ℕ, and the Hilbert-Schmidt operators ℋ𝒮:=𝒮^2 are those operators for which S_ℋ𝒮 := ∑_n ⟨ S e_n, S e_n ⟩_L^2 is finite. Quantum Harmonic Analysis was introduced in <cit.>, wherein convolutions and Fourier transforms were extended to operators. Central to these notions is the insight that defining the operator translation α_z (S) := π(z)Sπ(z)^* for some S∈ℋ𝒮 corresponds to a translation by z of the Weyl symbol of S, that is to say σ_α_z(S) = T_zσ_S. For operators S,T∈𝒮^1, and f∈ L^1(ℝ^2d), operator-operator and function–operator convolutions can then be defined as S ⋆ T(z) := tr(Sα_z(T)) f ⋆ S := ∫_ℝ^2d f(z)α_z(S) dz, where T:=PTP and the integral can be interpreted as a Bochner integral. Note that the convolution of two operators gives a function on phase space, while the convolution of an operator and a function gives an operator. It turns out that the resulting function and operator in the definition are in L^1(ℝ^2d) and the trace class 𝒮^1 respectively, and the space of arguments can be extended according to a Young's type relation <cit.>. The convolutions correspond to convolutions on the Weyl symbols in the following sense: S ⋆ T = σ_S * σ_T σ_f⋆ S = f * σ_S. Along with convolutions, a Fourier transform for operators is introduced. For a trace class operator S, the Fourier-Wigner transform is defined as ℱ_W (S)(z) := e^-iπ x·ωtr(π(-z)S). The Fourier-Wigner transform of an operator is a function on phase space, and its inverse is the integrated Schrödinger representation <cit.>, which maps functions on phase space to operators. The Fourier-Wigner transform of an operator is closely related to its Weyl symbol: ℱ_W (S) = ℱ_Ω (σ_S). Extending the Fourier-Wigner transform to Hilbert-Schmidt operators gives a unitary transformation from ℋ𝒮 to L^2(ℝ^2d). The aforementioned convolutions and Fourier-Wigner transform follow an analogous convolution formula as one would expect based on the function case, although with respect to the symplectic Fourier transform since the convolutions are defined on phase space: ℱ_Ω (S⋆ T) = ℱ_W (S)ℱ_W (T) ℱ_W (f⋆ S) = ℱ_Ω(f)ℱ_W (S). In <cit.>, modulation spaces of operators were defined, where in the case p=q, the space ℳ^p corresponds to the operators with Weyl symbol in M^p(ℝ^2d). The Gelfand triple (M^1(ℝ^2d),L^2(ℝ^2d),M^∞(ℝ^2d)) then corresponds to the operator Gelfand triple (ℳ^1,ℋ𝒮,ℳ^∞). Operators in the space ℳ^1 correspond to the nuclear operators 𝒩(M^∞(ℝ^d);M^1(ℝ^d)), while the space ℳ^∞ are precisely the bounded operators ℒ(M^1(ℝ^d);M^∞ (ℝ^d)) <cit.>. 
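As a concrete illustration of the Fourier-Wigner transform, consider a rank-one operator. Writing ξ⊗η for the rank-one operator φ↦⟨φ, ξ⟩_L^2 η (the convention consistent with the operator-periodisation expansion of the Gabor frame operator recalled below), and using the identity π(z)^* = e^-2π i x·ωπ(-z), a direct computation gives tr(π(-z)(ξ⊗η)) = ⟨π(-z)η, ξ⟩_L^2 = e^2π i x·ω⟨η, π(z)ξ⟩_L^2, so that ℱ_W(ξ⊗η)(z) = e^π i x·ω V_ξη(z). In other words, up to a chirp, the Fourier-Wigner transform of a rank-one operator is a short-time Fourier transform; this is one way to see how the operator convolutions above reproduce familiar time-frequency quantities such as spectrograms.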
Since the modulation spaces M^p(ℝ^2d) are invariant under metaplectic transformations, if the Weyl symbol of an operator is in some modulation space; σ_S ∈ M^p(ℝ^2d), then the same is true of the Fourier-Wigner transform of the operator, and the integral kernel of the operator <cit.>. By duality we can extend operator translations α_z to ℳ^∞. The Λ-translation invariant operators <cit.> are then those operators T∈ℳ^∞ such that T=α_λ (T) for every λ∈Λ. The canonical example of Λ-translation invariant operators are the Gabor frame operators: Given some atom g,h∈ L^2(ℝ^d), the Gabor frame operator can be written as an operator-periodisation: S_g,h,Λ =∑_λ∈Λα_λ(g⊗ h) =∑_λ∈Λπ(λ) g ⊗π(λ)h and hence, is a Λ-translation invariant operator. By <ref>, these are operators with Λ-periodic Weyl symbols, and so there exists a symplectic Fourier-series type expansion of Λ-translation invariant operators: For each Λ-translation invariant operator T∈ℳ^∞ such that σ_T ∈(M^1,L^2,M^∞)(ℝ^2d/Λ), there exists a unique 𝐤 ={k_λ^∘}_λ^∘∈Λ^∘∈(ℓ^1,ℓ^2,ℓ^∞)(Λ^∘) such that T = 1/|Λ|∑_λ^∘∈Λ^∘k_λ^∘ e^-π i λ_1^∘·λ_2^∘π(λ^∘), where the sum converges weakly in the ℳ^∞ case. In keeping with the interpretation that Λ-translation invariant operators are Λ-periodic, we refer to the unique sequence 𝐤={k_λ^∘}_λ^∘∈Λ^∘ in representation (<ref>) the operator Fourier-coefficients of T. Λ-translation invariant operators observe a pleasing symbol calculus, namely if 1/ab= n and S,T are Λ-translation invariant operators where Λ=aℤ^d× bℤ^d, then σ_S·σ_T = σ_S T. § LAMBDA-TRANSLATION INVARIANT OPERATORS VIA FINITE-RANK OPERATORS GENERATED BY THE HEISENBERG MODULE In <cit.> a Janssen-type representation formula was found for general trace-class operators, from which one can then prove a correspondence between Λ-translation invariant operators (as they should be amenable to an operator-periodisation representation) T∈ℳ^∞ and their Fourier coefficients 𝐤 as in Theorem <ref>. We further investigate the theory of Λ-translation invariant operators through a combination of this Janssen-type correspondence and the characterisation of adjointable maps of the Heisenberg modules as Gabor frame operators. We shall then make the case here that the Gabor frame operators not only give us `canonical examples' of Λ-translation invariant operators, but they can be used to topologically characterise all Λ-translation invariant operators not only in ℳ^∞, but in ℒ(L^2(ℝ^d)) as well. As a corollary of Equation <ref>, we must be able to characterise all Λ-translation invariant operators in ℳ^∞ and ℒ(L^2(ℝ^d)) in terms of some topological limit of operator-periodisation of finite-rank operators. We first start with making the connection between Λ-translation invariant operators and A-adjointable maps of the Heisenberg module ℰ_Λ(ℝ^d) explicit. Let T∈ℒ_A(ℰ_Λ(ℝ^d)), then it follows that for all 𝐚∈ℓ^1(Λ) and f∈ M^1(ℝ^d) that: T(𝐚f) = 𝐚(Tf) T( ∑_λ∈Λa(λ)π(λ)f ) = ∑_λ∈Λa(λ)π(λ) (Tf) If we take 𝐚 = δ_λ,0 in particular, then Tπ(λ)f = π(λ)(Tf) for all f∈ M^1(ℝ^d). If we combine this with the fact that ℰ_Λ(ℝ^d)↪ L^2(ℝ^d)↪ M^∞(ℝ^d), then we obtain that T_|M^1(ℝ^d)∈ℳ^∞ and is a Λ-translation invariant operator. Subsequently, we may ask, where exactly do the Fourier-coefficients 𝐤 of T_|M^1(ℝ^d) lie within ℓ^∞(Λ^∘)? To answer this, we have the following lemma regarding the A-adjointable maps of ℰ_Λ(ℝ^d). There exists an n∈ℕ such that for any T∈ℒ_A(ℰ_Λ(ℝ^d)), we have T=∑_i=1^n(S_g_i,h_i,Λ)_|ℰ_Λ(ℝ^d), where g_1,...,g_n,h_1,...,h_n∈ℰ_Λ(ℝ^d). 
Since we are working with a lattice Λ, then as previously noted, the coefficient C^*-algebras A and B are unital. In particular, unitality of B and fullness of ℰ_Λ(ℝ^d) with respect to B implies that there exist g_1,...,g_n,ψ_1,...,ψ_n ∈ℰ_Λ(ℝ^d) such that ∑_i=1^n Bg_iψ_i=1_B for the identity 1_B∈ B. It follows that if T∈ℒ_A(ℰ_Λ(ℝ^d)) and f∈ℰ_Λ(ℝ^d), we have Tf = T(f 1_B) = T( ∑_i=1^n fBg_iψ_i) = T(∑_i=1^n Afg_iψ_i ) = ∑_i=1^n Afg_i(Tψ_i) = ∑_i=1^n S_g_i,h_i,Λf where we took h_i = Tψ_i. We need to introduce another 2-cocyle so that we can make sense of the connection between the adjointable maps and the canonical representation (<ref>) of Λ-translation invariant operators. We define c': ℝ^2d×𝕋^2d→𝕋 via: c'((x_1,ω_1),(x_2,ω_2)) = e^π i (x_2·ω_1 - x_1·ω_2). We denote by B' the enveloping C^*-algebra obtained from the Banach *-algebra ℓ^1(Λ^∘,c'). Let ρ: Λ^∘→𝕋 via ρ(λ^∘) = e^π i λ_1^∘λ_2^∘. Then the map ρ: ℓ^1(Λ^∘)→ℓ^1(Λ^∘) ρ(𝐛)(λ^∘)=b(-λ^∘)ρ(λ) = b(-λ^∘)e^π i λ_1^∘·λ_2^∘, ∀λ^∘∈Λ^∘ extends to a C^*-isomorphism ρ:B' → B. ρ: ℓ^1(Λ^∘)→ℓ^1(Λ^∘) is obviously a Banach space automorphism with inverse (ρ)^-1(𝐛)(λ^∘) = b(-λ^∘)e^-π i λ^∘_1 ·λ^∘_2. It also follows from a straight-forward computation that: ρ(𝐛_1*_c'𝐛_2) = ρ(𝐛_1)*_cρ(𝐛_2) and ρ(𝐛_1^*_c')=ρ(𝐛_1)^*_c, for each 𝐛_1,𝐛_2∈ℓ^1(Λ^∘,c'). Therefore on the common dense subspace ℓ^1(Λ^∘), we have that ρ:ℓ^1(Λ,c')→ℓ^1(Λ,c) is a Banach *-algebra isomorphism, hence ρ must extend to a C^*-isomorphism. The isomorphism B'≅ B can be explained by the fact that the 2-cocycles c and c' are cohomologous. An account of cohomology on 2-cocycles with applications in Gabor analysis can be found in <cit.>. We now reiterate that we have the following continuous norm-dense embeddings: ℓ^1(Λ^∘)↪ B ≅ B' ↪ℓ^2(Λ^∘)↪ℓ^∞(Λ^∘). From this observation we obtain the following result. T∈ℳ^∞ is a Λ-translation invariant operator whose Fourier coeffients 𝐤 lie in B', equivalently ρ(𝐤)∈ B, if and only if T extends to an A-adjointable map. In this case, there exists an n∈ℕ such that T is a finite sum of n Gabor frame operators, and further extends to a map in ℒ(L^2(ℝ^d)) with T_ℒ(L^2)=𝐤_B'=ρ(𝐤)_B. Suppose that T∈ℳ^∞ is a Λ-translation invariant operator with Fourier coefficients 𝐤 = {k_λ^∘}_λ^∘∈Λ^∘∈ B' so that T = 1/|Λ|∑_λ^∘∈Λ^∘ k_λ^∘e^- π i λ^∘_1·λ^∘_2π(λ^∘) = 1/|Λ|∑_λ^∘∈Λ^∘k_-λ^∘e^π i λ_1^∘·λ_2^∘π^*(λ^∘)= 1/|Λ|∑_λ∈Λρ(𝐤)(λ^∘)π^*(λ^∘). We then obtain that T = π^*_B(ρ(𝐤))_|M^1(ℝ^d). Now it follows from the fact that B is unital and ℰ_Λ(ℝ^d) is full with respect to B, that there exists g_1,...,g_n,h_1,...,h_n∈ℰ_Λ(ℝ^d) such that ρ(𝐤) = ∑_i=1^n Λ^∘g_ih_i. We obtain T = ∑_i=1^nπ^*_B(Λ^∘g_ih_i)_|M^1(ℝ^d). It now follows from the Janssen's representation for the Heisenberg modules (<ref>) that T = ∑_i=1^n (S_g_i,h_i,Λ)_|M^1(ℝ^d), from which we can conclude that T extends to an A-adjointable map via Proposition <ref>. The converse easily follow from the uniqueness of the operator Fourier coefficients, Lemma <ref>, Lemma <ref>, and the Janssen's representation for the Heisenberg modules (<ref>). In particular, T = ∑_i=1^n S_g_i,h_i,Λ = π^*_B(∑_i=1^nBg_ih_i) for some g_1,...,g_n,h_1,...,h_n∈ℰ_Λ(ℝ^d) and ρ(𝐤)= ∑_i=1^n Bg_ih_i. It follows from Proposition <ref> that T extends to ℒ(L^2(ℝ^d)), and that T_ℒ(L^2) = π^*_B(∑_i=1^n Bg_ih_i)_ℒ(L^2)=ρ(𝐤)_B=𝐤_B'. For each Λ-translation invariant operator T∈ℳ^∞ with Fourier coefficients 𝐤∈ℓ^∞(Λ^∘), we have the following estimate: T_ℳ^∞≤𝐤_∞/|Λ|. We compute T_ℳ^∞ =sup{|M^∞,M^1Tfg| : f_M^1=g_M^1=1 }. 
Using Equation (<ref>) we obtain: T_ℳ^∞ ≤1/|Λ|𝐤_∞sup{∑_λ^∘∈Λ^∘|𝒱_fg(λ^∘)|: f_M^1=g_M^1=1} ≤1/|Λ|𝐤_∞sup{∫_ℝ^2d|𝒱_fg(x,ω)|x̣ω̣: f_M^1=g_M^1=1} ≤1/|Λ|𝐤_∞, where the last inequality follow from <cit.>. There exists an n∈ℕ such that every Λ-translation invariant operator in ℳ^∞ is the norm-limit of operator-periodisations of rank-n operators generated by functions coming from ℰ_Λ(ℝ^d). Suppose T is a Λ-translation invariant operator in ℳ^∞ with Fourier coefficients 𝐤∈ℓ^∞(Λ^∘). We know from the continuous norm-dense embeddings (<ref>) that there exists a sequence {𝐤_m}_m∈ℕ⊆ B' such that 𝐤-𝐤_m_∞→ 0. For each m∈ℕ, define the Λ-translation invariant operator T_m∈ℳ^∞ via T_m := 1/|Λ|∑_λ∈Λ^∘k_m,λ^∘e^-π i λ_1^∘·λ_2^∘π(λ^∘). It then follows from Lemma <ref> that T-T_m_ℳ^∞≤𝐤-𝐤_m_∞/|Λ|→ 0. But then we know from Theorem <ref> that there exists a fixed n∈ℕ such that each T_m is in fact a finite sum of Gabor frame operator T_m = ∑_i=1^n (S_g_i,h_i,Λ)_|M^1(ℝ^d) for g_1,...,g_n,h_1,...,h_n∈ℰ_Λ(ℝ^d). By Equation (<ref>), T_m = ∑_λ∈Λα_λ(∑_i=1^n g_i⊗ h_i), which is what we wanted to prove. We have seen that the C^*-algebraic aspects of the theory coming from the Heisenberg modules has allowed us to fully characterise Λ-translation invariant operators in ℳ^∞, where we have gone outside the usual confines of the Gelfand triple setting by considering the intermediate sequence space B and B'. We now consider the von Neumann algebraic aspect of the theory by going out of the the ℳ^∞ setting. We denote the space of Λ-invariant operators in ℒ(L^2(ℝ^d)) via: ℒ_Λ(L^2(ℝ^d)):={T∈ℒ(L^2(ℝ^d)): T=α_λ(T), ∀λ∈Λ}. The following lemma is immediate. T∈ℒ_Λ(L^2(ℝ^d)) if and only if it is an extension of a Λ-translation invariant operator in S∈ℳ^∞ such that Im(S)⊆ L^2(ℝ^d). In this case, T can be represented as in (<ref>), with operator Fourier-coefficients 𝐤∈ℓ^2(Λ^∘). The characterisation for ℒ_Λ(L^2(ℝ^d)) as stated above is immediate. We only show that T∈ℒ_Λ(L^2(ℝ^d)) will necessarily have an operator Fourier-coefficients satisfying 𝐤∈ℓ^2(Λ^∘). We have from (<ref>) that for all f,g∈ L^2(ℝ^d) L^2TfTg = 1/|Λ|^2∑_λ^∘,ν^∘∈Λ^∘k_λ^∘k_ν^∘e^-π i (λ_1^∘·λ_2^∘ - ν_1^∘·ν_2^∘)L^2π(λ^∘)fπ(λ^∘)g. By letting f=g, and taking the supremum over all f_L^2=1 in the equation above, we obtain ∞ > T_ℒ(L^2)^2≥1/|Λ|^2∑_λ^∘∈Λ^∘|k_λ^∘|^2, which shows 𝐤∈ℓ^2(Λ^∘). Note that all of our results on Λ-translation invariant operators in this section have an analogue for when we replace Λ with its adjoint Λ^∘. Relevant to this is the fact that we can construct the Heisenberg module again by switching the roles of the lattice Λ and its lattice Λ^∘, giving us ℰ_Λ^∘(ℝ^d). It is in fact also true that ℰ_Λ^∘(ℝ^d)=ℰ_Λ(ℝ^d) <cit.>. Next, if S⊆ℒ(L^2(ℝ^d)), we denote the commutant of S in ℒ(L^2(ℝ^d)) via S':= {T∈ℒ(L^2(ℝ^d)): Ts=sT, ∀ s∈ S}. The double commutant of S is S”=(S')'. We use S, S^SOT, S^WOT to denote the closure of S with respect to the norm, strong-operator, and weak-operator topologies respectively. Finally, von Neumann's double commutant theorem <cit.> will feature prominently in the subsequent proofs. We have: π_B^*(ℓ^1(Λ^∘))” = π_B^*(B)”. Note that π_Λ^∘^*(ℓ^1(Λ^∘)) is a unital *-subalgebra of ℒ(L^2(ℝ^d)), therefore it follows from the double commutant theorem and the fact that the norm-topology contains the strong operator topology, which further contains the weak operator topology, that: π_B^*(ℓ^1(Λ^∘)) ⊆π_B^*(ℓ^1(Λ^∘))⊆π_B^*(ℓ^1(Λ^∘))^WOT = π_B^*(ℓ^1(Λ^∘))^SOT = π_B^*(ℓ^1(Λ^∘))”. 
However, π_B^* is a *-homomorphism, and thus has a closed range, from which it follows that π_B^*(ℓ^1(Λ^∘)) = π_B^*(ℓ^1(Λ^∘)) = π_B^*(B). Therefore Equations (<ref>) and (<ref>) imply that π^*_B(B)⊆π_B^*(ℓ^1(Λ^∘))”, from which it follows that π_B^*(B)”⊆π_B^*(ℓ^1(Λ^∘))”. The reverse inclusion follows easily from the fact that π_B^*(ℓ^1(Λ^∘))⊆π_B^*(B), whence π_B^*(ℓ^1(Λ^∘))”⊆π_B^*(B)”. We have proved π^*_B(ℓ^1(Λ^∘))” = π_B^*(B)”. We have: ℒ_Λ(L^2(ℝ^d))= ℒ_Λ^∘(L^2(ℝ^d))' Suppose T∈ℒ_Λ^∘(L^2(ℝ^d))', because π(λ)∈ℒ_Λ^∘(L^2(ℝ^d)) for all λ∈Λ, then T commutes with all π(λ)∈Λ, so T∈ℒ_Λ(L^2(ℝ^d)). We obtain ℒ_Λ^∘(L^2(ℝ^d))'⊆ℒ_Λ(L^2(ℝ^d)). For the reverse inclusion, we use <ref> adapted for Λ^∘-translation invariant operators, so that T^∘∈ℒ_Λ^∘(L^2(ℝ^d)) can always be written T^∘= 1/|Λ^∘|∑_λ∈Λk_λe^-π i λ_1·λ_2π(λ) for some {k_λ}_λ∈Λ∈ℓ^∞(Λ). On the other hand, if T∈ℒ_Λ(L^2(ℝ^d)), we find due to (<ref>) that TT^∘=T^∘T, whence ℒ_Λ(L^2(ℝ^d))⊆ℒ_Λ^∘(L^2(ℝ^d))'. We now have ℒ_Λ(L^2(ℝ^d))= ℒ_Λ^∘(L^2(ℝ^d))' We now obtain a von Neumann algebraic version of the Theorem <ref>. We have: ℒ_Λ(L^2(ℝ^d))=π^*_B(B)”. As a corollary, there exists an n∈ℕ such that every operator in ℒ_Λ(L^2(ℝ^d)) is a weak-operator, or strong-operator limit of periodisations of rank-n operators generated by functions coming from ℰ_Λ(ℝ^d). Suppose T∈π^*_B(ℓ^1(Λ^∘))', because π(λ^∘)∈π^*_B(ℓ^1(Λ^∘)) for all λ^∘∈Λ^∘, then T commutes π(λ^∘) for all λ∈Λ^∘. Therefore π^*_B(ℓ^1(Λ^∘))'⊆ℒ_Λ^∘(L^2(ℝ^d)). Taking the commutant of the inclusion gives us ℒ_Λ(L^2(ℝ^d))=ℒ_Λ^∘(L^2(ℝ^d))'⊆π_B^*(ℓ^1(Λ^∘))”. On the other hand, we note that π(λ)∈π_B^*(ℓ^1(Λ^∘))' for all λ∈Λ, hence if T∈π_B^*(ℓ^1(Λ^∘))”, then Tπ(λ)=π(λ)T for all λ∈Λ. We have obtained π_Λ^∘^*(ℓ^1(Λ^∘))”⊆ℒ_Λ(L^2(ℝ^d)). All-in-all, we have shown that ℒ_Λ(L^2(ℝ^d))= ℒ_Λ^∘(L^2(ℝ^d))'=π_B^*(ℓ^1(Λ^∘))”. Equation (<ref>) now follows from Equations (<ref>) and (<ref>). As a corollary of Equation (<ref>) and the double commutant theorem: ℒ_Λ(L^2(G)) = π_B^*(B)” = π_B^*(B)^SOT = π_B^*(B)^WOT. As a consequence of unitality of B and fullness of ℰ_Λ(ℝ^d), there exists a fixed n∈ℕ such that the operators in π^*_B(B) (similar to the proof in Lemma <ref>) are exactly of the form ∑_i=1^n S_g_i,h_i,Λ where g_1,...,g_n,h_1,...,h_n∈ℰ_Λ(ℝ^d). The result now follows from Equation (<ref>). § WEYL AND SPREADING FUNCTION QUANTISATION SCHEMES AND THE PARITY OPERATOR Before we explicitly define operator modulations, we consider the background motivating this approach, following <cit.>. For any operator H∈ (ℳ^1,ℋ𝒮,ℳ^∞), there exists a unique spreading function η_H ∈ (M^1(ℝ^2d),L^2(ℝ^2d),M^∞(ℝ^2d)), such that H = ∫_ℝ^2dη_H (z) π(z) dz. The integral can be understood weakly. The spreading function thus has an intuitive interpretation, as describing how the time-frequency concentration of a function is "spread" by the operator; a spreading function concentrated near the origin will act similarly to the identity, and the effect of a translation of a spreading function is also clear. The closely related Weyl symbol of an operator is also a Gelfand triple isomorphism, recalling that this can be defined weakly in time–frequency analysis as the unique operator satisfying ⟨ L_σf, g⟩_𝒮^',𝒮 = ⟨σ, W(f,g)⟩_𝒮^',𝒮 for all f,g∈𝒮(ℝ^d). The spreading function and Weyl symbol of an operator are related by the symplectic Fourier transform, with a phase factor: η_S(z) = e^iπ x·ωℱ_Ω(σ_S)(z). We now turn our attention to a particular operator, the parity operator P. The parity operator is defined as the map P: L^2(ℝ^d) → L^2(ℝ^d) f(t) ↦ f(-t). 
The parity operator has the property Pπ(z) = π(-z)P, since Pπ(z)f(t) = Pe^2π i ω tf(t-x) = e^-2π i ω tf(-t-x) = π(-z)Pf(t). We then find that α_z(P) = π(z)Pπ(z)^* = e^-2π i x·ωπ(z)P π(-z) = e^-2π i x·ωπ(z)π(z)P = e^-4π i x·ωπ(2z) P. The following Lemma will also help with some of our characterisation results in the sequel: The parity operator is a self-inverse unitary map in ℒ(L^2(ℝ^d)), and restricts to an isometric isomorphism P_|M^1(ℝ^d):M^1(ℝ^d)→ M^1(ℝ^d). That P∈ℒ(L^2(ℝ^d)) is self-inverse is obvious, while unitarity follows from: ∫_ℝ^2df(z)ẓ = ∫_ℝ^2df(-z)ẓ, ∀ f∈ L^1(ℝ^d). That P restricts to an actual isometric isomorphism on M^1(ℝ^d) can be found in <cit.>. We claim the following: ∫_ℝ^2d e^2π i Ω(z,z') - i π z'_1 z'_2π(z') dz' = 2^d α_z(P). To show this we consider the integral weakly: L^2f∫_ℝ^2d e^2π i Ω(z,z') - i π z'_1 z'_2π(z') dz' g = ∫_ℝ^2d e^-2π i Ω(z,z') + i π z'_1 z'_2⟨ f,π(z') g⟩_L^2 dz' = ℱ_Ω (A_g f) (z) = W(f,g). Substituting the identity (cf. Lemma 4.3.1, <cit.>) W(f,g) = 2^d e^4π i x ω V_Pg f(2z), into the original equation then gives L^2f∫_ℝ^2d e^2π i Ω(z,z') - i π z'_1 z'_2π(z') dz' g = 2^d e^4π i x ω⟨ f, π(2z) Pg ⟩_L^2 and the result follows from <ref>. From here we find a spreading-type quantisation of a Weyl symbol directly, without needing to take the symplectic Fourier transform: Given σ∈ M^∞(ℝ^2d), the Weyl quantisation L_σ can be expressed as L_σ = 2^d ∫_ℝ^2dσ(z) α_-z(P) dz, or equivalently L_σ = ∫_ℝ^2dσ(z2) e^-π i x ·ωπ(z) P dz. We know that from the relation between the Weyl symbol and spreading function that L_σ = ∫_ℝ^2d e^iπ x·ωℱ_Ω (σ) (z)π(z) dz = ∫_ℝ^2d e^iπ x·ω∫_ℝ^2de^-2π i Ω(z,z')σ(z') dz' π(z) dz = ∫_ℝ^2dσ(z') ∫_ℝ^2d e^-2π i Ω(z,z') + iπ x·ωπ(z) dz dz'. Inserting the result of <ref> then gives L_σ = 2^d ∫_ℝ^2dσ(z') α_-z'(P) dz'. The second form follows from the identity <ref>. The above representation of an operator is also discussed in <cit.>. In this paper we consider operator modulations, and Λ-modulation invariant operators. The above quantisation intuition serves as a motivation of this approach; while in the case of Λ-translation invariant operators we have spreading quantisations in the form of <ref> of functions supported on the lattice Λ, while in this work we find Λ-modulation invariant operators are precisely those operators which are quantisations in the form <ref> of functions supported on the lattice Λ. § A MODULATION FOR OPERATORS To motivate the concept of a modulation for operators, we begin by considering translations of operators. We recall that a translation of an operator is defined by α_z(S) := π(z)Sπ(z)^*, and this α_z operation corresponds to a translation of the Weyl symbol. A cornerstone of QHA is the fact that the Fourier-Wigner transform interacts with the symplectic Fourier transform and convolutions analagously to the function case, namely that operator convolutions satisfy Fourier convolution identities with respect to the Fourier-Wigner transform. If we consider the modulation of a function as a translation of its Fourier transform, then one may reasonably assume that a modulation of an operator should be reflected in a translation of the Fourier-Wigner transform of the operator. Motivated by this, we define the operator modulation β_w (S) as follows: Let w∈ℝ^2d, and S∈ℳ^∞. Then β_w(S) := e^-π i w_1 w_2/2π(w/2)Sπ(w/2). 
This is precisely the operation corresponding to a symplectic modulation of the Weyl symbol: Given w∈ℝ^2d, and S∈ℳ^∞, σ_β_w(S) = M_w σ_S, where M_w is the symplectic modulation M_w F(z) = e^2π i Ω(z,w)F(z). It follows from (<ref>) that we equivalently have ℱ_W (β_w(S)) = ℱ_W (S)(z-w). As a modulation for operators, β_w shifts naturally arise when considering the twisted convolution, which is the Fourier-Wigner of the composition of two operators: ℱ_W (S · T)(z) = ∫_ℝ^2dℱ_W (S)(z')ℱ_W(β_z(T))(z')e^-2π i x'(ω-ω') dz' Recall that a Λ-translation invariant operator T∈ℳ^∞ satisfies T = π(λ) Tπ(λ)^*. We introduce the concept of a Λ-modulation invariant operator: An operator T∈ℳ^∞ is called Λ-modulation-invariant if T = e^-π i λ_1λ_2/2π(λ/2)Tπ(λ/2) for every λ∈Λ. The phase factor in <ref> arises due the use of time–frequency shitfs π(z) as opposed to the more suited symmetric time–frequency shifts ρ(z), in the translation picture the two coincide. Using the identity π(z)^*= e^-2π i x ωπ(-z), Λ-modulation-invariance condition is equivalent to the condition T = π(λ/2)Tπ(-λ/2)^*, or Tπ(-λ/2) = π(λ/2)T. While the Λ-translation invariant operators are those operators with Λ-periodic Weyl symbols, the Λ-modulation invariant operators are precisely the operators with Λ-invariant Fourier-Wigner transform by <ref>. Since Λ-modulation-invariance is equivalent to periodicity of the spreading function, these operators will not have any decay in spreading function and hence will not be in any Schatten class. We have seen that the canonical example of a Λ-translation invariant operator was the frame operator, which we can roughly think of as similar to the identity. The canonical Λ-modulation invariant operator, on the other hand, can be understood as “reflecting” the time-frequency concentration of a function in phase space. If we consider modulation invariance for the whole double phase space, as presented in <cit.>, we find the parity operator: The parity operator P:L^2(ℝ^d)→ L^2(ℝ^d) defined by Pf(t) := f(-t), is a ℝ^2d-modulation invariant operator. The result follows from a simple calculation using the characterisation <ref>: Pπ(z)f(t) = Pe^2π i ω tf(t-x) = e^-2π i ω tf(-t-x) = π(-z)Pf(t). where z=(x,ω). Discretising to a lattice gives another example similar to the discretisation of the identity to the frame operator in the case of Λ-translation invariant operators: Given some atom g∈ L^2(ℝ^d), the operator T = ∑_λ∈Λe^-π i λ_1λ_2/2π(λ/2)g⊗π(λ/2)^*g. is a Λ-modulation invariant operator. Let μ∈Λ. Then with T as above, and for convenience denoting g⊗ g =: S; β_μ(T) = ∑_λ∈Λe^-π i( μ_1μ_2 + λ_1λ_2)/2π(μ/2)π(λ/2)Sπ(λ/2)π(μ/2) = ∑_λ∈Λe^-π i( μ_1μ_2 + λ_1λ_2 + μ_1λ_2 + λ_1μ_2)/2π(μ+λ/2)Sπ(λ+μ/2) = ∑_λ∈Λe^-π i( μ_1 + λ_1)(μ_2+λ_2)/2π(μ+λ/2)Sπ(λ+μ/2) = T. We have seen that heuristically, Λ-translation invariant operators act similar to the identity, while Λ-modulation invariant operators act similar to a reflection in time-frequency concentration in phase space. As the composition of two reflection operators gives the identity, we find that the composition of two Λ-modulation invariant operators gives a Λ/2-translation invariant operator: Let S,T be two Λ-modulation invariant operators. Then the composition S∘ T is a Λ/2-translation invariant operator. Let S,T∈ℳ^∞ be Λ-modulation invariant operators. Then for λ∈Λ; π(λ/2)STπ(λ/2)^* = π(λ/2)Sπ(λ/2)π(λ/2)^*T(λ/2)^* = e^-π i λ_1λ_2π(λ/2)Sπ(λ/2)π(-λ/2)T(-λ/2) = ST where we simply used the unitarity of π(z) and the definition of Λ-modulation invariant operators. 
Hence ST is Λ/2-translation invariant. Since Λ◃Λ/2, the composition of Λ-modulation invariant operators is also a Λ-translation invariant operator. It follows then that the parity operator is in fact the only ℝ^2d-modulation invariant operator (up to a constant): If S∈ℳ^∞ is a ℝ^2d-modulation invariant operator, then S=c· P for some constant c∈ℂ. Let S∈ℳ^∞ be a ℝ^2d-modulation invariant operator. Then <ref> implies that ℱ_W(S)(z) = ℱ_W (S)(0), and so ℱ_W(S)(z) = c for some c∈ℂ. By the uniqueness of Fourier-Wigner transform, this implies S=c· P, since ℱ_W (P)=1. For a ℝ^2d-modulation invariant operator S∈ℒ(L^2), SPπ(z) = π(z)SP implies S=c· P by irreducibility of π(z). § DECOMPOSITIONS OF Λ-MODULATION INVARIANT OPERATORS §.§ Analysis and Synthesis of Λ-Modulation Invariant Operators The periodicity of the Fourier-Wigner transform of Λ-modulation invariant operators means they can be described in terms of a trigonometric decomposition: Given an operator T∈ℳ^∞ which is Λ-modulation invariant, the Fourier-Wigner transform ℱ_W(T) can be decomposed as ℱ_W (T)(z) = ∑_λ^∘∈Λ^∘σ_T(λ^∘) e^2π i Ω(λ^∘,z). Furthermore, the mapping T↦{σ_T(λ^∘)}_Λ^∘ is a Gelfand triple isomorphism; ℱ_W (T) ∈(M^1,L^2,M^∞)(ℝ^2d/Λ) {σ_T(λ^∘)}_Λ^∘∈( ℓ^1,ℓ^2,ℓ^∞)(Λ^∘). By the definition of Λ-modulation-invariant along with <ref>, the Fourier-Wigner transform of some Λ-modulation invariant T∈ℳ^∞ is a Λ-periodic function. By <ref>, the trigonometric decomposition follows. To show the Gelfand triple isomorphism, the space M^1(ℝ^2d/Λ) is equal to 𝒜(ℝ^2d/Λ) since ℝ^2d/Λ is compact (cf. Lemma 4.1, <cit.>), and hence the Fourier series is absolutely convergent, and conversely an absolutely convergent Fourier series implies ℱ_W(T)∈𝒜(ℝ^2d/Λ). The M^1(ℝ^2d/Λ) and M^∞(ℝ^2d/Λ) results then follow. The L^2(ℝ^2d/Λ) case is Parseval's identity. By <ref>, we can formulate <ref> as a statement on the support of the Weyl symbol: Given an operator T∈ℳ^∞ which is Λ-modulation-invariant, the Weyl symbol σ_T is given by σ_T(z) = ∑_λ^∘∈Λ^∘σ_T(λ^∘)δ_λ^∘(z), where the sum converges in the weak-* topology of M^∞, ie the Weyl symbol is supported on the adjoint lattice. We have already seen that operators of the type T = ∑_λ∈Λe^-π i λ_1λ_2/2π(λ/2)g⊗π(λ/2)^*g are Λ-modulation invariant. However on the converse, the surjectivity of the periodisation operator onto 𝒜(ℝ^2d/Λ) informs us that in fact any “smooth” Λ-modulation invariant operator is of this form, for some operator (not necessarily rank-one) in ℳ^1: Given an operator T∈ℳ^∞ which is Λ-modulation invariant, such that ℱ_W (T)∈𝒜(ℝ^2d/Λ), T can be expressed as T = ∑_λ∈Λe^-π i λ_1λ_2/2π(λ/2) S π(λ/2) for some S∈ℳ^1. Since T is Λ-modulation-invariant and ℱ_W (T)∈𝒜(ℝ^2d/Λ), the surjectivity of the periodisation operator onto 𝒜(ℝ^2d/Λ) means there must exist some g∈ M^1(ℝ^2d) such that ∑_λ∈Λ T_λ g = ℱ_W (T). Defining ℱ_W (S) = g then gives an appropriate operator. Motivated by our previous results and operator-periodisation, we introduce the following. For T∈ℳ^∞, the Fourier-Wigner periodisation of T with respect to Λ is given by ∑_λ∈Λβ_λ(T). The definition above is motivated by the fact that the Weyl correspondence sends ∑_λ∈Λβ_λ(T) to ∑_λ∈ΛM_λσ_T, however Equation (<ref>) shows that ∑_λ∈ΛM_λσ_T = ∑_λ∈ΛT_λℱ_Ωσ_T = ∑_λ∈ΛT_λℱ_W(T), ∀ z∈ℝ^2d. Therefore, ∑_λ∈ΛM_λσ_T is exactly the periodisation of the Fourier-Wigner transform of T, which corresponds to the operator ∑_λ∈Λβ_λ(T). 
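For the reader's convenience, we record an elementary observation (a direct computation with the cocycle c, along the lines of the proof of the proposition above): the operator modulations compose additively. Indeed, π(z/2)π(w/2) = e^-π i z_1· w_2/2π(z+w/2) and π(w/2)π(z/2) = e^-π i w_1· z_2/2π(z+w/2), so collecting the phase factors gives β_z(β_w(S)) = e^-π i (z_1+w_1)·(z_2+w_2)/2π(z+w/2)Sπ(z+w/2) = β_z+w(S) for all z,w∈ℝ^2d and S∈ℳ^∞. In particular, whenever the Fourier-Wigner periodisation ∑_λ∈Λβ_λ(T) converges, say in the weak-* topology of ℳ^∞, it is automatically a Λ-modulation invariant operator, since β_μ(∑_λ∈Λβ_λ(T)) = ∑_λ∈Λβ_μ+λ(T) = ∑_λ∈Λβ_λ(T) for every μ∈Λ.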
§.§ Sampled Convolutions Recalling the definition of the convolution of two operators: T ⋆ S(z) := tr(Tα_z(S)), it is an interesting question in which cases one can reconstruct an operator T from samples of T⋆ S on, for example, a lattice. This problem appears in several settings: In the case of two rank–one operators T=f⊗ f, S=g⊗ g for f,g∈ L^2(ℝ^d), the operator convolution gives the spectrogram; (T ⋆ S)(z) = |V_g f(z)|^2, while for a rank one S=g⊗ g but arbitrary T, the discretisation of T⋆ S corresponds to the diagonal of the Gabor matrix T ⋆ S(λ) = ⟨ Tπ(λ)g, π(λ)g ⟩_𝒮^',𝒮. In the former case it is known (cf. <cit.>) that there exists no g such that T ↦ S⋆ T|_Λ is injective for the space of Hilbert-Schmidt operators. On the other hand in the latter case, an operator T∈𝔖^' with a symbol in some Paley-Wiener space can be reconstructed from a discretisation of S ⋆ T, for an appropriately chosen g and Λ <cit.>. This line of inquiry motivates Λ-modulation invariant operators, since such operators can be reconstructed from samples of convolution with an appropriate S. In what follows we let K denote a compact neighborhood of the origin not containing any λ^∘∈Λ^∘ apart from the origin. Let S∈ℳ^1 such that σ_S(0)≠ 0, and supp(σ_S)⊂ K. Then given any Λ-modulation invariant operator T∈ℳ^∞; T = 1/σ_S(0)∑_λ^∘∈Λ^∘ T⋆ S(λ^∘)· e^-4iπλ^∘_1λ^∘_2π(2λ^∘)P We will use the fact that the convolution of two operators is equal to the convolution of their Weyl symbols. By <ref>, the Weyl symbol of T is supported on the lattice Λ^∘, and can be expressed as σ_T(z) = ∑_λ^∘∈Λ^∘σ_T(λ^∘)δ_λ^∘(z). Given our conditions on S and K then follows that (T ⋆ S)(λ^∘) = (σ_T * σ_S) (λ^∘) = σ_T (λ^∘)·σ_S (0), and so we can rewrite σ_T(z) = 1/σ_S(0)∑_λ^∘∈Λ^∘ T ⋆ S(λ^∘) δ_λ^∘(z). By the second statement of <ref>, the Weyl quantisation of δ_z is the operator e^-4iπ x·ωπ(2z)P, and so T = σ_S(0)∑_λ^∘∈Λ^∘ T ⋆ S(λ^∘) e^-4iπ x·ωπ(2z)P. §.§ A Janssen Type Representation and Boundedness of Λ-Modulation Invariant Operators For the canonical example of Λ-translation invariant operators, the frame operator S_g,Λ, the Janssen's representation can be interpreted as Poisson's summation formula applied to the periodisation of the Weyl symbol of g⊗ g. Since the spreading function of every Λ-modulation invariant operator T with ℱ_W (T)∈𝒜(ℝ^2d/Λ) can be written as the periodisation of the spreading function of some S∈ℳ^1, we can follow the same approach for Λ-modulation invariant operators, but we find a subtle difference in the lattices of reconstruction due to <ref>. We begin by considering the spreading quantisation of e^2π i Ω(z,z'): For any z∈ℝ^2d, the Fourier-Wigner transform of e^-π i x·ωπ(z)P is the function f(z')=2^-d e^π i Ω(z,z'). From the second form in <ref>, the Weyl symbol of e^-π i x·ωπ(z)P is the Dirac delta distribution δ_2z(z'). Using the correspondence of Weyl symbol and Fourier-Wigner via the symplectic Fourier transform, the Fourier-Wigner of e^-π i x·ωπ(z)P is thus given by ℱ_W(e^-π i x·ωπ(z)P) = ℱ_Ω (δ_2z)(z') = 2^-d e^π i Ω(z,z'). Note that for some λ∈Λ, the Fourier-Wigner transform of e^-π i λ_1·λ_2π(λ)P is Λ/2-periodic as opposed to merely Λ-periodic, as a result of <ref>, and we see the result of this in the Janssen type representation of Λ-modulation invariant operators: Let S∈ℳ^1. Then ∑_λ∈Λβ_λ (S) = 2^d/|Λ|∑_λ^∘∈Λ^∘σ_S(λ^∘)· e^-4π i λ^∘_1·λ^∘_2 π(2λ^∘)P, or equivalently ∑_λ∈Λβ_λ (S) = 2^d/|Λ|∑_λ^∘∈Λ^∘σ_S(λ^∘)·α_λ^∘(P). 
We consider the Fourier-Wigner transform of the left hand side sum; ℱ_W ( ∑_λ∈Λβ_λ (S) )(z) = ∑_λ∈Λℱ_W (β_λ (S))(z) = ∑_λ∈Λ T_λ(ℱ_W (S))(z). By <ref>, the sum converges absolutely for every z∈ℝ^2d, for S∈ℳ^1. The symplectic Poisson summation formula then gives ∑_λ∈Λ T_λ(ℱ_W (S))(z) = 1/|Λ|∑_λ^∘∈Λ^∘ℱ_Ω (ℱ_W (S))(λ^∘)e^2π i Ω(λ^∘,z) = 1/|Λ|∑_λ^∘∈Λ^∘σ_S(λ^∘)e^2π i Ω(λ^∘,z). The result the follows from <ref>, and the equivalent form from <ref>. We can also show the boundedness of Λ-modulation-invariant operators on shift-invariant, and in particular modulation spaces: Let T∈ℳ^∞ be a Λ-modulation-invariant operator, such that ℱ_W(T) ∈𝒜(ℝ^2d/Λ). Then T is bounded on all modulation spaces M^p,q(ℝ^d), with T_ℒ(M^p,q)≤ C_Λℱ_W(T) _𝒜, where C_Λ = 2^d/|Λ|. Since ℱ_W(T) ∈𝒜(ℝ^2d/Λ), by <ref> and <ref> we can write T as T = C_Λ∑_λ^∘∈Λ^∘σ_T(λ^∘)· e^-4π i λ^∘_1·λ^∘_2π(2λ^∘)P, where {c_λ^∘}_Λ^∘∈ℓ^1(Λ^∘). Since the modulation spaces M^p,q(ℝ^d) are shift invariant, for any f∈ M^p,q(ℝ^d); Tf_M^p,q = C_Λ∑_λ^∘∈Λ^∘σ_T(λ^∘)· e^-4π i λ^∘_1·λ^∘_2π(2λ^∘)Pf_M^p,q ≤ C_Λ∑_λ^∘∈Λ^∘ |σ_T(λ^∘)| π(2λ^∘)Pf_M^p,q = C_Λ∑_λ^∘∈Λ^∘ |σ_T(λ^∘)| f_M^p,q = C_ΛT_𝒜f_M^p,q. § A CALCULUS FOR Λ-MODULATION INVARIANT OPERATORS We now consider the composition of Λ-modulation-invariant operators. We reiterate that these will no longer be Λ-modulation-invariant, but rather Λ/2-translation-invariant. To prove the results of this section we will consider the Weyl symbol and Fourier-Wigner transform of these operators. However, since the Weyl product and Twisted convolution may not be well defined for arbitrary operators in ℳ^∞, we begin with a technical lemma: Let S,T∈ℳ^∞ such that σ_S ♮ℱ_W (T) is well-defined. Then σ_ST = σ_S ♮ℱ_W (T). Since the Weyl symbol of an operator is the symplectic Fourier transform of the Fourier-Wigner transform, we use the composition formula for the Fourier-Wigner transform, to find: σ_ST(z) = ℱ_Ω( ℱ_W (S)♮ℱ_W (T))(z) = ∫_ℝ^2d∫_ℝ^2dℱ_W (S)(z'-z”)ℱ_W (T)(z”)e^-2π i Ω(z',z”) dz” e^-2π i Ω(z,z') dz' = ∫_ℝ^2dℱ_W (T)(z”) ∫_ℝ^2dℱ_W (S)(z'-z”)e^-2π i Ω(z-z”,z') dz' dz” = ∫_ℝ^2dℱ_W (T)(z”) ℱ_Ω(T_z”ℱ_W (S))(z-z”) dz” = ∫_ℝ^2dσ_S (z-z”)ℱ_W (T) (z”)e^-2π i Ω(z,z”) dz” = σ_S ♮ℱ_W (T) (z) as required. One can also use the identity in <ref> to calculate the Weyl product of Weyl symbols. Let S,T∈ℳ^∞ be two Λ-modulation invariant operators, and let Λ·Λ^∘⊂ℤ, that is λ·λ^∘=n∈ℤ for every λ∈Λ, λ^∘∈Λ^∘. Then S and T satisfy the perfect spreading function calculus ℱ_W(S)·ℱ_W(T) = σ_ST. Let S,T∈ℳ^∞ be Λ-modulation invariant operators where Λ·Λ^∘⊂ℤ. Then by <ref> we can consider the Fourier series expansion of the Fourier-Wigner transform ℱ_W (S)(z) = ∑_λ^∘∈Λ^∘ c_λ^∘ e^-2π i Ω(z,λ^∘) for some sequence {c_λ^∘}_λ^∘∈Λ^∘∈ℓ^∞(Λ^∘), with the sum converging in the distributional sense. Conversely, by <ref> the Weyl symbol can be expressed as σ_T = ∑_λ^∘∈Λ^∘ d_λ^∘δ_λ^∘, for some {d_λ^∘}_λ^∘∈Λ^∘∈ℓ^∞(Λ^∘), where again in the sum converges in the distributional sense. Combining these with <ref> then gives σ_S ♮ℱ_W (T)(z) = ∑_λ,λ'∈Λ^∘ c_λ⟨ d_λ'e^-2π i (Ω(z-z',λ') + Ω(z,z')), δ_λ(z') ⟩_𝒮,𝒮^' = ∑_λ,λ'∈Λ^∘ c_λ· d_λ'· e^-2π i (Ω(z,λ') + Ω(z,λ))· e^-2π i σ(λ,λ'). The condition Λ·Λ^∘⊂ℤ is then precisely the condition required to remove the final twisting term, and so σ_ST(z) = ∑_λ,λ'∈Λ^∘ c_λ· d_λ'· e^-2π i (Ω(z,λ') + Ω(z,λ)) = ℱ_W (S)·ℱ_W (T)(z). We collect some simple corollaries of the perfect spreading function calculus. 
Firstly, since a bounded operator is uniquely determined by its spreading function, we have the commutativity of Λ-modulation invariant operators: If S,T and Λ are as in <ref>, then ST=TS. Secondly, using <ref> along with the fact the symplectic Fourier transform is its own inverse, we can reformulate <ref> as a relation of Weyl symbols: For S,T and Λ as in <ref>; σ_S * σ_T = ℱ_W (ST). These characterisations of Λ-modulation invariant operators gives a way to identify each Λ-modulation invariant operator with a Λ/2-translation invariant operator by composing with the parity operator, and vice versa: The map Θ: S↦ SP is a self-inverse isometric isomorphism on ℳ^∞. It identifies Λ-modulation invariant operators with Λ/2-invariant operators. Moreover, in terms of symbols ℱ_W (Θ(S)) = σ_S. We will only show that Θ is an isometry on and ℳ^∞, as this will be enough to show that Θ is a self-inverse isometric isomorphism on these spaces following Lemma <ref>. Let S∈ℳ^∞, then it follows from the fact that P_|M^1(ℝ^d):M^1(ℝ^d)→ M^1(ℝ^d) is an isometric isomorphism via Lemma <ref> that we have the following: SP_ℳ^∞ = sup{|M^∞,M^1SPfg|:f_M^1=g_M^1=1 } = sup{|M^∞,M^1Sfg|: f_M^1=g_M^1=1 } =S_ℳ^∞. It follows that Θ is an isometric isomorphism on ℳ^∞. Let S be a Λ-modulation invariant operator. We recall that the parity operator P is a ℝ^2d-modulation invariant operator, and consequently a Λ-modulation invariant operator for any Λ. Hence by <ref>, SP is a Λ/2-translation invariant operator. Conversely, given some Λ/2-translation invariant operator T, the operator TP is a Λ-modulation invariant operator, since TPπ(λ2) = Tπ(-λ2)P = π(-λ2)TP, and so the result follows from <ref>. Moreover since the Weyl symbol of P is the Dirac delta distribution, from <ref>, σ_SP = ℱ_W (S). § LAMBDA-MODULATION INVARIANT OPERATORS VIA FINITE-RANK OPERATORS GENERATED BY THE HEISENBERG MODULE We revisit the characterisation results of Section <ref> now equipped with the correspondence result of Theorem <ref>. Just like how we consider the Gabor frame operator S_g,h,Λ the canonical Λ-translation invariant operator with periodisation ∑_λ∈Λα_λ(g⊗ h), we analogously find that Λ/2-periodisation of g⊗ h pre-composed with the parity operator gives us: ∑_λ∈Λα_λ/2(g⊗ h)P = ∑_λ∈Λβ_λ(Pg⊗ h)=S_g,h,Λ/2P. Since P is unitary in ℒ(L^2(ℝ^d)), Equation (<ref>) motivates us to consider operators of the form M = ∑_λ∈Λβ_λ(g⊗ h) as the canonical form of Λ-modulation invariant operators. Indeed we shall show, just like in the case of Section <ref>, that Λ-modulation invariant operators can be characterised in terms of Fourier-Wigner periodisation of finite-rank operators with generators coming from the Heisenberg module. Fix any lattice Λ⊆ℝ^2d. If ψ_1,ψ_2∈ L^2(ℝ^d), then S_Pψ_1,Pψ_2,Λ = PS_ψ_1,ψ_2,ΛP=S_ψ_1,ψ_2,Λ. As a corollary, the parity operator restricts to an isometric isomorphism P_|ℰ_Λ(ℝ^d):ℰ_Λ(ℝ^d)→ℰ_Λ(ℝ^d) on the Heisenberg module. We compute for any g∈ L^2(ℝ^d): S_Pψ_1,Pψ_2,Λg = ∑_λ∈ΛL^2gπ(λ)Pψ_1π(λ)Pψ_2 = P(∑_λ∈ΛL^2Pgπ(-λ)ψ_1π(-λ)ψ_2 ) = PS_ψ_1,ψ_2,ΛPg =S_ψ,Λg. Since P is a unitary map in ℒ(L^2(ℝ^d)), it follows in particular that for any ψ∈ L^2(ℝ^d): S_Pψ,Λ_ℒ(L^2)^2 = sup_g_L^2=1L^2PS_ψ,ΛPgPS_ψ,ΛPg = sup_g_L^2=1L^2S_ψ,ΛPgS_ψ,ΛPg =sup_g_L^2=1L^2S_ψ,ΛgS_ψ,Λg = S_ψ,Λ_ℒ(L^2)^2. It then follows from (<ref>) and (<ref>) that if f∈ℰ_Λ(ℝ^d), then: Pf_ℰ_Λ(ℝ^d)^2=S_Pf,Λ_ℒ(L^2) = S_f,Λ = f_ℰ_Λ(ℝ^d)^2. The above only shows that P is an isometry on ℰ_Λ(ℝ^d), we still have to show that Pf∈ℰ_Λ(ℝ^d). 
To do this, we need to find a sequence {g_n}_n∈ℕ⊆ M^1(ℝ^d) such that Pf-g_n_ℰ_Λ(ℝ^d)→ 0. Let {f_n}_n∈ℕ⊆ M^1(ℝ^d) such that f-f_n_ℰ_Λ(ℝ^d)→ 0. Due to Lemma <ref>, we can choose g_n:= Pf_n∈ M^1(ℝ^d) for each n∈ℕ. Now it follows from the isometry property (<ref>) that Pf-g_n_ℰ_Λ(ℝ^d)=Pf-Pf_n_ℰ_Λ(ℝ^d)=f-f_n_ℰ_Λ(ℝ^d)→ 0 as required. There exists an n_0∈ℕ such that every Λ-modulation invariant operator in ℳ^∞ is the norm-limit of Fourier-Wigner operator-periodisations of rank-n_0 operators generated by functions coming from ℰ_Λ/2(ℝ^d). Fix n_0∈ℕ as in Theorem <ref> applied to Λ/2-translation invariant operators. Let T∈ℳ^∞ be a Λ-modulation invariant operator. It follows from Theorem <ref> that there exists an Λ/2-translation invariant operator S∈ℳ^∞ such that T=SP. It follows from Theorem <ref> that there exists a sequence of operators {S_m}_m∈ℕ⊆ℳ^∞ where S-S_m_ℳ^∞→ 0. Furthermore for each m, S_m=∑_i=1^n_0 (S_ψ_1,h_i,Λ/2) where ψ_1,...,ψ_n,h_1,...,h_n_0∈ℰ_Λ/2(ℝ^d). But then it follows from Equation (<ref>) that S_mP = ∑_i=1^n_0β_λ(Pψ_i⊗ h_i)=∑_i=1^n_0β_λ(g_i⊗ h_i), where g_i:=Pψ_i∈ℰ_Λ(ℝ^d) by Lemma <ref>. Therefore each S_mP is the Fourier-Wigner operator-periodisation of a rank-n operator generated by functions coming from ℰ_Λ/2(ℝ^d). Lastly, we have from Theorem <ref> that T-S_mP_ℳ^∞=SP-S_mP_ℳ^∞=S-S_m_ℳ^∞→ 0. Following the results for Λ-translation invariant operators, we shall also consider Λ-modulation invariant operators in ℒ(L^2(ℝ^d)), which we shall denote by ℳ_Λ(L^2(ℝ^d)):= {T∈ L^2(ℝ^d): T=β_λ(T), ∀λ∈Λ}. Here we have a crucial observation regarding the map Θ of Theorem <ref>, it is well-defined as a map in L^2(ℝ^d), in fact we have the following result. Θ is a self-inverse isometric isomorphism in ℒ(L^2(ℝ^d)). As a map in ℒ(L^2(ℝ^d)) it is also strong-operator and weak-operator continuous. Finally, it identifies ℳ_Λ(L^2(ℝ^d)) with ℒ_Λ/2(L^2(ℝ^d)). We shall only show that Θ is an isometry as a map in ℒ(L^2(ℝ^d)). Let S∈ℒ(L^2(ℝ^d)), because P∈ℒ(L^2(ℝ^d)) is unitary, we obtain SP_ℒ(L^2)^2=sup_g_L^2=1L^2SPgSPg = sup_Pg_L^2=1L^2SgSg=S_ℒ(L^2)^2. In light of Lemma (<ref>), this is enough to show that Θ is an isometric isomorphism on ℒ(L^2(ℝ^d)). Next, let {T_i}_i∈ I be a net in ℒ(L^2(ℝ^d)) that converges to T∈ℒ(L^2(ℝ^d)) in the weak-operator topology, that is: L^2T_ifg→L^2Tfg for all f,g∈ L^2(ℝ^d). It follows from unitarity of P∈ℒ(L^2(ℝ^d)) that L^2T_iPfg→L^2TPfg for all f,g∈ L^2(ℝ^d), equivalently Θ(T_i)→Θ(T) in the weak-operator topology. The proof for continuity of Θ:L^2(ℝ^d)→ L^2(ℝ^d) in strong-operator topology is similar. The proof showing the identification of ℳ_Λ(L^2(ℝ^d)) with ℒ_Λ/2(L^2(ℝ^d)) via Θ is the same as in Theorem <ref>. There exists an n_0∈ℕ such that every operator in ℳ_Λ(L^2(ℝ^d)) is a weak-operator, or strong-operator limit of Fourier-Wigner operator-periodisations of rank-n_0 operators generated by functions coming from ℰ_Λ/2(ℝ^d). Fix n_0∈ℕ as in Theorem <ref> applied to Λ/2-translation invariant operators. Let T∈ℳ_Λ(L^2(ℝ^d)), then by <ref>, there exists a Λ/2-translation invariance operator S∈ℒ(L^2(ℝ^d)) such that T=SP=Θ(S). We know from Theorem <ref> that there exists a sequence {S_m}_m∈ℕ∈ℒ(L^2(ℝ^d)) such that S_m→ S in weak-operator or strong-operator topology. We can proceed similarly to the proof of Corollary <ref> to show that each S_mP=Θ(S_m) is a Fourier-Wigner operator-periodisation of a rank-n_0 operator generated by functions coming from ℰ_Λ/2(ℝ^d). 
We are now done since Θ is continuous with respect to the weak-operator and strong-operator topologies, and we obtain Θ(S_m)→Θ(S)=T in either of these topologies. Fix a lattice Λ and consider the Heisenberg module ℰ_Λ(ℝ^d), it actually follows from the proof of Lemma <ref> that the integer n can be fixed to be the minimum number of elements required to generate ℰ_Λ(ℝ^d) as a finitely generated projective left A-module. We couple this with the fact that the generators of ℰ_Λ(ℝ^d) as a left A-module are exactly multi-window Gabor frames for L^2(ℝ^d) <cit.>, and that the existence of n-multi-window Gabor frames on L^2(ℝ^d) with atoms from ℰ_Λ(ℝ^d) implies that the volume of the lattice Λ must necessarily satisfy: |Λ|≤ n <cit.>; we then have the following Lemma. For a fixed Λ, if n∈ℕ is the minimum number of elements required to generate ℰ_Λ(ℝ^d) as a left A-module, then |Λ|≤ n. We can interpret this result to mean that the sparser the lattice Λ is (i.e. larger |Λ|), the greater number of elements are required to generate ℰ_Λ(ℝ^d). Throughout this study, we have seen that there is a fixed integer n∈ℕ that would allow us to characterise Λ-translation invariant operators as operator-periodisations of rank-n operators. We have just discussed that this very same n can be fixed to be the minimum number required to generate ℰ_Λ(ℝ^d). Following Lemma (<ref>), we see that there is a discrepancy between the lowest possible rank of operators that we can use to characterise Λ-translation invariant operators via operator-periodisation versus that of Λ-modulation invariant operators via Fourier-Wigner operator-periodisation. For a fixed lattice Λ⊆ℝ^d, we have the following: * If n is the minimum number that makes the characterisation of Λ-translation invariant operators as in Theorem <ref> and Theorem <ref> true, then |Λ|≤ n. * If n_0 is the minimum number the makes the characterisation of Λ-modulation invariant operators as in Theorem <ref> and Theorem <ref> true, then |Λ|/4^d≤ n_0. * We know that n is the same as the minimum number of generators required to generate ℰ_Λ(ℝ^d), therefore it follows from Lemma <ref> that |Λ|≤ n. * The proof here is similar to Item 1, however n_0 is the minimum number of generators required to generate ℰ_Λ/2(ℝ^d). Therefore it follows from <ref> that |Λ/2 |≤ n_0. Since we always consider lattices here to be full–rank lattices, there must exist some A∈GL(2d,ℝ) such that Λ= Aℤ^2d, therefore |Λ/2| = |(A/2)|=(1/2)^2d|(A)|=(1/4)^d|Λ|. We then obtain |Λ|/4^d≤ n_0. We see from the result above that Λ-modulation invariant operators generally require less generators from the Heisenberg module to be represented as a Fourier-Wigner periodisation, compared to representations of Λ-translation invariant operators via operator-periodisation. In fact, in one special case, we have an exact result concerning these generators. Let {e_1,...,e_2d} be the standard basis for ℝ^2d. We say that a lattice Λ=Aℤ^2d, A∈GL(2d,ℝ) is non-rational if there exists i,j∈{1,...,2d} such that Ω(Ae_i,Ae_j) is not rational. A curious result concerning non-rational lattices due to <cit.> states that the Heisenberg module ℰ_Λ(ℝ^d) with non-rational lattice Λ satisfying n-1≤ |Λ|<n for n∈ℕ has exactly n generators. Therefore, in terms of the integer ceiling function · :ℝ→ℤ, we have the following result, by an argument similar to Proposition <ref>. 
If Λ is non-rational: * Every Λ-translation invariant operator is characterised as a topological limit, in the sense of Theorem <ref> and Theorem <ref>, of operator-periodisations of exactly rank-⌈|Λ|⌉ operators generated by ℰ_Λ(ℝ^d). * Every Λ-modulation invariant operator is characterised as a topological limit, in the sense of Theorem <ref> and Theorem <ref>, of Fourier-Wigner operator-periodisations of exactly rank-⌈|Λ/2|⌉ operators generated by ℰ_Λ/2(ℝ^d). § DISCUSSION The interesting appearance of the Λ/2 lattice in the correspondence between translation and modulation invariant operators seems peculiar at first glance, as one may initially suspect a Λ to Λ type correspondence. However, the Λ/2 arises directly as a result of using the Weyl quantisation, as considered in <ref>. In fact, the need for automorphisms of the type x↦ 2x can cause problems for Weyl quantisation on compact or discrete locally compact abelian groups. However, since in this work we consider phase space as the underlying group, we are free to scale lattices as required. Nonetheless, it is instructive to consider how the concept of modulations looks for quantisation schemes other than Weyl. Consider Cohen's class quantisations, that is to say those defined in terms of a convolution with the Wigner distribution: ⟨ (a⋆ L_σ)f, g⟩ = ⟨ a*σ, W(f,g)⟩. Quantisation schemes of this type include τ-Weyl quantisation, itself encompassing Kohn-Nirenberg quantisation and operators “with right symbol”, as well as the Born-Jordan quantisation (cf. <cit.>). Since these quantisation schemes relate to Weyl quantisation via a convolution, operator translations given by α_z correspond to translations on the symbols of all of these quantisation schemes as a result of the Weyl case; ⟨π(z)(a⋆ L_σ)π(z)^*f, g⟩ = ⟨ T_z(a*σ), W(g,f) ⟩ = ⟨ a* T_zσ, W(g,f) ⟩. However, considering modulations instead of translations, we clearly lose this universality. For the τ-Weyl quantisation with τ≠ 1/2, a_τ = 2^d/|2τ-1| e^2π i (2/(2τ-1)) x·ω, so we have the identity σ_S^τ = 2^d/|2τ-1| e^2π i (2/(2τ-1)) x·ω * σ_S (cf. <cit.>). Modulation in the sense of τ-Weyl quantisation must then correspond to a translation of ℱ_Ω (σ_S^τ) = 2^d/|2τ-1|ℱ_Ω (e^2π i (2/(2τ-1)) x·ω) ·ℱ_Ω (σ_S). In this sense Weyl quantisation, where the planar wave disappears, can be seen as the natural setting for quantum time–frequency analysis. § ACKNOWLEDGEMENTS The authors would like to thank Professor Franz Luef for helpful advice.
http://arxiv.org/abs/2406.09105v1
20240613133149
INS-MMBench: A Comprehensive Benchmark for Evaluating LVLMs' Performance in Insurance
[ "Chenwei Lin", "Hanjia Lyu", "Xian Xu", "Jiebo Luo" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.CL", "cs.LG" ]
§ ABSTRACT Large Vision-Language Models (LVLMs) have demonstrated outstanding performance in various general multimodal applications such as image recognition and visual reasoning, and have also shown promising potential in specialized domains. However, the application potential of LVLMs in the insurance domain—characterized by rich application scenarios and abundant multimodal data—has not been effectively explored. There is no systematic review of multimodal tasks in the insurance domain, nor a benchmark specifically designed to evaluate the capabilities of LVLMs in insurance. This gap hinders the development of LVLMs within the insurance domain. In this paper, we systematically review and distill multimodal tasks for four representative types of insurance: auto insurance, property insurance, health insurance, and agricultural insurance. We propose INS-MMBench, the first comprehensive LVLMs benchmark tailored for the insurance domain. INS-MMBench comprises a total of 2.2K thoroughly designed multiple-choice questions, covering 12 meta-tasks and 22 fundamental tasks. Furthermore, we evaluate multiple representative LVLMs, including closed-source models such as GPT-4o and open-source models like BLIP-2. This evaluation not only validates the effectiveness of our benchmark but also provides an in-depth performance analysis of current LVLMs on various multimodal tasks in the insurance domain. We hope that INS-MMBench will facilitate the further application of LVLMs in the insurance domain and inspire interdisciplinary development. Our dataset and evaluation code are available at <https://github.com/FDU-INS/INS-MMBench>. § INTRODUCTION In recent years, Large Language Models (LLMs) have demonstrated remarkably powerful semantic understanding and conversational capabilities <cit.>, profoundly impacting human work and life. Building on this foundation, Large Vision-Language Models (LVLMs) have taken a further step by mapping and aligning visual and textual features, enabling the processing of and interaction with multimodal data <cit.>. Researchers have found that LVLMs exhibit exceptional performance in general tasks such as image recognition, document parsing, and OCR processing <cit.>. Beyond exploring general capabilities, researchers have also begun to apply LVLMs to various specialized domains such as healthcare <cit.>, autonomous driving <cit.>, and social media content analysis <cit.>. By exploring the capabilities of LVLMs in specialized domains through qualitative and quantitative methods, these studies have demonstrated a range of potential applications. Insurance, as a discipline encompassing numerous multimodal application scenarios, involves extensive use of multimodal data and computer vision algorithms in its actual operations <cit.>. This offers vast potential for the integration of LVLMs with the insurance industry. For instance, in auto insurance, analyzing images of damaged vehicles can enable quick assessments and accurate estimations of damage <cit.>. Similarly, in property insurance, analyzing images of buildings can help evaluate potential risks <cit.>.
However, existing research <cit.> has only qualitatively analyzed the application of LVLMs in the insurance domain, without systematically organizing related multimodal tasks or constructing domain-specific benchmarks. This has hindered the in-depth evaluation and promotion of LVLMs' capabilities within the insurance domain. To address this challenge, we introduce INS-MMBench, the first comprehensive LVLMs benchmark for the insurance domain (see Figure <ref>). For task design, we systematically organize and refine multimodal tasks across four representative types of insurance: auto, property, health, and agricultural insurance. Using a bottom-up hierarchical task definition methodology, we construct a total of 12 meta-tasks and 22 fundamental tasks, covering key insurance stages such as underwriting, risk monitoring, and claim processing. For data collection, we search and process datasets from multiple open-source channels, selecting datasets with high scenario relevance, task relevance, and data availability. For benchmark construction, INS-MMBench includes a total of 2.2K thoroughly designed multiple-choice visual questions. Such format facilitates convenient and objective analysis of evaluation results. These questions are formulated manually and the distractor options are generated with the help of GPT-4o. Furthermore, we select 10 LVLMs for evaluation and conduct a comprehensive analysis of the results. The key findings from the evaluation are as follows: (1) GPT-4o performs the best among all models, scoring 72.91/100. It is also the only model to score over 70, reflecting the challenging nature of the INS-MMBench; (2) There are significant differences in LVLMs' performance across different insurance types, with better results in auto insurance and health insurance compared to property insurance and agricultural insurance; (3) LVLMs exhibit marked differences in performance across different meta-tasks, closely related to the task type and the image type; (4) The gap between open-source and closed-source LVLMs is narrowing, with some open-source models now approaching or even surpassing the capabilities of closed-source models in some tasks; (5) The primary reasons for LVLMs' errors on the INS-MMBench are lack of knowledge and understanding in the insurance field, as well as perception errors. Overall, the contributions of our work are as follows: * We propose INS-MMBench, the first LVLMs benchmark for the insurance domain, which includes a total of 2.2K multiple-choice visual questions covering four types of insurance (auto, property, health, and agricultural insurance), 12 meta-tasks and 22 fundamental tasks. * We conduct an in-depth evaluation of 10 LVLMs, including 7 proprietary and 3 open-source models, representing the first quantitative assessment of LVLMs' capabilities in the insurance domain. * We conduct a further analysis of the evaluation results, providing insights into the potential applications of LVLMs in the insurance domain. This analysis also offers a reference for understanding the opportunities and challenges associated with LVLMs in this sector. § RELATED WORKS §.§ Large Vision-Language Models With the rapid development of Large Language Models (LLMs) <cit.>, researchers are leveraging the powerful generalization capabilities of these pre-trained LLMs for processing and understanding multimodal data <cit.>. A key area of focus is the use of Large Vision-Language Models (LVLMs) for visual inputs. 
LVLMs employ visual encoders and visual-to-language adapters to encode the visual features from image data and align these features with textual features. The combined features are then processed by pre-trained LLMs, leading to significant advancements in visual recognition and understanding <cit.>. Various open-source and closed-source LVLMs are continuously emerging. In the realm of open-source models, notable examples include LLaMA-Adapter <cit.>, LLaVA <cit.>, BLIP-2 <cit.>, MiniGPT-4 <cit.>, and InternVL <cit.>. These models have successfully integrated visual and textual modalities, achieving commendable results. In the closed-source domain, representative models include GPT-4o <cit.>, GPT-4V <cit.>, GeminiProVision <cit.>, and Qwen-VL <cit.>, all of which have demonstrated outstanding performance in numerous tests and evaluations <cit.>. We intend to evaluate both open-source and closed-source LVLMs to verify the capability of different models in the insurance domain. §.§ Benchmarks for Large Vision-Language Models As research into LVLMs intensifies, an increasing number of researchers are proposing benchmarks to evaluate the capabilities of models <cit.>. Based on the scope of capability evaluation, these studies can be categorized into three types: task-specific benchmarks, comprehensive benchmarks, and domain-specific benchmarks. Comprehensive benchmarks are characterized by their breadth and generality. Researchers construct these benchmarks by defining and categorizing the general capabilities and tasks of LVLMs, resulting in a comprehensive and wide-ranging evaluation. Representative studies include LVLM-eHub <cit.>, SEED-Bench <cit.>, MMBench <cit.>, MME, and MMT-Bench <cit.>. Task-specific benchmarks focus on particular tasks and types of visual data, providing detailed task definitions. Examples include SciFIBench <cit.> for scientific images, MMC-Benchmark <cit.> for charts, MVBench <cit.> (using video frames as input) for videos and SEED-Bench-2-Plus <cit.> for web pages, charts and maps. Domain-specific benchmarks are designed for visual tasks within specific professional domain. Due to the specialized knowledge and unique tasks of these domains, general benchmark cannot fully meet the needs of evaluating LVLMs in these areas. As a result, researchers have begun proposing specialized benchmarks for domains such as healthcare (OmniMedVQA <cit.>), mathematics <cit.>, autonomous driving (Talk2BEV-Bench <cit.>), and geography <cit.>. However, as mentioned previously, the insurance domain and even the finance domain currently lack corresponding domain-specific benchmarks for LVLMs <cit.>. Our work introduces INS-MMBench to address this gap, aiming for a significant advancements in the application of LVLMs in the insurance domain. § INS-MMBENCH §.§ Tasks Given the differences in workflows among various types of insurance in practical operations, we select four core types for building this benchmark: auto insurance, commercial/household property insurance, health insurance, and agricultural insurance. These categories cover both life and property insurance, which are the most prevalent in the insurance market and highly representative <cit.>. To ensure that our evaluation tasks closely align with real-world applications in the insurance domain and fully demonstrate the capabilities of LVLMs in this context, we have developed a bottom-up hierarchical task definition methodology. 
Using this methodology, we construct a systematic visual task framework specifically tailored for the insurance sector. As an example, we discuss the detailed task construction process for auto insurance (see Figure <ref>). Initially, based on the insurance value chain theory <cit.>, we select three key stages rich in multimodal data and tasks: vehicle underwriting, vehicle risk monitoring, and vehicle claim processing. At each stage, we identify the key visual elements that insurance operators need to extract. For instance, during the vehicle underwriting stage, operators must confirm elements such as license plate information, vehicle model, dashboard readings, and vehicle condition, which are crucial for information collection, condition verification, and underwriting decision-making. Further, based on these key visual elements, we define the fundamental tasks. For example, the need to extract license plate information led to the definition of the License Plate Recognition task, while the need to monitor risky driving behavior resulted in the In-car Driving Behavior Detection task. By following this process, we define a total of nine fundamental tasks for auto insurance. Finally, we cluster these fundamental tasks based on their characteristics, forming four meta-tasks. Through this approach, we have constructed a comprehensive set of 12 meta-tasks and 22 fundamental tasks across the four types of insurance. §.§ Dataset collection Once the task definition is complete, we start collecting data and constructing the multi-choice visual questions. Our data collection and benchmark construction process (see Figure <ref>) is as follows: Data sources. We search for datasets using keywords related to the fundamental tasks in several popular data sources, including Google, Kaggle, GitHub, and Roboflow. For tasks where multiple public datasets are available, we download and compare these datasets to perform an initial screening. We select datasets with high adaptability and usability for insurance scenarios, as detailed in Table <ref>. Data processing. To facilitate LVLM evaluation, we set the number of images and questions for each fundamental task to 100. These 100 images are randomly sampled from our selected datasets, and considering the balance of test sample types, we perform balanced sampling on datasets with categorical labels. For example, in the vehicle damage severity detection task, we ensure that the labels - undamaged, minor damage, moderate damage and severe damage - are balanced in number to maintain the validity of the evaluation. Meanwhile, we process the annotation content, converting it to text-based labels, in preparation for subsequent question and answer generation. Question and answer generation. For each fundamental task, we set questions that are directly and unambiguously related to the task. For example, the question for the license plate recognition task is “What is the number plate of the vehicle in the picture?” The number of options for each question ranges from 2 to 4. For tasks with yes/no labels, we keep the yes/no labels as options. For other tasks, we generate distractor options using the GPT-4o model, and finally combine these options into a multi-choice visual question format. In each fundamental task we ensure a balanced distribution of correct option positions. § EXPERIMENT §.§ Experimental setting Selected LVLMs. We select a representative set of 10 LVLMs for our evaluation.
This set includes seven closed-source LVLMs: GPT-4o, GPT-4V, GeminiProVision, QwenVLPlus, QwenVLMax, Claude3V_Sonnet, and Claude3V_Haiku as well as three open-source LVLMs including LLaVA, BLIP-2, and Qwen-VL-Chat. Evaluation methods. We employ VLMEvalKit, an open-source evaluation toolkit for LVLMs developed by OpenCompass <cit.>, to conduct our evaluations. This toolkit supports integrated testing of both closed-source and open-source LVLMs and is adaptable to custom benchmark datasets. VLMEvalKit provides two methods for evaluating responses to multi-choice visual questions: exact matching (finding "A", "B", "C", "D" in the output strings) and LLM-based answer extraction which analyzes the answer outputs using a Large Language Model (we use GPT-3.5 here). These methods help mitigate the issue of uncontrolled free-form content generation by LVLMs. The accuracy metric is used as the evaluation criterion. §.§ Main results Tables <ref> and <ref> present the evaluation results of LVLMS across various insurance types and meta-tasks, respectively, using random guessing as the baseline. The results are organized into two sections: the first seven rows feature proprietary LVLMs, while the subsequent rows cover open-source LVLMs. Overall, GPT-4o outperforms all other models, emerging as the top-performing LVLM on the INS-MMBench with a score of 72.91. This is the only model with an overall score exceeding 70, underscoring the challenging nature of the INS-MMBench. Most LVLMs scored below 60, and some even underperformed relative to a random guess baseline of 25 in certain insurance categories, indicating significant potential for improvement in applying LVLMs within the insurance domain. Based on the data in Tables <ref> and <ref>, the following observations can be made. LVLMs show significant variance across different types of insurance. Experimental results reveal that both open-source and proprietary LVLMs perform better in tasks related to auto insurance and health insurance compared to those involving property and agricultural insurance. For instance, GPT-4o, which exhibits the best performance, scores 85.33 and 82.00 in auto and health insurance tasks respectively; however, its scores drop to 65.00 and 45.50 in property and agricultural insurance tasks, indicating a gap from practical application. This discrepancy may stem from the availability of datasets. Our data collection process highlights that publicly available datasets are more plentiful and comprehensive in the automotive and medical fields. Based on these observations, we suggest that the future deployment of LVLMs in the insurance sector should be a progressive process, initially focusing on areas like auto and health insurance where they are most effective. LVLMs show significant variance across different meta-tasks. Experimental results reveal that LVLMs demonstrate considerable performance variability across various meta-tasks, likely influenced by the specific nature of each task and the characteristics of the images involved. Most models excel in tasks like vehicle information extraction (VAE), vehicle appearance recognition (VAR), and health risk monitoring (HRA), which primarily depend on visual element perception and object detection. In contrast, performance dips in more complex tasks such as household/commercial property damage detection (HPDD) and crop growth stage identification (CGSI), which demand additional domain-specific knowledge or reasoning abilities. 
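To make the scoring protocol concrete, the following minimal sketch illustrates exact-match answer extraction and per-meta-task accuracy aggregation in the spirit of the evaluation described above; it is only an illustration, not VLMEvalKit's actual implementation, and the record fields ('prediction', 'answer', 'meta_task') are hypothetical.

```python
import re
from collections import defaultdict

def extract_choice(prediction: str, valid=("A", "B", "C", "D")):
    """Illustrative exact matching: return the first standalone option letter
    found in the model output, or None if no letter can be recovered."""
    match = re.search(r"\b([A-D])\b", prediction.strip().upper())
    if match and match.group(1) in valid:
        return match.group(1)
    return None  # a real pipeline would fall back to LLM-based extraction here

def accuracy_by_meta_task(records):
    """records: iterable of dicts with hypothetical keys
    'prediction', 'answer', and 'meta_task'."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["meta_task"]] += 1
        if extract_choice(r["prediction"]) == r["answer"]:
            hits[r["meta_task"]] += 1
    return {task: 100.0 * hits[task] / totals[task] for task in totals}

# Toy usage with made-up records
demo = [
    {"prediction": "The correct option is B.", "answer": "B", "meta_task": "VAR"},
    {"prediction": "A", "answer": "C", "meta_task": "CGSI"},
]
print(accuracy_by_meta_task(demo))  # e.g. {'VAR': 100.0, 'CGSI': 0.0}
```

The same aggregation can be applied per insurance type to obtain the type-level scores.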
Furthermore, LVLMs generally struggle with tasks involving satellite or drone aerial imagery, including household/commercial property risk assessment (HPRA), crop type identification (CTI), and farmland damage detection (FDD), where unique imaging perspectives and data complexities pose additional challenges. Narrowing gap between open-source and closed-source LVLMs. A comparison of the overall performance of open-source and closed-source LVLMs on INS-MMBench indicates that, while there is still a notable gap between the two, some open-source LVLMs are nearing the performance levels of their closed-source counterparts. This trend suggests that as open-source models grow stronger and domain-specific data becomes more abundant, focusing on training high-performance, domain-specific LVLMs could become a key development strategy in the application of LVLMs within the insurance domain. §.§ Error analysis To provide further insights into the limitations of LVLMs in the insurance domain, we conduct an in-depth analysis of the errors made by selected models on the INS-MMBench. We examine the error patterns of three models: GPT-4o, GeminiProVision, and Qwen-VL-Max, categorizing the errors into four types: perception errors (where LVLMs do not recognize or detect objects or content within the image), lack of insurance knowledge or reasoning ability (where LVLMs can recognize and perceive visual content but lack the necessary insurance knowledge or reasoning skills to answer the question), refusal to answer (where LVLMs decline to respond to questions they deem sensitive or illegal), and failure to follow instructions (where LVLMs do not adhere to the provided instructions, resulting in irrelevant responses). The error analysis results for these models are illustrated in Figure <ref>. The most common error type is the lack of insurance knowledge or reasoning ability, which accounts for 59.5%, 64.0%, and 57.2% of the errors in GPT-4o, GeminiProVision, and Qwen-VL-Max, respectively. Due to insufficient specialized knowledge and analytical skills in the insurance field, LVLMs struggle to accurately assess and judge factors such as risk conditions and the extent of damage. Therefore, optimizing LVLMs for the insurance domain should primarily focus on enriching domain-specific knowledge and enhancing professional capabilities. Perception errors are the second most significant error type. Limited by the capabilities of the visual encoder, LVLMs often fail to fully recognize and capture detailed content in images, leading to misinterpretations. For instance, GPT-4o misidentifies a damaged farmland image as `an abstract or close-up view of a textured surface with blue and purple hues'. This type of error is common across LVLMs. Additionally, due to built-in safety monitoring functions, GPT-4o and GeminiProVision sometimes incorrectly flag images as illegal and refuse to respond. Qwen-VL-Max, on the other hand, struggles with following instructions, occasionally outputting content in Chinese, which compromises result accuracy. § DISCUSSIONS AND CONCLUSIONS In this paper, we introduce INS-MMBench, a multimodal benchmark tailored for the insurance domain, designed to evaluate Large Vision-Language Models (LVLMs). To the best of our knowledge, this is the first initiative to systematically review multimodal tasks within this sector and establish a specialized benchmark specifically for it. 
INS-MMBench comprises 2.2K multiple-choice visual questions, covering four types of insurance, 12 meta-tasks, and 22 fundamental tasks, effectively supporting the assessment of LVLMs' applications in insurance. Additionally, we evaluate several mainstream LVLMs and provide a detailed analysis of the results, offering an initial exploration into the feasibility of employing LVLMs in the insurance sector. We hope our benchmark and findings will guide future research in this field and enhance the integration of insurance academia with AI advancements, promoting interdisciplinary exchanges within the sector. However, this study has limitations. A significant constraint is the lack of open-source image datasets specific to the insurance domain, largely due to privacy concerns. The data used in this study, sourced from publicly available datasets, has been manually curated but may still harbor biases that do not fully align with real-world insurance scenarios. This issue underscores the need for collaborative efforts between insurance companies and the academic community to develop dedicated open-source image datasets for the insurance domain. Another limitation is that INS-MMBench disaggregates the tasks of LVLMs into various fundamental tasks, assessing LVLM performance from a micro perspective based on task-specific accuracy. In reality, visual tasks in insurance often entail complex integration of multiple capabilities and comprehensive analysis. Addressing this, our next objective is to construct a more complex, integrated application benchmark to enable a deeper evaluation of LVLM applications in the insurance domain. plain § EXAMPLE CASES To offer a detailed view of the task settings in INS-MMBench, we have selected sample cases for each core task and present responses from GPT-4o, GeminiProVision, and Qwen-VL-Max in this section.
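As a schematic illustration of the sample-case format referred to above, the snippet below encodes one hypothetical multiple-choice visual question; the file path, options, and model response are invented for illustration and are not actual INS-MMBench entries.

```python
# A hypothetical INS-MMBench-style sample case (all values invented for illustration)
sample_case = {
    "insurance_type": "auto",
    "meta_task": "vehicle information extraction",
    "fundamental_task": "license plate recognition",
    "image": "images/auto/plate_0001.jpg",          # hypothetical path
    "question": "What is the number plate of the vehicle in the picture?",
    "options": {"A": "KA01AB1234", "B": "MH12CD5678", "C": "DL8CAF0001"},
    "answer": "B",
}

# A model response would then be judged by the answer-extraction step sketched earlier.
model_response = "Based on the image, the plate reads MH12CD5678, so the answer is B."
```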
http://arxiv.org/abs/2406.08219v1
20240612135038
Impact of environmental interaction on bias induced circular current in a ring nanojunction
[ "Moumita Mondal", "Santanu K. Maiti" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.dis-nn" ]
moumitamondal_r@isical.ac.in Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 Barrackpore Trunk Road, Kolkata-700 108, India santanu.maiti@isical.ac.in Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 Barrackpore Trunk Road, Kolkata-700 108, India § ABSTRACT The specific role of environmental interaction on bias driven circular current in a ring nanojunction is explored, for the first time to the best of our concern, within a tight-binding framework based on wave-guide theory. The environmental interaction is implemented through disorder in backbone sites where these sites are directly coupled to parent lattice sites of the ring via single bonds. In absence of backbone disorder circular current becomes zero for a lengthwise symmetric nanojunction, while it increases with disorder which is quite unusual, and after reaching a maximum it eventually drops to zero in the limit of high disorder. The effects of ring-electrode interface configuration, ring-backbone coupling, different types of backbone disorder and system temperature are critically investigated to make the present analysis comprehensive. All the studied results are valid for a broad range of physical parameters, giving us confidence that the outcomes of this theoretical work can be verified in a laboratory. Impact of environmental interaction on bias induced circular current in a ring nanojunction Santanu K. Maiti June 17, 2024 =========================================================================================== § INTRODUCTION Nano rings have long been the focus of intense research, revealing numerous fascinating phenomena compared to linear counterparts. When a loop conductor interfaces with external electronic baths, it generates a net circular current under specific conditions <cit.>. While researchers are familiar with transport or junction currents, bias-driven circular currents represent a relatively novel phenomenon that has yet to be thoroughly investigated. Some groups <cit.> have made attempts in this direction previously, but it was the pioneering work of Nitzan and his team <cit.> around a decade ago that brought it to prominence. Circular currents involve current distributions within individual bonds, leading to the emergence of crucial features such as the conducting properties of multi-arm loop systems, the specific role of disorder, and the nature of total current flow. The majority of literature articles have primarily explored the phenomenon of circular currents in nanojunctions under voltage bias, typically assuming disorder-free conductors <cit.>. Little attention has been paid to discussing the influence of disorder <cit.>. However, as far as we know, no one has yet tackled the impact of environmental interactions, which are often unavoidable in experimental settings. Our current study aims to fill this gap. To address this, we examine a nano ring positioned between source and drain electrodes, with each ring site directly linked to a backbone site (refer to Fig. <ref>). The environmental interaction is introduced phenomenologically by incorporating impurities into these backbone sites <cit.>. This protocol is a standard method for assessing how the environment interacts with the system, and it has been utilized in numerous studies concerning transport phenomena. Various types of disorder can be considered. It can either be fully uncorrelated (random) <cit.>, which is commonly employed, or correlated (non-random) <cit.>. 
Random disorder is relatively straightforward but requires extensive configuration averaging across numerous distinct configurations. Conversely, with correlated disorder, configuration averaging is unnecessary, and many nontrivial signatures emerge. One captivating example of correlated disorder is the Aubry-André-Harper (AAH) model <cit.>, renowned for its diverse and intriguing characteristics, which has been extensively explored in various contexts. In this study, we incorporate disorder in backbone sites following the AAH form and explicitly discuss its effect on bias-driven circular current. Using a tight-binding (TB) framework to illustrate the nanojunction, we obtain the circular current based on the well-known wave-guide (WG) theory <cit.>. Interestingly, we observe that the circular current increases with disorder strength, which is quite unusual. After reaching a maximum, the current decreases and practically drops to zero in the limit of high disorder strength. Thus, backbone disorder, representing environmental interaction, effectively enhances the circular current within a specific region, which is quite interesting. To make this study comprehensive, we also critically investigate the effects of (i) ring-electrode junction configuration, which plays a dominant role in transport behavior, (ii) ring-backbone coupling, (iii) other types of disorder, and (iv) system temperature. Our detailed numerical calculations reveal that environmental interaction has a strong effect on circular current. We organize our work as follows. After the brief introduction above, in Sec. II we present the ring nanojunction, its tight-binding Hamiltonian, and the necessary theoretical steps for obtaining the results. In Sec. III, we critically explain all the findings with appropriate physical arguments. Finally, we conclude in Sec. IV. § JUNCTION SETUP, HAMILTONIAN AND THEORETICAL FRAMEWORK §.§ Nanojunction and the TB Hamiltonian Let us start with the junction setup shown in Fig. <ref>. A nano ring, possessing N_R lattice sites, is clamped between two electron reservoirs, commonly referred to as source (S) and drain (D). Each lattice site of the ring (represented by a filled purple circle) is connected to a backbone site via a single bond. To denote backbone sites are disordered, we use different colors for those sites. The total number of backbone sites is mentioned by the parameter N_B, and in our setup N_B is always identical to N_R. Under a suitable condition, a net circular current, specified by I_c is generated in the ring. Its characteristic features will be studied. The ring-electrode junction system is simulated within a tight-binding framework. Since the complete system contains different parts, it is convenient to write the full Hamiltonian, H, as a sum of different sub-Hamiltonians related to different parts of the junction. The general form of TB Hamiltonian of any part looks like H_β=∑_n ϵ_β,nc_β,n^†c_β,n + ∑_n t_β(c_β,n^†c_β,n+1+h.c.) where β=S, D and ring (R). ϵ_β,n corresponds to the on-site energy of an electron at site n of the part β and t_β is the nearest-neighbor hopping (NNH) strength. Now we explicitly mention these TB parameters associated to different parts of the junction. For the side-attached electrodes we set ϵ_β,n=ϵ_0 and t_β=t_0. The electrodes are assumed to be one-dimensional, reflection-less and perfect. The source and drain are coupled to the ring via the coupling parameters τ_S and τ_D, respectively. 
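As a rough numerical illustration of how the ring-plus-backbone part of such a tight-binding Hamiltonian can be assembled (the electrodes enter separately through the wave-guide equations discussed below), one may proceed as in the following sketch; the function name, the NumPy representation, and the site ordering are our own illustrative choices rather than part of the original formulation.

```python
import numpy as np

def ring_backbone_hamiltonian(N_R=10, t=1.0, eta=1.0, eps_ring=0.0, eps_backbone=None):
    """Illustrative tight-binding matrix for a ring of N_R sites, each coupled
    to one backbone site via hopping eta. Site ordering: ring sites 0..N_R-1,
    backbone sites N_R..2*N_R-1."""
    if eps_backbone is None:
        eps_backbone = np.zeros(N_R)          # clean backbones unless a disorder profile is supplied
    H = np.zeros((2 * N_R, 2 * N_R))
    for n in range(N_R):
        H[n, n] = eps_ring                    # ring on-site energies (taken uniform)
        m = (n + 1) % N_R                     # closed ring: nearest-neighbour hopping t
        H[n, m] = H[m, n] = t
        b = N_R + n                           # backbone site attached to ring site n
        H[b, b] = eps_backbone[n]
        H[n, b] = H[b, n] = eta               # ring-backbone coupling
    return H

# Example: AAH-type backbone profile eps_B,n = W cos(2*pi*b*n) with W=4 and b the
# golden mean (this form is defined in the next subsection)
eps_B = 4.0 * np.cos(2 * np.pi * ((1 + np.sqrt(5)) / 2) * np.arange(10))
H = ring_backbone_hamiltonian(eps_backbone=eps_B)
```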
In the junction setup, it is considered that the source is always connected to site number 1 of the ring, while the drain position (site number q) can vary (see Fig. <ref>). The physical system sandwiched between source and drain contains two different kinds of sites: sites associated with the ring (these are referred to as parent lattice sites) and the backbone sites. Any ring site energy is labeled as ϵ_R,n and that of a backbone site as ϵ_B,n. In our setup the ring sites are clean (viz., all the ring sites are identical), and disorder is introduced only in the backbone sites. Unless specified, we choose ϵ_B,n in the form of the AAH model <cit.> ϵ_B,n=W cos(2 π b n) where W is the disorder strength and b is an irrational number. As pointed out, another type of disorder can also be taken into account. Each backbone site is connected to a parent lattice site via a single bond with the hopping strength η. For the case of rational b, the backbone sites become perfect and the periodicity depends on the choice of b and N_B. The NNH strength in the ring is denoted by the parameter t. §.§ Theoretical Prescription To understand the description of circular current, let us start with the arm currents associated with the different arms of the ring geometry. For our chosen setup, if the upper and lower arms, having the lengths L_U and L_L, carry the currents I_U and I_L respectively, then the circular current, I_c, is defined as <cit.> I_c=(I_U L_U + I_L L_L)/(L_U + L_L) where L (circumference of the ring) = L_U + L_L. When the lengths and status of the two arms are exactly identical it is simple to follow that I_U =-I_L and then I_c becomes zero. So, in order to have a nonzero I_c we need to break the symmetry between the two arms. The symmetry breaking can be done in three ways: (i) by changing the arm lengths, (ii) by setting the status of the two arms different when their lengths are identical, or (iii) by both. In our case, we set all those conditions one by one. In the TB framework the current in any arm or any single/multiple bonds can be calculated by using the WG theory (a standard protocol). For that we first need to calculate the bond current densities in individual bonds. The current density in any bond of the ring is defined as <cit.> J_n,n+1(E)=(2e/ħ) Im[t C_R,n^* C_R,n+1] where t is the NNH strength in the ring (mentioned earlier), e and ħ (=h/2π) carry their usual meanings; C_R,n's are the wave amplitudes, and they are evaluated by solving (N_R+N_B+2) coupled equations (for more details, we suggest to see Refs. <cit.>). The general form of the coupled equations looks like (E-ϵ_β,n) C_β,n = ∑_m t_β,m C_β,m where m is the site index referring to the nearest-neighbor sites of n. The (N_R+N_B) equations come from the ring and backbone sites, and the other two equations arise from the sites where the ring is coupled to S and D. Having obtained all these coefficients C_β,n associated with the lattice sites, we can find all the bond current densities, and summing over the required bonds we can calculate the current densities associated with the upper and lower arms. We call them J_U and J_L respectively. Integrating the current densities over a suitable energy window we find the currents associated with the different arms. The arm current is expressed as <cit.> I_U/L=∫ J_U/L(E) (f_S-f_D) dE where f_S(D) is the Fermi-Dirac distribution function for S(D), and it is f_S(D) = 1/(1+e^((E-μ_S(D))/K_B T)). Here T is the equilibrium temperature, K_B is the Boltzmann constant, and μ_S and μ_D are the electro-chemical potentials of S and D respectively.
In terms of the equilibrium Fermi energy E_F and bias voltage V, μ_S and μ_D are written as: μ_S = E_F + eV/2 and μ_D = E_F - eV/2. Once I_U and I_L are found out, the circular current I_c is obtained from the above mentioned definition. § RESULTS AND DISCUSSION The central focus of this work is to investigate the specific role of backbone disorder on bias driven circular current in a nano ring. Unless mentioned, the disorder is introduced in backbone sites in the cosine form following the AAH model. The incommensurate factor `b' in the expression of ϵ_B,n is chosen as (1+√(5))/2 (golden mean) which is quite common for the AAH model and has been used extensively in the literature <cit.>, though any other irrational number can be taken into consideration. At the end of this section, the effects of other two types of correlated disorders on I_c are also discussed to check the sensitivity of I_c on the nature of the disorder. The results are mostly discussed for the lengthwise symmetric ring nanojunction, setting the temperature at zero Kelvin. The effects of ring-electrode junction configuration, ring-backbone coupling and temperature are studied in appropriate parts for the sake of completion. For computing the results, the values of some of the physical parameters are kept constant throughout the work, and here it is relevant to mention them. If not indicated otherwise, we set ϵ_0=0, t_0=3, τ_S=τ_D=1, ϵ_R,n=0, t=1, η=1, E_F=0, ϕ_ν=0, T=0 and N_R=10. The values of other TB parameters that are not fixed, are specified as needed during the discussion. All energies are measured in unit of eV. Let us now begin our discussion and analyze the results step by step. To understand the impact of backbone disorder it is indeed meaningful to start with the setup where all the backbone sites are free from any disorder, which is obtained by setting W=0, and to check the dependence of current densities associated with the upper and lower arms of the ring, as well as the circular current density. The results are presented in Fig. <ref> where the characteristic behaviors of J_U and J_L are shown in (a), and in (b), the variation of the circular current density J_c is depicted. J_c is obtained by summing J_U and J_L. Several important features are evident. Both for J_U and J_L we have finite peaks around some energies and for other energies they are vanishingly small. For a small window across E=0, the current densities vanish completely. These peaks in the current density profiles are associated with the energy eigenvalues of the ring-backbone system clamped between the electrodes. As the upper and lower arms are identical lengthwise as well as status wise, J_U and J_L become identical in magnitude and opposite in sign. In our formulation, positive sign is assigned when current flows in the clockwise direction. The nature of the density profile depends on many factors associated with the junction setup. For a symmetric junction configuration, the peaks are uniform around the discrete energy eigenvalues, and their widths are primarily controlled by the ring-electrode coupling strengths, τ_S and τ_D. For the weak-coupling limit (τ_S(D)<<t), the widths are narrow, while they get broadened in the strong-coupling limit which is specified by the condition τ_S(D)∼ t. Since this coupling effect is quite well-known in the context of electron transmission, here it is not explicitly discussed (interested readers can follow the Refs. <cit.>). Based on the nature of J_U and J_L (Fig. <ref>(a)), the behavior of J_c (Fig. 
<ref>(b)) is easily understood. Across the entire energy window the net circular current density becomes zero, and hence, a bias driven circular current is not expected for this case, i.e., when the upper and lower arms are symmetric to each other. The circular current can be generated by breaking the symmetry between the two arms, and here it is done by introducing disorder in the backbone sites. Without presenting density profiles like those shown for the disorder-free case in Fig. <ref>, in Fig. <ref> we directly present the dependence of the circular current, computed at a particular bias voltage, as a function of the backbone disorder strength W, since our main concern is to explore the effect of backbone disorder. The results are shown for three distinct voltages. Several important features are evident. The overall impression is that the circular current becomes vanishingly small for lower values of W, then it starts increasing with W, and after reaching a maximum it decreases and drops close to zero for too large W. This pattern is reflected in all three curves computed at three distinct biases. This is one of our central results, and here we explain the nature of the curves for different regimes of W. We choose three points in the cyan curve (other curves can also be taken into account) associated with three different values of W - low, moderate and high - that are represented by A, B and C respectively. For these three cases of W, the variations of the total circular current density J_c are shown in Fig. <ref> (exact values of W are mentioned in the sub-figures). The particular energy window from -0.2 to +0.2 is used since V=0.4V and the temperature is fixed at zero. When W=1, corresponding to the point A, the current density is extremely small over the entire energy window (Fig. <ref>(a)), and naturally, I_c becomes vanishingly small. On the other hand, for moderate W (W=4), J_c is appreciable and, most importantly, it is highly asymmetric around E=0 (or we can say across the J_c=0 line) (Fig. <ref>(b)). The net area of the J_c-E curve is thus reasonably large, yielding a higher circular current. Finally, when we reach the very large value of W (denoting the point C), it is seen from Fig. <ref>(c) that the positive and negative areas under the J_c-E curve are quite comparable, resulting in a smaller circular current, like the weak W limit. Thus, from the behavior of the current density profile, the nature of the circular current with W can be clearly understood. Now, it is also important to explain these characteristic features of I_c in different disorder regimes with proper `physical arguments' which are as follows. As already pointed out, the primary requirement to have a finite I_c is to break the symmetry between the upper and lower arms. For W=0, all the backbone sites are uniform and the arms become symmetric. Once the backbone disorder is introduced, the symmetry is lost, but for weak W the symmetry breaking effect is too small. At the same time, within our selected energy window, there are almost no peaks or dips in J_c, resulting in a vanishingly small current. With increasing W, the arms become more asymmetric relative to each other and therefore we get a higher circular current with W. The situation becomes counterintuitive beyond a critical disorder, where the current gets reduced with the enhancement of W. This can be interpreted from the concept of an ordered-disordered scenario. The system which is clamped between the electrodes contains two regions.
In one region (the ring) all the sites are ordered, while in the other region the sites are disordered, and these two regions are coupled to each other. For weak W, the ordered region is affected by the disordered part, and this effect increases with W, which is easy to understand. But the fact is that the effective coupling between the two regions gets weakened with increasing W, and for large enough W the ring part is almost decoupled from the backbone sites. In that case, the symmetry between the upper and lower arms of the ring is restored, resulting in a vanishing I_c. Since the nature of the circular current is greatly influenced by the interaction between the ordered region, i.e., the ring system, and the disordered backbone region, here it is important to check the effect of η on I_c. In Fig. <ref> we plot I_c as a function of W at three typical values of η, considering the lengthwise symmetric ring nanojunction with N_R=10. All the three curves appear identical in nature. The key observation is that the critical W, where I_c reaches its extremum, shifts towards higher W with increasing η. The underlying mechanism depends on the strength of the connection between the ordered and disordered parts. A stronger connection requires a higher W to decouple these two regions, from which point the symmetry between the two arms of the ring begins to reemerge. The atypical behavior of I_c with W discussed above holds equally well for other ring-electrode junction configurations, which we claim from the curves shown in Fig. <ref>. In this figure, three junction configurations are taken into account, two lengthwise asymmetric and one lengthwise symmetric, and the results are worked out for a typical bias voltage V=0.4V. Along with these, some other configurations are also checked, and in each case the dependence of I_c on W remains exactly the same. It is well-known that the ring-electrode configuration plays a significant role in transport behavior, due to the modification of the quantum interference of the wave functions associated with different branches of the ring, and therefore, the maxima points (in the -ve side of each curve) get shifted, but the overall response of I_c with W is unchanged. To inspect the robustness, i.e., whether the above discussed nature is specific to AAH type disorder or is general with respect to other types of backbone disorder, here we check the behavior of the I_c-W curve in the presence of other types of backbone disorder. We choose two other types of disorder in the backbone sites: one is the Fibonacci type, and the other is the Bronze Mean (BM) one <cit.>. These are quite common examples of correlated disorders and they can be constructed using two different kinds of atomic sites, say A and B, by arranging them according to specific rules <cit.>, unlike the AAH type where all the sites are different. The inflation rule for the Fibonacci sequence is: A→ AB and B→ A. So the first few Fibonacci generations are A, AB, ABA, ABAAB, etc. On the other hand, the inflation rule for the BM sequence is: A→ AAAB and B→ A, and here the first few generations are: A, AAAB, AAABAAABAAABA, etc. For these two different atomic sites (A and B) we refer to the site energies as ϵ_A and ϵ_B respectively, and their strengths are specified by the parameter W (like the AAH case). Inserting the Fibonacci and BM disorders in the backbone sites, the variations of the circular current with respect to W are shown in Fig. <ref>.
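For concreteness, a minimal sketch of how such substitution (inflation) sequences and the corresponding backbone site energies could be generated is given below; the specific assignment ϵ_A=+W and ϵ_B=-W is an illustrative assumption, not a prescription from the text.

```python
def substitution_sequence(rules, generations, seed="A"):
    """Apply an inflation (substitution) rule repeatedly, e.g. Fibonacci or Bronze Mean."""
    s = seed
    for _ in range(generations):
        s = "".join(rules[c] for c in s)
    return s

fibonacci = substitution_sequence({"A": "AB", "B": "A"}, 3)    # -> 'ABAAB' (fourth generation above)
bronze    = substitution_sequence({"A": "AAAB", "B": "A"}, 2)  # -> 'AAABAAABAAABA' (13 sites)

# Map letters to backbone on-site energies; W sets the strength, as in the AAH case.
W = 1.0
eps_backbone = [+W if c == "A" else -W for c in bronze]        # illustrative choice eps_A = W, eps_B = -W
```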
Here we consider a 13-site ring, to have a comparative ring size with our previously studied AAH case (10-site rings cannot be obtained with these sequences). As the ring size is odd, identical arm lengths are not possible, and we couple the electrodes in such a way that the length difference between the two arms is just a single bond. For this particular figure (Fig. <ref>), the chosen TB parameters are also quite different compared to the other figures. It is just because to capture all the information within the varied W window. We make t_0 quite large, and thus, we proportionately change other parameters such that energy band widths of the electrodes are larger than the system placed between them. In each type of backbone disorder, the variation of I_c with W is shown for three distinct bias voltages, and both for the two types of disorders the nature of I_c with W remains exactly similar with what is already obtained for the AAH case. The only difference is the sign reversal. The circular current can have both positive and negative signs (unlike transport current which always exhibits one sign for a particular polarity) as it depends on the contributing peaks and dips in the current density profile. The sign of I_c depends on which dominates among the peaks and dips of J_c. Figure <ref> clearly suggests that the overall signature of I_c with W remains unchanged with the type of disorder. Here we would like to note that, in addition to these three distinct types of disorder, the behavior of I_c remains the same with other different backbone configurations which we firmly confirm through our detailed numerical calculations, and to avoid any repetition, we do not add those results. The results analyzed so far are computed for zero temperature. For a more realistic scenario it is also crucial to examine the role of temperature. The temperature dependence enters into the current expression through the Fermi-Dirac distribution functions, f_S and f_D, associated with the source and drain electrodes, respectively. The effect of temperature is presented in Fig. <ref>, where I_c-W curves are shown for three distinct temperatures. The result of zero temperature is also superimposed for comparison. Apart from a slight reduction of current with temperature, we find that the overall nature of I_c-W curve remains unaltered. The reduction of current due to temperature can be understood from the following arguments. At absolute zero temperature, the current is obtained by integrating the current density profile within the energy window from E_F-eV/2 to E_F+eV/2. Within this energy zone, depending on the dominating peaks and dips we get the net current. Here the chance of mutual cancellations is relatively less. On the other hand, when the temperature is finite, we need to consider the full available energy window where the chance of mutual cancellations may slightly increase though it depends on many other factors, especially W, and depending on the weight factor (f_S-f_D) we get the resultant current. A more comprehensive dependence of I_c on temperature is given in Fig. <ref>, where the variation of I_c at a particular voltage is shown by continuously changing the temperature in a broad range. A smooth reduction of I_c with temperature is obtained, following the above arguments. The key aspect is that, I_c remains finite even at very high temperatures. § CLOSING REMARKS In this work, we examine the critical role of environmental interactions on bias-driven circular currents in a nano ring. 
The environmental effects are phenomenologically incorporated by connecting the ring sites to disordered backbone sites, each directly coupled to a parent lattice site of the ring via a single bond. The ring is sandwiched between two contact electrodes, source and drain. When a finite bias is applied between these electrodes, a net circular current is induced in the ring. The characteristics of this current are studied in detail under various input conditions. Using a tight-binding framework to describe the quantum system, all results are derived based on wave-guide theory. The effect of backbone disorder is particularly noteworthy. As the disorder strength increases, the current magnitude initially rises, reaches a maximum, and then decreases, eventually vanishing at very high disorder strengths. These behaviors are thoroughly analyzed with appropriate mathematical results and physical explanations. Additionally, it is established that the effect of disorder remains largely unchanged regardless of the type of disorder. Temperature dependence is also discussed considering a more realistic scenario, showing that a significant current can still be obtained over a wide temperature range. Finally, we emphasize that if environmental interactions can be controlled through suitable laboratory methods, transport behavior can be selectively regulated. This potential for control could be highly interesting and important.

§ REFERENCES

[cr1] S. Nakanishi and M. Tsukada, Jpn. J. Appl. Phys. 37, L1400 (1998).
[cr2] S. Nakanishi and M. Tsukada, Phys. Rev. Lett. 87, 126801 (2001).
[cr3] M. Ernzerhof, H. Bahmann, F. Goyer, M. Zhuang, and P. Rocheleau, J. Chem. Theory Comput. 2, 1291 (2006).
[cr4] N. Tsuji, S. Takajo, and H. Aoki, Phys. Rev. B 75, 153406 (2007).
[cr5] K. Tagami, M. Tsukada, W. Yasuo, T. Iwasaki, and H. Nishide, J. Chem. Phys. 119, 7491 (2003).
[nt1] D. Rai, O. Hod, and A. Nitzan, J. Phys. Chem. C 114, 20583 (2010), and the references therein.
[nt2] D. Rai, O. Hod, and A. Nitzan, Phys. Rev. B 85, 155440 (2012).
[skm1] M. Patra and S. K. Maiti, Sci. Rep. 7, 43343 (2017).
[skm2] S. Ganguly and S. K. Maiti, J. Phys.: Condens. Matter 33, 045301 (2020).
[rai1] U. Dhakal and D. Rai, J. Phys.: Condens. Matter 31, 125302 (2019).
[wg1] Y. J. Xiong and X. T. Liang, Phys. Lett. A 330, 307 (2004).
[bb1] D. K. Suhendro, E. Yudiarsah, and R. Saleh, Physica B 405, 4806 (2010).
[bb2] D. Klotsa, R. A. Römer, and M. S. Turner, Biophys. J. 89, 2187 (2005).
[bb3] J. X. Zhong, in: B. Romamowicz and M. Laudon (Eds.), Proceedings of the 2003 Nanotechnology Conference, Computational Publications, vol. 2, p. 105 (2003).
[bb4] A.-M. Guo, Z. Yang, H.-J. Zhu, and S.-J. Xiong, J. Phys.: Condens. Matter 22, 065102 (2010).
[bb5] A.-M. Guo, S.-J. Xiong, Z. Yang, and H.-J. Zhu, Phys. Rev. E 78, 061922 (2008).
[bb6] E. Maciá and S. Roche, Nanotechnology 17, 3002 (2006).
[bb7] S. Kundu and S. N. Karmakar, Phys. Lett. A 379, 1377 (2015).
[r1] P. W. Anderson, Phys. Rev. 109, 1492 (1958).
[r2] P. A. Lee and T. V. Ramakrishnan, Rev. Mod. Phys. 57, 287 (1985), and the references therein.
[r3] N. Mott, J. Phys. C 20, 3075 (1987).
[r4] E. Abrahams, P. W. Anderson, D. C. Licciardello, and T. V. Ramakrishnan, Phys. Rev. Lett. 42, 673 (1979).
[r0] J. C. Flores, J. Phys.: Condens. Matter 1, 8471 (1989).
[r5] S. Aubry and G. Andre, Ann. Israel Phys. Soc. 3, 133 (1980).
[r6] P. G. Harper, Proc. Phys. Soc. A 68, 874 (1955).
[r7] M. Verbin, O. Zilberberg, Y. E. Kraus, Y. Lahini, and Y. Silberberg, Phys. Rev. Lett. 110, 076403 (2013).
[r8] Y. E. Kraus, Y. Lahini, Z. Ringel, M. Verbin, and O. Zilberberg, Phys. Rev. Lett. 109, 106402 (2012).
[r9] J. Biddle and S. Das Sarma, Phys. Rev. Lett. 104, 070601 (2010).
[r10] S. Sil, S. K. Maiti, and A. Chakrabarti, Phys. Rev. Lett. 101, 076803 (2008).
[r11] M. Saha and S. K. Maiti, Physica E 93, 275 (2017).
[r12] A. M. Guo, Phys. Rev. E 75, 061915 (2007).
[r13] H. Lei, J. Chen, G. Nouet, S. Feng, Q. Gong, and X. Jiang, Phys. Rev. B 75, 205109 (2007).
[ros] M. Rossignolo and L. Dell'Anna, Phys. Rev. B 99, 054211 (2019).
[cp1] K. Walczak, Phys. Status Solidi (b) 241, 2555 (2004).
[cp2] S. K. Maiti, Physica E 36, 199 (2007).
http://arxiv.org/abs/2406.08091v1
20240612111616
Musielak-Orlicz-Sobolev embeddings: Necessary and Sufficient Conditions
[ "Ankur Pandey", "Nijjwal Karak" ]
math.FA
[ "math.FA" ]
[ Andrew Pearce-Crump June 17, 2024 ======================= § ABSTRACT In this paper we study the necessary and sufficient conditions on domain for Musielak-Orlicz-Sobolev embedding of the space W^1,Φ(·,·)(Ω) where Φ(x,t):=t^p(x)(log(e+t))^q(x). Keywords: Orlicz spaces, Orlicz-Sobolev spaces, Musielak-Orlicz spaces, Musielak-Orlicz-Sobolev spaces, Variable exponent Sobolev spaces. 2020 Mathematics Subject Classification: 46E35, 46E30. § INTRODUCTION We assume throughout the paper that Ω is an open subset of ℝ^n and the variable exponents p and q are continuous functions defined on Ω or ℝ^n, satisfying (p1) 1≤ p^-:= inf_x∈ℝ^n p(x)≤ sup_x∈ℝ^n p(x)=:p^+<∞ (q1) -∞<q^-:=inf_x∈ℝ^n q(x)≤ sup_x∈ℝ^n q(x)=:q^+<∞. The following two conditions on p and q will also be used which, in literature, are known as the log-Hölder continuous and the log-log-Hölder continuous respectively: (p2) |p(x)-p(y)|≤C/log(e+1/|x-y|) whenever x∈ℝ^n and y∈ℝ^n (q2) |q(x)-q(y)|≤C/log(e+(e+1/|x-y|)) whenever x∈ℝ^n and y∈ℝ^n. For the variable exponent Sobolev space W^1,p(·)(Ω), the Sobolev-type (continuous) embedding W^1,p(·)↪ L^p^*(·)(Ω) was established in <cit.> for bounded domains with locally Lipschitz boundary, with the condition (p2) on the exponent p. For Musielak-Orlicz-Sobolev spaces, Sobolev-type embedding have been studied in <cit.>. In this paper, we concentrate on the class of functions Φ(x,t):=t^p(x)(log(e+t))^q(x). For this class of functions Φ(x,t) the following embedding was established in <cit.> for the space W^1,Φ(·,·)_0 (Ω). Let p satisfy (p1), (p2) and q satisfy (q1), (q2). If p^+<n then for every u∈ W^1,Φ(·,·)_0 (Ω), ||u||_L^Ψ(·,·)(Ω)≤ C|| u||_W^1,Φ(·,·)(Ω), where Φ(x,t):=t^p(x)(log(e+t))^q(x) and Ψ(x,t):=t^p^*(x)(log(e+t))^q(x)p^*(x)/p(x). Here p^*(x) denotes the Sobolev conjugate of p(x), that is 1/p^*(x) =1/p(x)- 1/n. Here we establish the embedding for W^1,Φ(·,·) (Ω) for bounded domains Ω with Lipschitz boundary. Let Ω be an open and bounded set with Lipschitz boundary so that (Ω)>0. Suppose that the exponent p:Ω→ [1,∞) is log-Hölder continuous with 1≤ p^-≤ p^+<n and the exponent q:Ω→ (-∞,∞) is log-log-Hölder continuous with p^+ + q^+ ≥ 1. Consider Φ(x,t):=t^p(x)(log(e+t))^q(x) and Ψ(x,t):=t^p^*(x)(log(e+t))^q(x)p^*(x)/p(x), where 1/p^*(x) =1/p(x)-1/n. Then there exists a constant C such that whenever u∈ W^1,Φ(·,·)(Ω), ||u||_L^Ψ(·,·)(Ω)≤ C|| u||_W^1,Φ(·,·)(Ω). For the necessary part, it was shown in <cit.> that Ω must satisfy the measure density condition to have the embedding W^1,p(·)(Ω)↪ L^p^*(Ω), if p satisfies the log-Hölder condition. Note that Ω satisfies measure density condition if there exists a constant c>0 such that for every x in Ω̅ and each R in ]0,1/2], one has | B_R(x)∩Ω|≥ cR^n. This condition was first appeared as a necessary condition for Sobolev embedding in <cit.>. Recently, a weaker version of the measure density condition, namely log-measure density condition, has appeared in <cit.> as a necessary condition of certain Orlicz-Sobolev embedding and also in <cit.> as a necessary condition of Sobolev-type embedding of W^1,p(·)(Ω) if p is log-log-Hölder continuous on Ω. A subset Ω of ℝ^n is said to satisfy the log s-measure density condition if there exist two positive constants c and α such that for every x in Ω̅ and each R in ]0,1/2] one has c R^s (log (1/R))^-α≤ | B_R(x)∩Ω|. If s=n, one says that Ω satisfies the log-measure density condition. Here we prove that if the embedding holds, then Ω satisfies log-measure density condition. 
Let Ω be an open subset of ℝ^n, Φ(x,t):=t^p(x)(log(e+t))^q(x) and Ψ(x,t):=t^p^*(x)(log(e+t))^q(x)p^*(x)/p(x) with p^- + q^-≥ 1, where p^*(x) denotes the Sobolev conjugate of p(x), that is, 1/p^*(x) =1/p(x)-1/n. Suppose that 1. The exponent p(· ) is log-Hölder continuous with p^+<n, 2. W^1,Φ(·,·)(Ω) ↪ L^Ψ(·,·)(Ω) . Then Ω satisfies the log-measure density condition. Note that, we do not require the log-log-Hölder continuity of q here, unlike Theorem <ref>. On the other hand, the condition p^- + q^-≥ 1 implies the condition p^+ + q^+≥ 1 of Theorem <ref>, both of which are trivial if q^- ≥ 0. § NOTATIONS AND PRELIMINARY RESULTS A function f:(0,∞) →ℝ is almost increasing if there exists a constant a≥ 1 such that f(s) ≤ af(t) for all 0< s< t . Similarly, a function f:(0,∞) →ℝ is almost decreasing if there exists a constant b≥ 1 such that f(s) ≥ bf(t) for all 0< s< t . Let f:(0,∞)→ℝ and p,q>0.We say that f satisfies (i) (Inc)_p if f(t)/t^p is increasing; (ii) (aInc)_p if f(t)/t^p is almost increasing; (iii)(Deq)_q if f(t)/t^q is decreasing; (iv) (aDeq)_q if f(t)/t^q is almost decreasing. Let (Ω,Σ,μ) be a σ-finite, complete measure space. A function Φ: Ω× [0,∞)→ [0,∞] is said to be a (generalized) Φ-prefunction on (Ω,Σ,μ) if x→Φ(x,|f(x)|) is measurable for every f∈ L^0(Ω,μ) and Φ(x,.) is a Φ-prefunction for μ-almost every x∈Ω. We say that the Φ-prefunction Φ is a (generalized weak) Φ-function if it satisfies (aInc)_1. The sets of generalized weak Φ-function is denoted by Φ_w(Ω,μ). We say that Φ∈Φ_w(Ω,μ) satisfies (A0) if there exits a constant β∈(0,1] such that β≤Φ^-1(x,1)≤1/β for μ-almost every x∈Ω. Let Φ∈Φ_w(Ω,μ). We say that Φ satisfies (A1) if there exists β∈(0,1) such that βΦ^-1(x,t)≤Φ^-1(y,t) for every t∈[1,1/|B|], almost every x,y∈ B∩Ω and every ball B with |B|≤ 1. We say that Φ∈Φ_w(Ω,μ) satisfies (A2) if for every s>0 there exists β∈(0,1] and h∈ L^1(Ω)∩ L^∞(Ω) such that βΦ^-1(x,t)≤Φ^-1(y,t) for almost every x,y∈Ω and every t∈[h(x)+h(y),s]. Here we prove an elementary result regarding a class of functions satisfying the above conditions. If p(·) satisfies (p1),(p2), and q(·) satisfies (q1),(q2) then the function Φ(x,t):= t^p(x)(log(e+t))^q(x) with q(x)≥ 0 satisfies (A0), (A1), (A2) and (Dec)_p^+ +q^+. It is easy to verify the existence of constants c_1 and c_2 such that c_2t^1/p(x)/(log(e+t))^q(x)/p(x)≤Φ^-1(x,t)≤c_1t^1/p(x)/(log(e+t))^q(x)/p(x) and hence c_2/2^q^+/p^-≤c_2/(log(e+1))^q(x)/p(x)≤Φ^-1(x,1)≤c_1/(log(e+1))^q(x)/p(x)≤ c_1 . Now we can choose c_1≥ 1 so that c_1c_2 ≥ 2^q^+/p^-. Then (A0) follows by choosing β =1/c_1. To show condition (A1), by symmetry, we may assume that p(x)<p(y). If t∈ [1,1/|B|], then Φ^-1(x,t)/Φ^-1(y,t) ≤ c_1 t^1/p(x)-1/p(y)(log(e+t))^q(y)/p(y)-q(x)/p(x)≤ c_1 t^1/p(x)-1/p(y)(log(e+t))^q(y)-q(x)/p^- ≤ c_1|B|^-c/log(e+1/|x-y|)(log(e+1/|B|))^C/(p^-) log(e+(e+1/|x-y|)) ≤ c_1 e^cn log1/|x-y|/log(e+1/|x-y|) e^c_0log(e+(e+1/|x-y|^n))/log(e+(e+1/|x-y|)) ≤ c_1 e^cn e^max{c_0,c_0ln n/ln(ln(e+1))}. This yields that βΦ^-1(x,t) ≤Φ^-1(y,t) where 1/β = c_1 e^cn e^max{c_0,c_0 ln n/ln(ln(e+1))} and hence (A1) follows. To show condition (A2), first note that the function p(·) satisfies Nekvinda's decay condition, that is, there exist c_1∈(0,1) and p_∞∈[1,∞] such that ∫_(p(x)≠p_∞) c_1^1/|1/p_∞-1/p(x)|dx < ∞. 
When p(x) < p_∞, it follows from Young's inequality that (c_1 s)^p(x) ≤ s^p_∞ +c_1^1/|1/p_∞-1/p(x)| (c_1 s)^p(x)(log(e+s))^q(x) ≤ s^p_∞(log(e+s))^q(x)+c_1^1/|1/p_∞-1/p(x)|(log(e+s))^q(x) ≤ s^p_∞(log(e+s))^q_++c_1^1/|1/p_∞-1/p(x)|(log(e+1))^q^+, and the opposite case follows analogously. Hence there holds Φ(x, c_1s) ≤Φ_∞(s) + h(x), where h(x):= c_1^1/|1/p_∞-1/p(x)|(log(e+1))^q^+. So by Lemma 4.2.7 of <cit.>, (A2) follows (see also <cit.>). Finally, the condition (Dec)_p^+ +q^+ follows easily, since we have for 0≤ s≤ t, Φ(x,t)/t^p^+ +q^+ = t^p(x)-p^+-q^+(log(e+t))^q(x)≤ s^p(x)-p^+-q^+(log(e+s))^q(x)=Φ(x,s)/s^p^+ +q^+, where we have used the fact that the function t^Q(x)(log(e+t))^q(x) is decreasing when Q(x)+q^+ ≤ 0. Let Φ∈Φ_w(Ω,μ) and let ρ_Φ be given by ρ_Φ(f) := ∫_ΩΦ(x,|f(x)|)dμ(x) for all f∈ L^0(Ω,μ). The function ρ_Φ is called a modular. The set L^Φ(·,·)(Ω,μ) := {f∈ L^0(Ω,μ) :ρ_Φ(λ f)< ∞ for some λ > 0 } is called a generalized Orlicz space or Musielak-Orlicz (M-O) space. Let Φ∈Φ_w(Ω,μ). The function u∈ L^Φ(·,·)∩ L^1_loc(Ω) belongs to Musielak-Orlicz-Sobolev space W^1,Φ(·,·)(Ω) if its weak partial derivatives δ_α u exist and belong to L^Φ(·,·)(Ω) for all |α|≤ 1. We define a semimodular on W^1,Φ(·,·)(Ω) by ρ_W^1,Φ(·,·)(Ω)(u) := ∑_0≤|α|≤ 1ρ_Φ (δ_α u). It induces a (quasi-) norm ||u||_W^1,Φ(·,·) (Ω) := inf{λ >0 : ρ_ W^1,Φ(·,·) (Ω)(u/λ)≤ 1}. Here we prove three lemmas to estimate the norm of the characteristic function of a measurable set, considering three different sets of values of q. Let Φ:Ω×[0,∞)→[0,∞) be given by Φ(x,t):=t^p(x)(log(e+t))^q(x) with q(x)≥ 0 for all x and A⊂Ω is a measurable set. Then min{|A|^1/p_A^+,|A|^1/p_A^-}≤1_A_L^Φ(·,·)(Ω)≤max{ |A|^1/p_A^+ (log (e+1/|A|))^q_A^+ ,|A|^1/p_A^- (log(1+e))^q_A^+}, We start with the proof of the second inequality of (<ref>). Let u>|A| and assume first that u≤ 1. Then _A Φ(x,1/u^1/p_A^+(log(e+1/u))^q_A^+)dx = _A (log(e+1/u^1/p _A^+(log(e+1/u))^q_A^+))^q(x)/u^p(x)/p_A^+(log(e+1/u))^p(x)q_A^+ dx ≤ |A|(log(e+1/u^1/p_A^+(log(e+1/u))^q_A^+))^q_A^+/u(log(e+1/u))^q_A^+ < 1, where in the final inequality we have used the fact that u^1/p_A^+-1(log(e+1/u))^q_A^+≥ 1. Hence we have 1_A_L^Φ(·,·)(Ω)≤ u^1/p_A^+ (log (e+1/u))^q_A^+. If u>1, we can similarly show that 1_A_L^Φ(·,·)(Ω)≤ u^1/p_A^- (log(1+e))^q_A^+. The second inequality follows as u → |A|^+. Let us then prove the first inequality of (<ref>). Let u<|A| and assume first that u≤ 1. Then _A Φ(x,1/u^1/p_A^-)dx = _A (log(e+1/u^1/p_A^-))^q(x)/u^p(x)/p_A^- dx ≥|A|/u >1. Hence we get u^1/p_A^-≤1_A_L^Φ(·,·)(Ω). If u>1, we can similarly show that u^1/p_A^+≤1_A_L^Φ(·,·)(Ω). The first inequality follows as u → |A|^-. Let Φ:Ω×[0,∞)→[0,∞) be given by Φ(x,t):=t^p(x)(log(e+t))^q(x) with q(x) < 0 for all x. Assume that p_A^- + q_A^- ≥ 1 and A⊂Ω is a measurable set with |A|< 1/2. Then |A|^1/p_A^- (log (e+1/|A|))^q_A^-/p_A^-≤1_A_L^Φ(·,·)(Ω)≤max{|A|^1/p_A^+,|A|^1/p_A^-}, We start with the proof of the second inequality of (<ref>). Let u>|A| and assume that u≤ 1. Then we have _A Φ(x,1/u^1/p_A^+)dx = _A (log(e+1/u^1/p_A^+))^q(x)/u^p(x)/p_A^+ dx ≤|A|/u <1, and hence u^1/p_A^+≥1_A_L^Φ(·,·)(Ω). If u>1, we can similarly show that u^1/p_A^-≥1_A_L^Φ(·,·)(Ω). The second inequality follows as u → |A|^+. Let us then prove the first inequality of (<ref>). Let u<|A|. 
Then _A Φ(x,1/u^1/p_A^-(log(e+1/u))^q_A^-/p_A^-)dx = _A (log(e+1/u^1/p _A^-(log(e+1/u))^q_A^-/p_A^-))^q(x)/u^p(x)/p_A^-(log(e+1/u))^p(x)q_A^-/p_A^- dx ≥ |A|(log(e+1/u^1/p_A^-(log(e+1/u))^q_A^-/p_A^-))^q_A^-/u(log(e+1/u))^q_A^- > 1, where we have used the fact that u^1/p_A^- -1(log(e+1/u))^q_A^-/p_A^- > 1 under the assumption p_A^- + q_A^- ≥ 1 which follows from the increasing property of the function Φ(t) = t^r/(log(e+t))^m for all t>2 when r≥ m, m>0. Therefore we have u^1/p_A^-(log(e+1/u))^q_A^-/p_A^-≤ ||1_A||_L^Φ(·,·)(Ω). The first inequality follows as u → |A|^-. Let Φ:Ω×[0,∞)→[0,∞) be given by Φ(x,t):=t^p(x)(log(e+t))^q(x) where q(x) < 0 for some x and q(x)≥ 0 for some x. Assume that p_A^- + q_A^- ≥ 1 and A⊂Ω is a measurable set with |A|< 1/2. Then, there exist constants b_1>0, b_2>0 such that b_1|A|^1/p_A^-(log (e+1/|A|))^q_A^-/p_A^-≤1_A_L^Φ(·,·)(Ω)≤ b_2 max{ |A|^1/p_A^+(log (e+1/|A|))^q_A^+ ,|A|^1/p_A^-(log(1+e))^q_A^+}, We start with the proof of the second inequality of (<ref>). Let u>|A| and assume first that u≤ 1. Then _A Φ(x,1/2u^1/p_A^+(log(e+1/2u))^q_A^+)dx = _A (log(e+1/2u^1/p _A^+(log(e+1/2u))^q_A^+))^q(x)/2^p(x) u^p(x)/p_A^+(log(e+1/2u))^p(x)q_A^+ dx = _A∩ (x: q(x)≥ 0)(log(e+1/2u^1/p _A^+(log(e+1/2u))^q_A^+))^q(x)/2^p(x)u^p(x)/p_A^+(log(e+1/2u))^p(x)q_A^+ dx + _A∩ (x: q(x)< 0)(log(e+1/2u^1/p _A^+(log(e+1/2u))^q_A^+))^q(x)/2^p(x)u^p(x)/p_A^+(log(e+1/2u))^p(x)q_A^+ dx ≤ 1/2|A|(log(e+1/2u^1/p_A^+(log(e+1/2u))^q_A^+))^q_A^+/u(log(e+1/2u))^q_A^+ + |A|/2u < 1, where in the final inequality we have used the fact that u^1/p_A^+-1(log(e+1/u))^q_A^+≥ 1. Hence we have 1_A_L^Φ(·,·)(Ω)≤ 2u^1/p_A^+ (log (e+1/2u))^q_A^+. If u>1, we can similarly show that 1_A_L^Φ(·,·)(Ω)≤ u^1/p_A^- (log(1+e))^q_A^+. The second inequality follows as u → |A|^+. Let us then prove the first inequality of (<ref>). Let u<|A|. Then _A Φ(x,1/2^1/p_A^+u^1/p_A^-(log(e+1/u2^1/p_A^+))^q_A^-/p_A^-)dx = _A (log(e+1/2^1/p_A^+u^1/p _A^-(log(e+1/u2^1/p_A^+))^q_A^-/p_A^-))^q(x)/2^p(x)/p_A^+u^p(x)/p_A^-(log(e+1/u2^1/p_A^+))^p(x)q_A^-/p_A^- dx = _A ∩ (x: q(x) ≥ 0)(log(e+1/2^1/p_A^+u^1/p _A^-(log(e+1/u2^1/p_A^+))^q_A^-/p_A^-))^q(x)/2^p(x)/p_A^+u^p(x)/p_A^-(log(e+1/u2^1/p_A^+))^p(x)q_A^-/p_A^- dx + _A ∩ (x: q(x) < 0)(log(e+1/2^1/p_A^+u^1/p _A^-(log(e+1/u2^1/p_A^+))^q_A^-/p_A^-))^q(x)/2^p(x)/p_A^+u^p(x)/p_A^-(log(e+1/u2^1/p_A^+))^p(x)q_A^-/p_A^- dx ≥ |A|/2u + |A|(log(e+1/2^1/p_A^+u^1/p_A^-(log(e+1/u2^1/p_A^+))^q_A^-/p_A^-))^q_A^-/2u(log(e+1/u2^1/p_A^+))^q_A^- > 1, where in the final inequality we have used the fact that u^1/p_A^- -1(log(e+1/u2^1/p_A^+))^q_A^-/p_A^- > 1 when p_A^- + q_A^- ≥ 1. Therefore we have 2^1/p_A^+u^1/p_A^- (log (e+1/u2^1/p_A^+))^q_A^-/p_A^-≤1_A_L^Φ(·,·)(Ω). The first inequality follows as u → |A|^-. In the following lemma, we extend the exponent p from Ω to ℝ^n preserving the modulus of continuity as well as upper and lower bounds, using the technique of Edmunds and Rákosník <cit.> which was originally introduced by Hestenes <cit.>. The same technique was also used in <cit.>. We recall the proof here for the convenience of the readers. Also note that, here we will use the lemma only to extend the exponent functions. Let Ω⊂ℝ^n be an open, bounded set with Lipschitz boundary. Let p:Ω→ (-∞,∞) satisfy the uniform continuity condition |p(x)-p(y)|≤ρ(|x-y|) for all x,y ∈Ω where ρ is concave for t≥ 0 and ρ(t) →0 for t→ 0^+. Then there exists an extension p_1 on ℝ^n of p and a constant C>0, such that |p_1(x)-p_1(y)|≤ρ(C|x-y|) for all x,y ∈Ω Moreover, there holds p_1^-=p^- and p_1^+=p^+. Let V_j, j= 1, . . 
., k, be the covering of the boundary ∂Ω which corresponds to the local description of ∂Ω. More precisely, for each j= 1, . . ., k, there is a local coordinate system (x', x_n) such that V_j= {(x', x_n): |x_i|<δ, i= 1, . . ., n-1, a_j(x')-β<x_n<a_j(x')+β}, V_j∩Ω = {x∈ V_j : a_j(x')<x_n<a_j(x')+β} and {x∈V̅_̅j̅ : x_n<a(x')}∩Ω̅ = ∅, where β, δ are some fixed positive numbers and a_j∈ C^0,1((-δ,δ)^n-1) are the functions describing the boundary. Define the mappings T_j : (-δ,δ)^n-1× (-β,β)→ℝ^n, j= 1, . . ., n, by T_j(x',x_n) = (x', x_n+ a_j(x')). Then the T_j are bi-Lipschitz mappings. To these flattened domains T_j^-1(V_j), define the reflection operator Ef(x) = f(x' ,x_n) for x_n≥0, f(x' ,x_n) for x_n<0 and the functions p_1_j on V_j∪Ω by p_1_j(x) = p(x) for x∈Ω, Er_j(T_J^-1(x))) for x∈ V_j/Ω, where r_j := p∘ T_j. Note that since E, T_j, and T_j^-1 are Lipschitz there exists C>0 such that |p_1_j(x)-p_1_j(y)| ≤ρ(C|x-y|) for all x,y ∈Ω Then extend the functions p_1_j on Ω to p̃_̃1̃_j on ℝ^n preserving their upper and lower bounds. Note that this extension is possible due to McShane <cit.> and the fact that ρ is concave with ρ(t) →0 for t→ 0^+. Define p_1:ℝ^n → (-∞,∞) by p_1(x) := min_j=1,...,kp̃_̃1̃_j(x) for x∈ℝ^n Thus there holds |p_1(x)-p_1(y)|≤ρ(C|x-y|) for all x,y ∈Ω This proves the theorem. Let Ω⊂ℝ^n be an open, bounded set with Lipschitz boundary. Suppose that p satisfies (p1), (p2) and q satisfies (q1) and (q2). Then there exists an extension p_1 on ℝ^n of p with p_1^-=p^-, p_1^+=p^+ and an extension q_1 on ℝ^n of q with q_1^-=q^- and q_1^+=q^+, which satisfies the same local uniform continuity conditions (with possibly different constants). Since the mapping ρ:t→ C/log(e+1/t) is concave for t≥ 0 and ρ(t) →0 for t→ 0^+ and p satisfies uniformly the local continuity condition such that |p(x)-p(y)|≤ρ(|x-y|) for all x,y ∈Ω, Due to Lemma <ref> it follows that there exists an extension p_1 on ℝ^n of p, which possesses all the desired properties. The proof for the extension of q is similar. Let Ω be an (ϵ,δ) -domain with rad(Ω)>0. Suppose that Φ∈Φ_w(Ω,μ) satisfies (A0), (A1), (A2) and (aDec)_q with q≥1. Let Ψ∈Φ_w(ℝ^n,μ) be the extension of Φ which also satisfies (A0), (A1), (A2) and (aDec)_q with q≥1. Then there exists an operator Λ : W^1,Φ(·,·)(Ω) ↪ W^1,Ψ(·,·)(Ω) and a constant B such that ||Λ u||_W^1,Ψ(·,·)(ℝ^n)≤ B ||u||_W^1,Φ(·,·)(Ω), for every u∈ W^1,Φ(·,·)(Ω) . § MAIN RESULTS Proof of Theorem <ref> By Lemma <ref>, we obtain an extension p_1 on ℝ^n of p and an extension q_1 on ℝ^n of q. Consider Φ_1(x,t):=t^p_1(x) (log(e+t))^q_1(x) and Ψ_1(x,t):=t^p_1^*(x)(log(e+t))^q_1(x)p_1^*(x)/p_1(x). Note that p_1 satisfies the conditions (p1) and (p2) whereas q_1 satisfies the conditions (q1) and (q2), and therefore by Lemma <ref>, Φ_1 satisfies (A0), (A1), (A2) and (Dec)_p_1^+ +q_1^+. Since p^+ + q^+≥ 1, we get, by Theorem <ref>, a linear extension operator ℰ:W^1,Φ(·,·)(Ω) → W^1,Φ_1(·,·)(ℝ^n) and a constant c_1 such that ||v||_W^1,Φ_1(·,·)(ℝ^n)≤ c_1 ||u||_W^1,Φ(·,·)(Ω) and v|_Ω=u for all u∈ W^1,Φ_1(·,·)(Ω), where ℰu=:v. On the other hand, using Theorem <ref>, we get a constant c_2 such that ||v||_L^Ψ_1(·,·)(ℝ^n)≤ c_2||v||_W^1,Φ_1(·,·)(ℝ^n) for all v∈ W^1,Φ_1(·,·)_0(ℝ^n). Also, since Φ_1 satisfies (A0), (A1), (A2) and (Dec)_p_1^+ +q_1^+, by Theorem 6.4.4 of <cit.> we have W^1,Φ_1(·,·)_0(ℝ^n) =W^1,Φ_1(·,·)(ℝ^n). Hence the inequality (<ref>) holds for all v∈ W^1,Φ_1(·,·)(ℝ^n). 
Finally, we use the inequalities (<ref>), (<ref>) and the facts that Φ_1, Ψ_1 are the extensions of Φ and Ψ respectively, we obtain, for all u∈ W^1,Φ(·,·)(Ω), ||u||_L^Ψ(·,·)(Ω)=||v||_L^Ψ(·,·)(Ω)≤ ||v||_L^Ψ_1(·,·)(ℝ^n)≤ c_2||v||_W^1,Φ_1(·,·)(ℝ^n) ≤ C ||u||_W^1,Φ(·,·)(Ω), where C=c_1c_2. Note that in the above proof, we are using Lemma <ref> only to extend the functions p and q which satisfy the same local uniform continuity condition and not using the boundedness of the linear extension operator. We do not know, if we can avoid the lemma and prove the extension of the functions p and q in more general domains. Proof of Theorem <ref> For a fixed x in Ω̅ define A_R:= B_R(x)∩Ω. It is enough to consider the case when |A_R| ≤ 1, otherwise |A_R|≥ 1 ≥ R^n whenever R≤ 1 and there is nothing to prove. Moreover, it is enough to consider R≤ r_0 for some 0<r_0≤ 1/4. For such an R, denote by R̃≤ R the smallest real number such that |A_R|= 1/2 |A_R| To prove Theorem <ref>, we need following Lemma: If we have the same assumptions as in Theorem <ref>, then there exist positive constants C_1, C_2, C_3 such that for all x in Ω̅ and every R in ]0,1] we have R- R̃≤ C_1 |A_R|^1/n+1/p_A_R^+-1/p_A_R^- (log (e+1/|A_R|))^q_A_R^+ when q(x) ≥ 0 for all x, R- R̃≤ C_2 |A_R|^1/n+1/p_A_R^+-1/p_A_R^- (log (e+1/|A_R|))^Q_A_R when q(x) < 0 for all x, and R- R̃≤ C_3 |A_R|^1/n+1/p_A_R^+-1/p_A_R^- (log (e+1/|A_R|))^X_A_R when q(x) < 0 for some x and q(x)≥ 0 for some x. Since W^1,Φ(·,·)(Ω) ↪ L^Ψ(·,·)(Ω), there exists a constant c_1 >0 such that whenever u∈ W^1,Φ(·,·) (Ω) one has the inequality ||u||_L^Ψ(·,·)(Ω)≤ c_1|| u||_W^1,Φ(·,·)(Ω). For a fixed x∈Ω̅ let u(y):=ϕ(y-x), where y∈Ω and ϕ is a cut-off function so that * ϕ:ℝ^n → [0,1], * spt ϕ⊂ B_R(0), * ϕ|_B_R̃(0)=1, and * |∇ϕ | ≤c̃ / (R-R̃) for some constant c̃. Note that we have the inequalities 1_B_R̃_L^Ψ(·,·)(Ω)≤u_L^Ψ(·,·)(Ω), u_L^Φ(·,·)(Ω)≤1_B_R_L^Φ(·,·)(Ω) and ∇ u_L^Φ(·,·)(Ω)≤c̃/R-R̃1_B_R B_R̃_L^Φ(·,·)(Ω)≤c̃/R-R̃ 1_B_R_L^Φ(·,·)(Ω) . Use these inequalities in inequality (<ref>) to obtain 1_B_R̃_L^Ψ(·,·)(Ω) ≤ c_1(1_B_R_L^Φ(·,·)(Ω)+c̃/R-R̃ 1_B_R_L^Φ(·,·)(Ω)) ≤ 2c_1max{1,c̃}/R-R̃1_B_R_L^Φ(·,·)(Ω) and hence R- R̃≤c_21_B_R_L^Φ(·,·)(Ω)/ 1_B_R̃_L^Ψ(·,·)(Ω), where c_2:=2c_1max{1,c̃}. Case-1 (q(x) ≥ 0 for all x∈Ω): Using the norm estimates in Lemma <ref>, we get R- R̃ ≤ c_2 |A_R|^1/p_A_R^+ (log (e+1/|A_R|))^q_A_R^+/|A_R̃|^1/p*_A_R^- = c_2 |A_R|^1/p_A_R^+ (log (e+1/|A_R|))^q_A_R^+/|A_R̃|^1/p_A_R^- -1/n = c_2 2^1/p_A_R^- -1/n|A_R|^1/p_A_R^+ -1/p_A_R^- +1/n (log (e+1/|A_R|))^q_A_R^+ ≤ c_2 2^1/p^- -1/n |A_R|^1/p_A_R^+ -1/p_A_R^- +1/n (log (e+1/|A_R|))^q_A_R^+ as claimed. Case-2 (q(x) ≤ 0 for all x∈Ω): Using the norm estimates in Lemma <ref>, we get R- R̃ ≤ c_2 |A_R|^1/p_A_R^+/|A_R̃|^1/p*_A_R^- (log (e+1/|A_R̃|))^q_A_R^- p*_A_R^+/p*_A_R^- p_A_R^+ = c_2 |A_R|^1/p_A_R^+/|A_R̃|^1/p_A_R^- -1/n(log (e+1/|A_R̃|))^-Q_A_R ≡ c_2 2^1/p_A_R^- -1/n|A_R|^1/p_A_R^+ -1/p_A_R^- +1/n (log (e+1/|A_R|))^Q_A_R ≤ c_2 2^1/p^- -1/n |A_R|^1/p_A_R^+ -1/p_A_R^- +1/n (log (e+1/|A_R|))^Q_A_R where Q_A_R =-q_A_R^- (n-p_A_R^-)/p_A_R^- (n-p_A_R^+) ≥ 0. 
Case-3 (q(x) < 0 for some x∈Ω and q(x)≥ 0 for some x∈Ω): Using the norm estimates in Lemma <ref>, we get R- R̃ ≤ c_2' |A_R|^1/p_A_R^+(log (e+1/|A_R|))^q_A_R^+/|A_R̃|^1/p*_A_R^- (log (e+1/|A_R̃|))^q_A_R^- p*_A_R^+/p*_A_R^- p_A_R^+ = c_2' |A_R|^1/p_A_R^+(log (e+1/|A_R|))^q_A_R^+/|A_R̃|^1/p_A_R^- -1/n(log (e+1/|A_R̃|))^-Q_A_R ≡ c_2' 2^1/p_A_R^- -1/n|A_R|^1/p_A_R^+ -1/p_A_R^- +1/n (log (e+1/|A_R|))^T_A_R ≤ c_2' 2^1/p^- -1/n |A_R|^1/p_A_R^+ -1/p_A_R^- +1/n (log (e+1/|A_R|))^T_A_R ≤ c_2' 2^1/p^- -1/n |A_R|^1/p_A_R^+ -1/p_A_R^- +1/n (log (e+1/|A_R|))^S_A_R where c_2'= c_2b_2/b_1, T_A_R = q_A_R^+ + Q_A_R = q_A_R^+ - q_A_R^-(n-p_A_R^-)/p_A_R^- (n-p_A_R^+) and S_A_R = max{q_A_R^+,Q_A_R,T_A_R} To continue the proof of Theorem <ref>, construct the sequence { R_i } by setting R_0:=R, and then define R_i+1:=R̃_̃ĩ inductively for i ≥ 0. It follows that |A_R_i|=1/2^i | A_R|, with lim_i →∞ R_i=0. Case-1 (q(x) ≥ 0 for all x∈Ω): Using Lemma <ref> one obtains R_i -R_i+1 ≤ C_1 |A_R_i|^1/n+1/p_A_R_i^+-1/p_A_R_i^-(log (e+1/|A_R_i|))^q_A_R_i^+ ≤ C_1 |A_R_i|^1/n+1/p_A_R^+-1/p_A_R^-(log (e+1/|A_R_i|))^q_A_R^+ = C_1 |A_R|^η_R/2^i η_R(log (e+2^i/|A_R|))^q_A_R^+, where η_R :=1/n+1/p_A_R^+-1/p_A_R^-. Since we have (log (e+2^i/|A_R|))^q_A_R^+≤ i^q_A_R^+(log (e+2/|A_R|))^q_A_R^+≤ 2^q^+i^q_A_R^+(log (e+1/|A_R|))^q_A_R^+ for i≥ 1, we obtain R_i -R_i+1 ≤ c_3i^q_A_R^+|A_R|^η_R/2^i η_R(log (e+1/|A_R|))^q_A_R^+, where c_3=C_12^q^+. Note that η_R ≥η :=1/n+1/p^+-1/p^- > 0 and by integral test ∑_i=1^∞i^q_A_R^+2^-iη_R≤(q_A_R^+)!/(η_R ln2)^(q_A_R^+ +1) and hence R = ∑_i=0^∞(R_i -R_i+1) ≤ c_3 |A_R|^η_R(log (e+1/|A_R|))^q_A_R^+(1+∑_i=1^∞i^q_A_R^+2^-iη_R) ≤ c_3 |A_R|^η_R(log (e+1/|A_R|))^q_A_R^+((q_A_R^+)!/(η_R ln2)^(q_A_R^+ +1)+1) ≤ c_3 |A_R|^η_R(log (e+1/|A_R|))^q_A_R^+((q_A_R^+)!/(ηln2)^(q_A_R^+ +1)+1) ≤ c_3 |A_R|^η_R(log (e+1/|A_R|))^q_A_R^+((q^+)!/(min(1,η)ln2)^(q^+ +1)+1) Moreover, since c_4:=1/ max{1,(c_3(q^+)!/(min(1,η)ln2)^(q^+ +1)+c_3)}≤ 1 one has |A_R| (log (e+1/|A_R|))^q_A_R^+/η_R≥ c_4^1 /η_R R^1/η_R≥ c_4^ 1 /η R^1 /η_R = c_4^1 / η R^n R^β_R / η_R , where β_R := 1-n η_R. Now, we would like to find a constant η̃>0, independent of x and R, such that 1/s+1/p_A_R^+-1/p_A_R^- =: η_R≥η̃>0 for all R≤ r_0. Towards this end, the log-Hölder continuity of p gives, for any z and y in A_R, |1/p(z)-1/p(y)|≤C_log/log(e+1/|z-y|), and taking the supremum over all pairs of points in A_R one gets 1/p_A_R^--1/p_A_R^+≤C_log/log(1/(2R)). Suppose now that for some R≤ 1/4 we have that η_R≤ 0. Then (<ref>) gives 1/s≤C_log/log(1/2R), which further implies R≥1/2e^-sC_log. Hence we have the following conclusion: * If 1/2e^-sC_log>1/4, then there is no R≤1/4 for which η_R≤ 0. * If 1/2e^-sC_log≤1/4, then η_R≤ 0 implies R≥1/2e^-sC_log. Therefore if we choose r_0=1/2min{1/4,1/2e^-sC_log}, then η_r_0>0, and also 1/s>C_log/log(1/(2r_0)). But η_r_0 may depend on the point x fixed at the beginning of the proof. To obtain the required η̃, we apply again log-Hölder continuity of 1/p on A_r_0, to obtain 1/p_A_r_0^--1/p_A_r_0^+≤C_log/log(1/(2r_0)), and (<ref>) together with (<ref>) give η_r_0=1/s+1/p_A_r_0^+-1/p_A_r_0^-≥1/s-C_log/log(1/(2r_0))>0. Choosing η̃:=1/s-C_log/log(1/(2r_0)), we get that η_R≥η_r_0≥η̃>0 for all R≤ r_0. This is our desired η̃. Therefore, from equation (<ref>) one sees that if a positive lower bound for R^β_R / η_R is provided, the proof of Theorem <ref> is finished. 
To achieve such a lower bound, we see that from the log-Hölder continuity of p, | p(z)-p(y)|≤C_log/log(e+1/|z-y|) ; taking the supremum over pairs of points in A_R one gets p_A_R^+-p_A_R^-≤C_log/log(1/(2R)) , or log(1 / (2R)^p_A_R^+-p_A_R^-)≤ C_log, therefore R^p_A_R^+-p_A_R^-≥e^-C_log/2^p_A_R^+- p_A_R^- ≥e^-C_log/2^(p^+-p^-). But R^β_R/η_R≥ R^β_R/η = R^n(p_A_R^+ - p_A_R^-)/η p_A_R^+ p_A_R^- ≥ (R^p_A_R^+ - p_A_R^-)^n / η (p^-)^2 , hence using (<ref>) the required bound R^β_R/η_R≥(e^-C_log/2^(p^+-p^-))^n / η (p^-)^2 =: c_5 >0. Taking f(t)=t (log (e+1/t))^q_A_R^+/η_R, we see that (<ref>) becomes f(|A_R|)≥ cR^n, where c := c_4^1 / η c_5 and hence |A_R|≥ f^-1(cR^n) which further implies that c R^n (log (e+1/R))^-q^+/η≤ c R^n (log (e+1/R))^-q_A_R^+/η_R≤ | B_R(x) ∩Ω |. So, Ω satisfies the log-measure density condition. Case-2 (q(x) ≤ 0 for all x∈Ω): Using Lemma <ref> one obtains R_i -R_i+1 ≤ C_2 |A_R_i|^1/n+1/p_A_R_i^+-1/p_A_R_i^-(log (e+1/|A_R_i|))^Q_A_R ≤ C_2 |A_R_i|^1/n+1/p_A_R^+-1/p_A_R^-(log (e+1/|A_R|))^Q_A_R = C_2 |A_R|^η_R/2^i η_R(log (e+2^i/|A_R|))^Q_A_R ≤ c_62^Qi^Q_A_R|A_R|^η_R/2^i η_R(log (e+1/|A_R|))^Q_A_R, where η_R :=1/n+1/p_A_R^+-1/p_A_R^- and c_6=C_2(2)^Q . Note that η_R ≥η :=1/n+1/p^+-1/p^- > 0 and hence R = ∑_i=0^∞(R_i -R_i+1) ≤ c_6 |A_R|^η_R(log (e+1/|A_R|))^Q_A_R(1+∑_i=1^∞i^Q_A_R2^-iη_R) ≤ c_6 |A_R|^η_R(log (e+1/|A_R|))^Q_A_R(1+(Q_A_R!)/(η_R ln2)^(Q_A_R +1)) ≤ c_6 |A_R|^η_R(log (e+1/|A_R|))^Q_A_R(1+(Q_A_R )!/(ηln2)^(Q_A_R +1)) ≤ c_6 |A_R|^η_R(log (e+1/|A_R|))^Q_A_R(1+(Q )!/(min(1,η) ln2)^(Q +1)) where Q = -q^-(n-p^-)/p^-(n-p^+) ≥ Q_A_R≥ 0. Moreover, since c_7:=1/ max{(1,c_6+c_6(Q )!/(min(1,η) ln2)^(Q +1))}≤ 1 one has |A_R| (log (e+1/|A_R|))^Q_A_R/η_R≥ c_4^1 /η_R R^1/η_R≥ c_7^ 1 /η R^1 /η_R = c_7^1 / η R^n R^β_R / η_R , where β_R := 1-n η_R. Now we can proceed similarly as in case-1 to obtain c R^n (log (e+1/R))^-Q/η≤ c R^n (log (e+1/R))^-Q_A_R/η_R≤ | B_R(x) ∩Ω |. So, Ω satisfies the log-measure density condition. Case-3 (q(x) < 0 for some x∈Ω and q(x)≥ 0 for some x∈Ω): Using Lemma <ref> one obtains R_i -R_i+1 ≤ C_3 |A_R_i|^1/n+1/p_A_R_i^+-1/p_A_R_i^-(log (e+1/|A_R_i|))^S_A_R ≤ C_3 |A_R_i|^1/n+1/p_A_R^+-1/p_A_R^-(log (e+1/|A_R|))^S_A_R = C_3 |A_R|^η_R/2^i η_R(log (e+2^i/|A_R|))^S_A_R ≤ c_82^Si^S_A_R|A_R|^η_R/2^i η_R(log (e+1/|A_R|))^S_A_R, where η_R :=1/n+1/p_A_R^+-1/p_A_R^- and c_8=C_3(2)^S . Note that η_R ≥η :=1/n+1/p^+-1/p^- > 0 and hence R = ∑_i=0^∞(R_i -R_i+1) ≤ c_8 |A_R|^η_R(log (e+1/|A_R|))^S_A_R(1+∑_i=1^∞i^S_A_R2^-iη_R) ≤ c_8 |A_R|^η_R(log (e+1/|A_R|))^S_A_R(1+(S_A_R!)/(η_R ln2)^(S_A_R +1)) ≤ c_8 |A_R|^η_R(log (e+1/|A_R|))^S_A_R(1+(S_A_R )!/(ηln2)^(S_A_R +1)) ≤ c_8 |A_R|^η_R(log (e+1/|A_R|))^S_A_R(1+(S )!/(min(1,η) ln2)^(S +1)) where S= max{q^+,Q,T}≥ S_A_R≥ 0 and T = q^+ -q^-(n-p^-)/p^-(n-p^+) ≥ T_A_R. Moreover, since c_9:=1/ max{(1,c_8+c_8(S )!/(min(1,η) ln2)^(S +1))}≤ 1 one has |A_R| (log (e+1/|A_R|))^S_A_R/η_R≥ c_9^1 /η_R R^1/η_R≥ c_9^ 1 /η R^1 /η_R = c_9^1 / η R^n R^β_R / η_R , where β_R := 1-n η_R. Now we can proceed similarly as in case-1 to obtain c R^n (log (e+1/R))^-S/η≤ c R^n (log (e+1/R))^-S_A_R/η_R≤ | B_R(x) ∩Ω |. So, Ω satisfies the log-measure density condition. References alpha Ankur Pandey Department of Mathematics, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, Hyderabad-500078, India p20210424@hyderabad.bits-pilani.ac.in; pandeyankur600@gmail.com Nijjwal Karak Department of Mathematics, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, Hyderabad-500078, India nijjwal@gmail.com ; nijjwal@hyderabad.bits-pilani.ac.in
http://arxiv.org/abs/2406.07875v2
20240612050851
Carbon Market Simulation with Adaptive Mechanism Design
[ "Han Wang", "Wenhao Li", "Hongyuan Zha", "Baoxiang Wang" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.MA" ]
[ Ji Zhang ============ § ABSTRACT A carbon market is a market-based tool that incentivizes economic agents to align individual profits with the global utility, i.e., reducing carbon emissions to tackle climate change. Cap and trade stands as a critical principle based on allocating and trading carbon allowances (carbon emission credit), enabling economic agents to follow planned emissions and penalizing excess emissions. A central authority is responsible for introducing and allocating those allowances in cap and trade. However, the complexity of carbon market dynamics makes accurate simulation intractable, which in turn hinders the design of effective allocation strategies. To address this, we propose an adaptive mechanism design framework, simulating the market using hierarchical, model-free multi-agent reinforcement learning (MARL). Government agents allocate carbon credits, while enterprises engage in economic activities and carbon trading. This framework illustrates agents' behavior comprehensively. Numerical results show MARL enables government agents to balance productivity, equality, and carbon emissions. Our project is available at <https://github.com/xwanghan/Carbon-Simulator>. § INTRODUCTION Climate change has emerged as a pressing worldwide concern  <cit.>, significantly imperiling global ecosystems, economic systems, and sociopolitical stability. The United Nations reports that in developing regions, one in ten individuals subsists on less than US$ 1.90 daily  <cit.>, with 2.2 billion people deprived of access to safely managed potable water resources  <cit.>. The burgeoning climate crisis amplifies these challenges, as worldwide temperature escalations provoke droughts and rising sea levels, exacerbating famines and enhanced forced displacements  <cit.>. In 2016, 196 nations endorsed the Paris Agreement to mitigate climate change collaboratively. However, pursuing transnational environmental targets often conflicts with short-term interests, requiring mechanisms to reconcile local and international objectives  <cit.>. Carbon markets exemplify such mechanisms  <cit.>, incentivizing economic agents to curb emissions. The cap and trade format, predominant in carbon markets, involves allocating and trading allowances  <cit.>. Economic agents must possess sufficient allowances to offset emissions or face penalties for surplus emissions. The cap and trade system sets a predetermined limit on allowances within an economy, with a central authority introducing and allocating allowances based on specified objectives. While this policy helps balance efficiency and fairness, determining the optimal allocation remains challenging in general economic contexts. The high-dimensional dynamics of the carbon market, influenced by rational, self-interested, and far-sighted economic agents, lead to market simulation reliance on models like CGE (computable general equilibrium)  <cit.> or ABM (agent-based modeling)  <cit.> frameworks, employing simplifying assumptions that are arduous to validate, such as production and trading behaviors. Given the unique nature of the carbon market, we integrate the AI Economist  <cit.> to simulate market dynamics. Our adaptive mechanism design framework, employing hierarchical, model-free MARL, mimics the carbon market. Lower-level enterprise agents engage in realistic economic activities, such as emitting carbon dioxide, trading emission credits, and investing in emission reduction projects. 
Higher-level government agents analyze diverse allocation strategies to achieve balanced efficiency and fairness, leading to significant carbon emission reductions. The framework demonstrates the conduct of rational, self-interested, and far-sighted agents within the carbon market. We emphasize that our approach is not a simple transfer of the AI-Economist from taxation to carbon credit allocation. Simulating the carbon market is challenging due to limited data, fluctuating regulations, and non-market factors. To validate our simulator, we conducted comparisons with several widely adopted indicator allocation approaches at the firm level <cit.>. The simulation results indicate reasonable action responses by enterprise agents to these allocation policies. Additionally, numerical findings demonstrate that government agents, through MARL, effectively discover allocation policies capable of balancing productivity, equity, and carbon emissions. Our primary contributions encompass: 1) We propose a systematic carbon market simulator featuring carbon credits allocation and trading, and achieve realistic carbon economy simulation based on hierarchical, model-free MARL. 2) We implement several widely adopted indicator allocation approaches at the firm level as baselines. 3) We observe that learning-based allocation policies possess the potential to effectively balance productivity, equity, and carbon emissions. § PRELIMINARIES AND RELATED WORKS §.§ Carbon Market and Cap and Trade A carbon market can manifest at various scales, from local to global. This paper primarily focuses on regional carbon markets, which involve the participation of the government and many enterprises. Carbon market allowances can be classified into mandatory and voluntary types, with the former being strictly regulated by the government. At the same time, the latter encourages enterprises to decrease their emissions through investment in carbon credits generated by government-certified projects, facilitating emissions avoidance, reduction, or removal from the atmosphere. The most prevalent form of carbon markets is the cap and trade system, encompassing a predefined cap on total allowances within the economy at a specific time. A central authority or government manages the introduction of these allowances, with the cap value being established to align with their objectives. This mechanism ensures an accurately targeted level of emissions. Carbon credits can be introduced into the market through 3 primary means. The initial method, termed free allocation, entails the government determining the number of free credits to allocate to enterprises based on factors such as the enterprise's size and historical emissions. Another approach involves the government selling allowances in an auction market. The final method incorporates certified projects, enabling enterprises to garner additional allowances by undertaking and completing energy conservation and emissions reduction projects, typically facilitated by technological investment and advancement. This paper exclusively focuses on free allocation and government-certified projects. In the context of free allocation, a pivotal decision confronting the government is the establishment of both the overall volume of allowances for a given period and the distribution of these allowances among enterprises within distinct regions  <cit.>. Allocation policies can be primarily divided into four approaches: indicator, optimization, game theoretic, and hybrid approaches. 
Each approach possesses strengths and weaknesses in terms of productivity and equity. The indicator approach is the most frequently employed in practice; however, consensus on indicator selection remains elusive, as differing indicator methodologies can yield substantially varied allocation outcomes and are challenging to validate  <cit.>. In our simulations, we utilized the indicator policy as a baseline to compare against the MARL-based allocation policy. The paramount component within the cap and trade is carbon trading  <cit.>. Enterprises that exceed allotted emissions limits can elude penalties by purchasing allowances from entities possessing surplus allowances. This paper simulates carbon trading using a bid-and-ask auction mechanism, facilitating allowance-based transactions between enterprises, analogous to the AI Economist <cit.>. §.§ Carbon Market Simulated Models The prevailing quantitative simulator encompass the computable general equilibrium model(CGE)  <cit.> and the agent-based model(ABM)  <cit.>. CGE models are economic frameworks that obtain empirical data as input to emulate economic structures and the behavioral response of economic entities as accurately as feasible to examine potential influences of varied policies and other disturbances. CGE and dynamic CGE models have been employed extensively to broadly explore climate policies, specifically, the carbon market  <cit.>. Moreover, multi-agent-based models have been utilized to replicate distinct industries' behavior at a national scale  <cit.>.  <cit.> propose a multi-agent simulation framework to emulate regional emissions trading systems and assess numerous regulatory policies and carbon auction regulations. Additionally, other multi-agent-oriented models have been applied for exploring specific aspects of emissions trading schemes, such as prospective carbon auction rules  <cit.> or international transportation patterns  <cit.>. Rafieisakhaei et al.  <cit.> introduces models targeting the EU ETS and the global oil market, subsequently examining the interrelation between these components. §.§ Machine Learning for Economic Design In the domain of economic design, contemporary investigations have chiefly centered on automated mechanism design  <cit.> and auction design  <cit.>, with the primary aim being the augmentation of market efficacy and resource allocation via the employment of machine learning methodologies. Historically, agents within such research contexts have functioned as static entities devoid of learning capabilities. However, recent scholarly contributions have begun to incorporate MARL for the examination of security games  <cit.> on Stackelberg equilibrium, as well as the investigation of resource allocation games  <cit.>. Furthermore, certain studies have extended the application of RL by enabling agents to modify their conduct in adherence to novel regulatory frameworks, thereby intensifying the refinement of market design  <cit.> and aligning with the "Positronic Economist" paradigm  <cit.>. Moreover, AI economists  <cit.> have successfully optimized taxation policies within a two-level simulation environment, surpassing the effectiveness of extant real-world tax frameworks. §.§ Multi-Agent Reinforcement Learning In the simulated environment under investigation, numerous enterprise agents adapt in response to distinct government policy interventions. These agents' training process is grounded in the principles of MARL. 
Owing to the complexity of environment state transitions, which encompass not only stochastic environmental factors but also the actions of all agents involved, a single agent's perspective may inadvertently conflate environmental randomness with the behavior of other agents, ultimately leading to improper agent updates <cit.>. A growing body of recent research in the domain of MARL has been devoted to improving multi-agent learning in non-stationary environments <cit.>, employing strategies such as reward sharing <cit.> or state information sharing <cit.> amidst agents. This paper adopts an alternative approach to bolster the learning process: sharing agent parameters <cit.> while precluding the exchange of individual agent information. § CARBON MARKET MODELING In this section, we present an approach to simulating carbon markets within the context of model-free MARL. The MARL framework exhibits a hierarchical structure, consisting of a higher-level government RL agent and lower-level enterprise RL agents. Consequently, it is referred to as a hierarchical model-free MARL framework, also known as a manager-worker architecture <cit.> in the existing literature. Section <ref> provides an in-depth mathematical representation of this framework, while Sections <ref> and <ref> elucidate the instantiated models for the government and enterprise RL agents, respectively. We use notation drawn from RL and economics; see Table <ref>. §.§ Hierarchical Multi-Agent RL Framework On the one hand, the issues encountered by the higher-level government agent in the carbon market (primarily carbon allowance allocation within this paper) can be modeled as a standard Markov decision process (MDP) <cit.> ℳ_h = (𝒮, 𝒜, 𝒫, ℛ, γ), given fixed lower-level enterprise agents. Given states s, s^'∈𝒮 and an action a ∈𝒜, the transition probability function is expressed as 𝒫(s^'| s, a): 𝒮×𝒜×𝒮→[0, 1] and the reward function is defined by ℛ(s, a, s^'): 𝒮×𝒜×𝒮→ℝ. The discount factor is denoted as γ∈ (0, 1] and the policy is represented as π: 𝒮×𝒜→[0, 1]. For timestep t ∈ [1, T], the discounted return is G_t = ∑_t^' = t^T γ^t^' - t r_t^'. The objective of the government agent is to determine a policy π that maximizes J = 𝔼_a_t ∼π(·| s_t), s_t + 1∼𝒫(·| s_t, a_t)[∑_t = 1^T γ^t - 1 r_t(s_t, a_t, s_t + 1)]. On the other hand, the problems (production, trade, etc.) encountered by the lower-level enterprise agents in the carbon market can be formally defined as a partially observable stochastic game (POSG) <cit.> 𝒢_l:(𝒮,{𝒜_i}_i=1^|ℐ|,{ℛ_i}_i=1^|ℐ|,𝒫,{𝒪_i}_i=1^|ℐ|,ℐ), given a fixed higher-level government agent. At each timestep t∈ [1, T], lower-level enterprise agent i∈ℐ obtains an observation o_t^i = 𝒪_i(s_t), where 𝒪_i is the observation (emission) function. Each agent i then executes an action a_t^i ∈𝒜_i simultaneously according to its history-dependent policy π_i:𝒪^1_i ×𝒜^1_i ×⋯×𝒪^t_i ×𝒜^t_i → [0, 1]. The environment then returns a reward r_t^i = ℛ_i(s_t, a_t) to agent i, where a_t denotes the joint action of all agents. The state s_t then transitions to the next state s_t+1 according to the transition function 𝒫(s_t+1| s_t,a_t). Each agent's objective is to find an optimal policy π_i that maximizes the γ-discounted expected return max_π_i𝔼_a^i ∼π_i, a^-i∼π_-i, s' ∼𝒫[∑_tγ^t r_t^i], where π_-i denotes the joint policy of the other agents. Note that the government and the enterprises make decisions on different timescales.
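To make the two-timescale structure above concrete, the following is a minimal sketch of how the hierarchical interaction between the government MDP and the enterprise POSG could be organized. It is only an illustrative skeleton over assumed interfaces; names such as env, gov_policy, ent_policies, period_length and the methods they expose are our own placeholders, not the simulator's actual API.

# Hypothetical sketch of the hierarchical interaction loop described above.
# All class and method names are illustrative assumptions, not the paper's code.
def run_episode(env, gov_policy, ent_policies, num_periods, period_length):
    """Government acts once per period; enterprises act at every timestep."""
    state = env.reset()
    for period in range(num_periods):
        # Higher level: the government allocates credits at the first timestep
        # of the period (its MDP runs on the period timescale).
        gov_action = gov_policy.act(env.government_observation(state))
        state = env.apply_allocation(state, gov_action)
        # Lower level: enterprises play the POSG within the period.
        for t in range(period_length):
            obs = {i: env.enterprise_observation(state, i) for i in env.enterprises}
            actions = {i: ent_policies[i].act(obs[i]) for i in env.enterprises}
            state, rewards, done = env.step(actions)
            for i in env.enterprises:
                ent_policies[i].store(obs[i], actions[i], rewards[i])
            if done:
                break
        # The government reward (social welfare) is observed on the period level.
        gov_policy.store(env.social_welfare(state))
    return state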
Subsequently, we instantiate ℳ_h and 𝒢_l, representing the government and enterprises within the carbon market context, and provide definitions of the state spaces, action spaces, and reward functions. §.§ Lower-Level: Enterprise Modeling The Gather-Trade-Build game, as proposed in the AI-Economist <cit.>, is employed to emulate enterprise economic behavior within the context of a carbon market. This two-dimensional grid world enables agents to traverse the grid, acquire resources, amass coins (representing profit) through resource usage in house construction endeavors, and partake in trading activities with other agents by exchanging resources for coins. More specifically, enterprises are stochastically initialized on grid cells at the onset of each episode. Subsequently, these entities initiate actions at every timestep throughout an episode, except when the government intervenes (to be detailed later on). Upon the completion of each episode, a penalty equal to ee*p is imposed on an enterprise's coin holdings if it surpasses its allotted carbon emission credits. Reward Function. The objective of each enterprise is to maximize its utility, which is defined as: u(z,l) = (z^1-η - 1)/(1-η) - c^l*l, η = 0.23, where z is income, i.e., the sum of the enterprise's coins, and l is the cumulative labor over all previous timesteps. Here η is the isoelastic coefficient: we use an isoelastic utility function <cit.> to model the income component of the reward, a nonlinear curve in which a higher η gives lower sensitivity to income changes. During an episode, the labor-income coefficient c^l changes over time according to α*(1-e^t/β), where α and β are constants and t refers to time. Skill. Enterprise agents' skills include S and Rc, representing enterprise size and research and development (R&D) capability, respectively. Action Space. An episode is divided into several periods. At the start of each period, each enterprise is allocated a certain number of carbon emission credits by the government agent. At each timestep within a period, enterprises must choose from a set of actions: movement, production, investment, trading, or no operation (NO-OP). The movement action consumes one unit of labor. Enterprise agents can move to any of the four neighboring grids. Still, this action will be blocked by the edge of the grid world or other enterprises' properties (or houses). Suppose an enterprise agent moves to a grid already allocated a government-certified project. The enterprise will consume coins and labor and receive carbon emission credits. Once completed, the project will remain on the grid and influence the carbon emission process of all agents. The production action consumes one unit of labor and x^l carbon emission credits, and provides the agent with 10S coins. This action is available if the current position in the grid environment is not occupied by the enterprise's property or a government-certified project. Additionally, when a large amount of carbon emission is created, it will randomly pollute the value of nearby blank grids. This will introduce a discount on the coins received when the enterprise produces in such grids. The investment action can decrease an enterprise's carbon emission level. This action is available when an enterprise has positive coins. It consumes one unit of labor and 5/Rc coins. The carbon emission level, denoted as x^l, is a continuously changing enterprise attribute, with a floating range between 0 and 1. At the beginning of the episode, all enterprises have their x^l set to 1. Through investments, an enterprise can change its x^l.
The carbon emission level is defined as follows: x^l_i = Pc_i * (1-Gr), where Pc_i is defined as Pc_i = exp(-δ * Rc_i * n_i^r) and Gr is: Gr = n^p/(∑_j^|ℐ|(Pc_j*S_j)+n^p). We use the power consumption Pc to represent the power consumed by an enterprise when producing 10 coins. This value depends on the number of investments n_i^r, the enterprise's R&D capability Rc_i, and the constant δ. The green rate Gr represents the proportion of green energy generated from government-certified projects in the total energy. The resulting emission level x^l_i decreases with a higher number of completed government-certified projects located in the grid world, denoted as n^p, or with a lower total power consumption ∑_j^|ℐ|(Pc_j*S_j). Additionally, we have set lower bounds to ensure that x^l_i remains within a reasonable range and Gr is non-negative. We have introduced delay, forgetting, and random failures in the count of investment actions n_i^r to model actual carbon emission reduction investments. The trading action encompasses bidding and asking in carbon trading, similar to the AI Economist. The action space for trading is twice the number of specified price levels. When an enterprise publishes a bid or ask request, that request remains active for a predetermined number of timesteps until it is matched with a lower-priced ask (if it is a bid) or a higher-priced bid (if it is an ask). The lifetime of a request should be smaller than the length of an episode because, at the start of each new episode, market requests are cleared. Publishing a request in the market incurs a lower labor cost than other actions, encouraging trade activity. However, this action is subject to limitations. Enterprises can only publish a maximum allowed number of requests, and they must have a sufficient amount of coins and carbon emission credits to participate in trading. Observation Space. The range of observation for enterprises is significantly smaller than that of the government. Regarding position information, enterprises can gather information about their properties' positions and a limited square range of the grid world. Within this limited square range, enterprises can observe which grid areas are polluted by carbon emissions resulting from their production actions or purified by the completion of government-certified projects. Enterprises only know their own skills and attributes, but they can access information about the average carbon emission level. In carbon trading, enterprises can observe the embedded price history and all published requests, similar to what the government can monitor. Additionally, an enterprise's observation includes its own published requests. §.§ Higher-Level: Government Modeling Reward Function. The government's objective is to maximize social welfare, which is defined as: swf = prod * eq * exp(-c^e * ee), where productivity is defined as prod = ∑_i x_i^c, and equality eq is defined from the concept of the Gini index: eq = 1 - (∑_i=1^N∑_j=1^N |x_i^c - x_j^c|)/(2(N-1)∑_i=1^N x_i^c), 0<eq<1. Action Space. The government agent makes decisions at the first timestep of each period. Its action is represented by a multi-dimensional discrete vector. The first dimension specifies the proportion of the remaining unallocated carbon emission credits (over all periods) that is released as the total credit for the current period. The next |ℐ| dimensions, where |ℐ| refers to the number of enterprise agents, determine the allocation weight of carbon emission credits for each enterprise agent in the period. The total credit available for the period is then calculated from this specified proportion.
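As a concrete illustration of the reward-related quantities defined above (the enterprise utility u(z,l), the carbon emission level x^l_i, and the government's social-welfare reward swf), the following self-contained numerical sketch mirrors the formulas directly; the parameter values (δ, c^e, and the toy coin vector) are placeholders rather than the paper's calibrated settings.

import numpy as np

def isoelastic_utility(income, labor, c_l, eta=0.23):
    """Enterprise utility u(z, l) = (z^(1-eta) - 1)/(1 - eta) - c_l * l."""
    return (income ** (1.0 - eta) - 1.0) / (1.0 - eta) - c_l * labor

def emission_level(Rc, n_invest, n_projects, sizes, delta=0.1):
    """Carbon emission level x^l_i = Pc_i * (1 - Gr) for every enterprise."""
    Pc = np.exp(-delta * Rc * n_invest)                   # power consumed per 10 coins
    Gr = n_projects / (np.sum(Pc * sizes) + n_projects)   # green rate
    return Pc * (1.0 - Gr)

def social_welfare(coins, emissions_excess, c_e=0.01):
    """swf = prod * eq * exp(-c_e * ee), with eq derived from the Gini index."""
    prod = coins.sum()
    n = len(coins)
    gini_sum = np.abs(coins[:, None] - coins[None, :]).sum()
    eq = 1.0 - gini_sum / (2 * (n - 1) * coins.sum())
    return prod * eq * np.exp(-c_e * emissions_excess)

# Toy numbers (placeholders): five enterprises.
coins = np.array([120.0, 80.0, 60.0, 100.0, 40.0])
print(social_welfare(coins, emissions_excess=5.0))
print(isoelastic_utility(income=100.0, labor=30.0, c_l=0.2))
print(emission_level(Rc=np.ones(5), n_invest=np.arange(5), n_projects=2,
                     sizes=np.full(5, 2.0)))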
The government then allocates 10% of this credit to a government-certified project randomly placed on an empty grid. The remaining 90% of the total credit is distributed to enterprises according to their respective allocation weights. Each dimension of the action space has a size of 101, with the remaining action 0 used for masking at the rest of the timesteps in each period. To ensure the proper functioning of a cap and trade, it is essential for the government agent not only to allocate carbon credits to each period and enterprise agent but also to establish the severity of penalties for exceeding emissions. Thus, the |ℐ|+2-th dimension determines the severity of punishments. Regarding different facets of government policy, the parameter that exerts the most significant influence on enterprise behavior is the punishment setting, as it directly impacts the utility of enterprises. When the punishment is excessively high, enterprises will avoid exceeding the carbon emission credit at all costs, negatively affecting productivity and trade. Conversely, suppose the punishment is set too low. In that case, production actions will go unchecked, resulting in low equality and exceedingly high carbon emissions. Therefore, we have maintained a constant punishment for most of our experiments. After each period, the carbon emission credits allocated to enterprises are reset while the government project remains intact. Observation Space. The government possesses access to comprehensive information. Firstly, it can observe the entire grid world, including the positions of all enterprise agents, their properties' locations, and the government-certified project's location. Next, concerning the private information of enterprises, the government can collect data on their skills and attributes. Additionally, real-time market information, including embedded price history and all published requests, is readily available. Furthermore, the government has access to information related to carbon emission credits, such as excess records and the total remaining credit. Remark: Notably, in developing a carbon market model similar to AI-Economist, we opt for a selection of hyperparameters without referencing empirical data, relying solely on fundamental economic principles. This endows the simulation model with several advantages. Firstly, government and enterprise agents possess no prior knowledge of external markets or economic theory and lack any understanding of each other's behavioral patterns. Consequently, the simulator can optimize arbitrary social outcomes through hyperparameter configuration. Secondly, carbon market data is scarce and typically encompasses relatively small market scales. Our proposed simulator, by using appropriate hyperparameter settings, transcends the limitations of real-world data, enabling the examination of allocation strategies and the impact of economic activities under varying carbon market scales. Lastly, the simulator can validate the rationality of different economic theories by adjusting hyperparameters following various theoretical perspectives. § ENTERPRISE BEHAVIOR SIMULATION Upon the completion of carbon market modeling, we can employ (MA)RL to train the government (enterprise) agent(s), enabling them to exhibit behavior that closely resembles real-world scenarios, thereby achieving a realistic simulation of the carbon market. This section primarily focuses on the simulation of the critical entities in the carbon market, namely the behavior of enterprise agents. 
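Referring back to the government action space described above, a rough sketch of how such a multi-discrete action could be decoded into a per-period allocation is given below. The 101-level discretization, the 10%/90% split between the government-certified project and the enterprises, and the final punishment dimension follow the text; the function name, the weight normalization, and the interpretation of the punishment value as a fraction are our own assumptions.

import numpy as np

def decode_government_action(action, remaining_budget, num_enterprises):
    """Decode a multi-discrete action (each entry in 0..100) into an allocation.

    action[0]            : share of the remaining unallocated budget released this period
    action[1 : 1 + |I|]  : allocation weights for the enterprises
    action[1 + |I|]      : severity of the punishment for excess emissions
    """
    release_frac = action[0] / 100.0
    period_credit = release_frac * remaining_budget

    weights = np.asarray(action[1:1 + num_enterprises], dtype=float)
    weights = weights / weights.sum() if weights.sum() > 0 else np.full(num_enterprises, 1.0 / num_enterprises)

    project_credit = 0.10 * period_credit            # reserved for a government-certified project
    enterprise_credits = 0.90 * period_credit * weights

    punishment = action[1 + num_enterprises] / 100.0
    return period_credit, project_credit, enterprise_credits, punishment

# Example with 5 enterprises and a remaining budget of 1000 credits.
act = np.array([40, 10, 20, 30, 25, 15, 50])
print(decode_government_action(act, remaining_budget=1000.0, num_enterprises=5))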
Based on reward function we defined in Equation (<ref>), we can obtain desired policy π_i by maximizing total discounted reward: max_π_i𝔼_a^i ∼π_i, a^-i∼π_-i, s' ∼𝒯[ ∑_t^T γ^t (u_i,t - u_i,t-1)+u_i,0]. Like AI Economist, we use a strong baseline in MARL, the independent PPO <cit.>, to train adaptive response enterprise agents for government policy. We incorporate 5 enterprise agents into the carbon market in the simulation. §.§ Training Details To enhance agents' performance, the observation of enterprises includes position information, which can be represented as an image. To process this position information, a convolutional neural network (CNN) is employed. Other information is added to the CNN's output, and the combined data is passed through a pipeline consisting of fully connected (FC) layers, a long short-term memory (LSTM) network, and additional FC layers. This process yields action logits. Agents' policies are obtained by applying an action mask and performing a operation on these logits. MARL training aims to discover an allocation policy that maximizes the Government's reward r, while also finding a balance between productivity, equality, and carbon emissions under the designated economy-climate coefficient c^e. For the joint optimization of enterprise and government policies, we first initialize the parameters of enterprise agents to those trained under government policies based on Flat and Enterprise size as the indicator (SI) scenario. Subsequently, the parameters of government policies are randomly initialized. During training, we utilize the PPO algorithm <cit.> under the RLlib framework <cit.>. Additionally, we experiment with various hyperparameters for both enterprise and government agents, including learning rate and entropy regularization. Following training with 400 million samples, we find both enterprise and government agents to converge to stable policies, which effectively balance productivity, equity, and carbon emissions. (Figure <ref>) §.§ Baseline Allocation Policies We utilize the indicator approach <cit.> to allocate the proportion of total carbon credits for each enterprise annually, with Emission (also called grandfathering or GF <cit.>), Emission intensity (also called benchmarking or BM <cit.>), and Enterprise size selected as indicators (shorten as SI). Due to the large temporal scale of our simulation spanning 10 years, the government agent needs to allocate the total carbon emission credits for each year over the 10-year period. We refer to the global carbon emission historical data and future forecasts provided by the IPCC <cit.> to establish emission scenarios on a large temporal scale: we call this scenario Convex. Additionally, we provide scenarios where the annual emission targets decrease gradually over time (Decreasing), and scenarios where the annual emission targets remain constant over time (Flat). §.§ Simulation Details In this section, we delve into the specifics of our simulation setup and the observed behaviors of enterprise agents under various government policies. We analyze key activities such as trade, production, and the construction of government-certified projects, assessing their impact on overall productivity, equality, and carbon emissions. Through these simulations, we aim to illustrate how different allocation policies influence enterprise behavior and the resulting economic and environmental outcomes. Trade Activities. 
In each activity, a carbon emission credit circulates among the enterprises, along with coins corresponding to the current price of the carbon emission credit. The price of emission credit fluctuates significantly with changes in enterprise carbon emission level, making it challenging to analyze. Therefore, we analyze enterprise trading activities by averaging trading volumes over multiple episodes. As shown in the graph in the middle-upper part of Figure <ref>, under various government policies, the circulation of coins among enterprises due to trading aligns closely with the equality of government policies. Production Activities. action is the only action that generates coins, and the coins generated in a single production are solely related to the enterprise's size. Therefore, the total income of all enterprises in an episode is closely related to the total number of actions, which is essentially the number of properties. Government Certified project. Government-certified projects in our setup represent government-led public green energy initiatives, aligning with real-world scenarios. In our simulator, these projects can store carbon credits across periods. When constructed, they initially reduce the carbon emissions of all enterprises (defined in Equation <ref>). They also pay 1 unit of carbon emission credit to the constructor (i.e., enterprise) but charge high coins and labor costs. As observed in the bottom-left part of Figure <ref>, all baseline policies exhibit lower total government-certified project construction. In contrast, the MARL-based allocation policy stands out with significantly higher government-certified project construction. This difference may stem from the MARL-based policy's more diversified allocation strategies, resulting in higher social welfare. Carbon emission. According to Equation (<ref>), actions can reduce the carbon emissions of the investing enterprise while also having a minor impact on the carbon emissions of other enterprises. The overall carbon emissions are related to the overall production activities and the carbon emissions of individual enterprises. From the lower-middle and lower-right parts of Figure <ref>, in conjunction with the previous analyses of production activities and government-certified projects, we can observe different behavioral patterns associated with the 4 different policies. Under the policy, enterprises engage in more total production and investment activities, resulting in higher carbon emissions. However, due to the lack of differentiation in this policy, some enterprises exceed their allocated carbon credits significantly. In the policy, economic activities of enterprises are subdued, leading to lower productivity but higher equality and fewer excess carbon emissions. For the policy, there are fewer total production activities but more investment activities, resulting in lower and fewer excess carbon emissions. Under the MARL-based policy, enterprises construct more government-certified projects that can enhance overall income. Due to differentiated allocation strategies, even with lower investment activity, there are more total production activities and higher carbon emissions but fewer excess carbon emissions. § VISUALIZATION In this section, we present the visual tools developed to illustrate the simulation results and analyze the behaviors of the agents in our carbon market model. 
Visualization plays a crucial role in understanding the complex interactions and outcomes within the multi-agent reinforcement learning (MARL) environment. Our visual tools provide insights into the dynamics of enterprise and government agents, their economic activities, and the impact of different policies. §.§ Dashboard Overview The simulator dashboard (Figure <ref>) offers a comprehensive view of the simulation at any given time step within an episode. It includes detailed information about the attributes, assets, and actions of enterprises, as well as visual representations of the average carbon prices, trade volumes, and rewards for both enterprises and the government. This allows for an intuitive comparison between different baseline and MARL strategies. The dashboard is divided into several key sections: * Skills: Displays the research ability and manufacturing volume of each enterprise. * Grid World: Visualizes the positions and movements of enterprises within the grid world. * Carbon Price: Tracks the fluctuations in carbon emission credit prices. * Rewards: Shows the cumulative rewards for both enterprises and the government. * Carbon Emission Credits: Illustrates the remaining carbon credits for each enterprise. * Coins: Displays the accumulation of coins by each enterprise. * Carbon Emission Level: Monitors the changes in carbon emission levels over time. * Labor: Tracks the labor usage of each enterprise. By leveraging these visualizations, we gain valuable insights into the trade-offs between productivity, equality, and carbon emissions, ultimately guiding the development of more effective carbon market policies. § CLOSING REMARKS In this paper, enterprise and government agents participate in carbon market simulations via MARL-based adaptive mechanism design. By fine-tuning the government's reward function, we can exploit the adaptability to strike a balance between various economic and climate objectives. Unlike the commonly used indicator approach, MARL-based agents can incorporate more comprehensive information, enabling them to formulate more personalized and diversified allocation strategies. We also illustrates the practicality of employing hierarchical model-free MARL for carbon market simulation. It envisions the potential of machine learning to contribute to global emission reduction endeavors. However, the proposed simulator still needs to be improved, notably the absence of empirical modeling for emissions reduction investments made by enterprises. Consequently, future simulations can enhance their realism by integrating more real-world data. § ETHICAL STATEMENT In our development of the carbon market simulator, we adhere to principles of transparency, integrity, and fairness, ensuring compliance with the highest ethical standards while advancing understanding in environmental economics. We prioritize privacy, equity, and social responsibility throughout our research and development process. However, it's important to acknowledge that our simulator may not encompass all aspects of the real world. As such, we do not endorse the use of learned policies for actual policy making. § ACKNOWLEDGEMENTS Baoxiang Wang is partially supported by National Natural Science Foundation of China (62106213, 72394361). unsrt
http://arxiv.org/abs/2406.08790v1
20240613034823
Direct generation of multi-photon hyperentanglement
[ "Peng Zhao", "Jia-Wei Ying", "Meng-Ying Yang", "Wei Zhong", "Ming-Ming Du", "Shu-Ting Shen", "Yun-Xi Li", "An-Lei Zhang", "Lan Zhou", "Yu-Bo Sheng" ]
quant-ph
[ "quant-ph" ]
^1College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Nanjing 210023, China ^2Institute of Quantum Information and Technology, Nanjing University of Posts and Telecommunications, Nanjing, 210003, China ^3College of Science, Nanjing University of Posts and Telecommunications, Nanjing, 210023, China 03.67.Pp, 03.67.Hk, 03.65.Ud § ABSTRACT Multi-photon hyperentangement is of fundamental importance in optical quantum information processing. Existing theory and experiment producing multi-photon hyperentangled states have until now relied on the outcome post-selection, a procedure where only the measurement results corresponding to the desired state are considered. Such approach severely limits the usefulness of the resulting hyperentangled states. We present the protocols of direct production of three- and four-photon hyperentanglement and extend the approach to an arbitrary number of photons through a straightforward cascade of spontaneous parametric down-conversion (SPDC) sources. The generated multi-photon hyperentangled states are encoded in polarization-spatial modes and polarization-time bin degrees of freedom, respectively. Numerical calculation shows that if the average photon number μ is set to 1, the down conversion efficiency is 7.6*10^-6 and the repetition frequency of the laser is 10^9 Hz, the number of the generation of three-photon and four-photon hyperentanglement after cascading can reach about 5.78*10^-2 and 4.44*10^-7 pairs per second, respectively. By eliminating the constraints of outcome post-selection, our protocols may represent important progresses for multi-photon hyperentangement generation and providing a pivotal role in future multi-party and high-capacity communication networks. Direct generation of multi-photon hyperentanglement Peng Zhao^1,2, Jia-Wei Ying^1,2, Meng-Ying Yang^1,2, Wei Zhong^2, Ming-Ming Du^1, Shu-Ting Shen^1, Yun-Xi Li^3, An-Lei Zhang^3, Lan Zhou^3[Email address: zhoul@njupt.edu.cn], and Yu-Bo Sheng^1,2[Email address: shengyb@njupt.edu.cn] Received ; accepted =========================================================================================================================================================================================================================================== § INTRODUCTION Quantum entanglement serves as a fundamental resource for quantum information processing, embodying the core principles of quantum theory, namely coherence and spatial non-locality. These notable characteristics have found widespread applications in quantum communication, including quantum key distribution (QKD) <cit.>, quantum secret sharing (QSS) <cit.>, and quantum secure direct communication (QSDC) <cit.>. Numerous methodologies for generating entanglement have been proposed by researchers, and their experimental validations have been undertaken, addressing diverse quantum tasks <cit.>. Researchers have extensively leveraged photons for their rapid transmission speed in quantum entanglement. By exploiting the polarization degree of freedom (DOF) of photons, various communication tasks have been undertaken using polarization Bell states <cit.>. Photons possess not only the polarization DOF but also other DOFs, including time-bin, frequency, spatial mode, orbital angular momentum (OAM) and so on. The entanglement of photons in two or more distinct DOFs is termed as hyperentanglement <cit.>. 
Different forms of hyperentanglement, such as polarization-spatial-mode <cit.>, polarization-frequency <cit.>, polarization-OAM <cit.>, and polarization-time-bin hyperentanglement <cit.>, have been proposed theoretically and demonstrated experimentally. Hyperentanglement finds wide application owing to its ability to enhance channel capacity <cit.>, facilitate complete Bell-state analysis <cit.>, and enable efficient entanglement purification and concentration <cit.>. Recently, the preparation of two-photon hyperentanglement in three DOFs was proposed and demonstrated <cit.>. Multi-particle entanglement, such as the Greenberger-Horne-Zeilinger (GHZ) state <cit.>, has found applications in QSS <cit.>, QSDC <cit.>, quantum teleportation (QT) <cit.>, and distributed quantum computation <cit.>. The generation of multi-particle GHZ states has been realized in ions <cit.>, photons <cit.>, nitrogen-vacancy centers in diamond <cit.>, and other systems. Multi-photon hyperentangled states will also play an important role in increasing the capacity of multi-party quantum communication channels <cit.>, assisting in distinguishing GHZ states <cit.>, and achieving single-copy entanglement purification <cit.>. The preparation of six-photon hyperentangled states using nonlinear Kerr media has been proposed <cit.>. The preparation of 18-qubit entanglement with six photons encoded in three DOFs (path, polarization, and orbital angular momentum) was first reported <cit.>. Currently, the established method for generating photonic hyperentanglement is spontaneous parametric down-conversion (SPDC). However, experiments with multi-photon hyperentanglement have relied on combining photons from two or more different SPDC sources using linear optics and employing outcome post-selection, i.e., selecting only a specific subset of measurement results while discarding the others <cit.>. In the post-selection approach, the act of observing the photons that creates the entangled state simultaneously destroys it, which restricts its further usefulness. Creating multi-photon hyperentanglement without post-selection would therefore provide a significant advance for photonic quantum communication and quantum networks. In this paper, we propose two protocols to generate multi-photon hyperentangled GHZ states using a cascade approach instead of post-selection <cit.>. The first multi-photon hyperentangled GHZ state is encoded in polarization-spatial-mode DOFs and the second in polarization-time-bin DOFs. The structure of this paper is as follows. In Sec. II, we demonstrate the preparation of three-photon polarization-spatial-mode hyperentangled GHZ states. In Sec. III, we extend the preparation method to four-photon polarization-spatial-mode hyperentangled GHZ states and further generalize it to the m-photon case. In Sec. IV, we showcase the preparation of three-photon polarization-time-bin hyperentangled GHZ states. In Sec. V, we extend the methodology to four-photon and m-photon polarization-time-bin hyperentangled GHZ states. Finally, in Sec. VI, we provide a discussion and conclusion. We also give a detailed calculation of the number of hyperentangled states generated under multi-photon events in the appendix. § GENERATION OF THREE-PHOTON HYPERENTANGLEMENT IN POLARIZATION-SPATIAL-MODE DOFS In this section, we provide a detailed introduction to the generation of three-photon hyperentanglement in polarization-spatial-mode DOFs.
Before delving into the specifics, let's briefly elucidate the principle of Sagnac interferometer-based polarization entanglement generation <cit.>. Upon traversing a polarization beam splitter (PBS), the pump light bifurcates into both directions of the periodically poled potassium titanyl phosphate (ppKTP) crystal within the Sagnac interferometer. The horizontally polarized component |H⟩ pumps in the counterclockwise (CCW) direction, while the vertically polarized component |V⟩ pumps in the clockwise (CW) direction. Subsequently, they reunite in the PBS to engender entangled states in polarization. (1) As shown in Fig. 1, the laser generates a stream of horizontally polarized pump light |H⟩_1, subject to transformation into diagonally polarized light 1/√(2)(|H⟩+|V⟩)_2 via a preset angle of 22.5^∘ with the aid of half wave plate (HWP_1). Subsequently, a 50:50 non-polarizing beam splitter (NPBS) divides the photons along distinct trajectories. That is 1/√(2)(|H⟩+|V⟩)_2 1/2(|H⟩_a+|H⟩_b+|V⟩_a+|V⟩_b). (2) Photons from path a or b enter the Sagnac interferometer through dichroic mirrors (DM_1) and DM_2 followed by PBS_1. They propagate either CW or CCW, subsequently traversing the ppKTP crystal and HWP_2 set at a fixed angle of 45^∘ to generate a two-photon state. HWP_2 facilitates the conversion between horizontal polarization and vertical polarization (|H⟩⇌ |V⟩). Consequently, the CW photon passes through ppKTP and then HWP_2, while the CCW photon traverses HWP_2 and then ppKTP. We can represent this process as follows. |H⟩_a |H⟩_a_4 |V⟩_a_3|H⟩_a_3 |V⟩_a_2|H⟩_a_1, |V⟩_a |V⟩_a_3 |H⟩_a_4|V⟩_a_4 |H⟩_a_2|V⟩_a_1, |H⟩_b |H⟩_b_4 |V⟩_b_3|H⟩_b_3 |V⟩_b_2|H⟩_b_1, |V⟩_b |V⟩_b_3 |H⟩_b_4|V⟩_b_4 |H⟩_b_2|V⟩_b_1. (3) The photons, arriving at PBS_1 for the second time in both CW and CCW directions, undergo path differentiation through PBS_1. Consequently, Eq. (<ref>) ultimately evolves into Eq. (<ref>) after traversing PBS_1. 1/2(|H⟩_a+|H⟩_b+|V⟩_a+|V⟩_b) 1/2(|V⟩_a_2|H⟩_a_1+|H⟩_a_2|V⟩_a_1+|V⟩_b_2|H⟩_b_1 + |H⟩_b_2|V⟩_b_1) = 1/2(|HV⟩+|VH⟩)⊗(|a_1a_2⟩+|b_1b_2⟩). (4) In Eq. (<ref>), it is evident that we have achieved polarization-spatial-mode two-photon hyperentanglement. Subsequently, by directing one of the particles into a Sagnac interferometer, successful photon splitting will yield polarization-spatial hyperentanglement among three photons. Specifically, we connect the Sagnac interferometer to paths a_1 and b_1. The states described in Eq. (<ref>) will initially pass through PBS_2 and undergo transformation into the new state as 1/2(|HV⟩+|VH⟩)⊗(|a_1a_2⟩+|b_1b_2⟩) = 1/2(|V⟩_a_2|H⟩_a_1+|H⟩_a_2|V⟩_a_1+|V⟩_b_2|H⟩_b_1 +|H⟩_b_2|V⟩_b_1) 1/2(|V⟩_a_2|H⟩_a_6+|H⟩_a_2|V⟩_a_5+|V⟩_b_2|H⟩_b_6 +|H⟩_b_2|V⟩_b_5). (5) Analogously, the quantum states traveling in both CW and CCW directions will experience the subsequent transformation within the Sagnac interferometer. That is |H⟩_a_6 |V⟩_a_5|H⟩_a_5 |V⟩_d_2|H⟩_d_1, |V⟩_a_5 |H⟩_a_6|V⟩_a_6 |H⟩_d_2|V⟩_d_1, |H⟩_b_6 |V⟩_b_5|H⟩_b_5 |V⟩_c_2|H⟩_c_1, |V⟩_b_5 |H⟩_b_6|V⟩_b_6 |H⟩_c_2|V⟩_c_1. (6) The photons, traversing in both CW and CCW directions, re-enter PBS_2 for the second time and undergo path separation. By combining the expressions in Eqs. (<ref>) and (<ref>), we can obtain 1/2(|V⟩_a_2|H⟩_a_6+|H⟩_a_2|V⟩_a_5+|V⟩_b_2|H⟩_b_6 +|H⟩_b_2|V⟩_b_5) 1/2(|V⟩_a_2|V⟩_d_2|H⟩_d_1+|H⟩_a_2|H⟩_d_2|V⟩_d_1 +|V⟩_b_2|V⟩_c_2|H⟩_c_1+|H⟩_b_2|H⟩_c_2|V⟩_c_1) = 1/2(|HVH⟩+|VHV⟩)⊗(|c_2c_1b_2⟩+|d_2d_1a_2⟩). 
(7) Ultimately, we employ six long pass filters (LPs) in modes a_2, b_2, c_1, d_1, c_2, d_2 to filter the three-photon hyperentanglement in polarization-spatial-mode DOFs as shown in above equation. § GENERATION OF FOUR-PHOTON AND M-PHOTON HYPERENTANGLEMENT IN POLARIZATION-SPATIAL-MODE DOFS The symmetry of the structure becomes apparent when generating four-photon hyperentanglement. As shown in Fig. 2, the utilization of photons in paths a_2 and b_2 allows for the creation of four-photon hyperentanglement in polarization-spatial-mode DOFs. It is noteworthy that the initial three steps in generating four-photon hyperentanglement mirror those in generating three-photon hyperentanglement, establishing a seamless transition. For continuity, let's commence with Eq. (<ref>). (1) Firstly, we suppose that we have generated two-photon hyperentanglement in the spatial modes a_1 and a_2, b_1 and b_2. When photons from spatial modes a_1 and a_2, b_1 and b_2 initially enter PBS_2 and PBS_3, respectively, Eq. (<ref>) can be evolved as 1/2(|HV⟩+|VH⟩)⊗(|a_1a_2⟩+|b_1b_2⟩) 1/2(|V⟩_a_7|H⟩_a_6+|H⟩_a_8|V⟩_a_5+|V⟩_b_7|H⟩_b_6 +|H⟩_b_8|V⟩_b_5). (2) Analogously, the quantum states in the CW and CCW directions will experience the subsequent transformation within the introduced Sagnac interferometer. That is |H⟩_a_8 |V⟩_a_7|H⟩_a_7 |V⟩_d_4|H⟩_d_3, |V⟩_a_7 |H⟩_a_8|V⟩_a_8 |H⟩_d_4|V⟩_d_3, |H⟩_b_8 |V⟩_b_7|H⟩_b_7 |V⟩_c_4|H⟩_c_3, |V⟩_b_7 |H⟩_b_8|V⟩_b_8 |H⟩_c_4|V⟩_c_3. (3) After propagating CW or CCW in the Sagnac interferometer, the photons in paths a_5, a_6, b_5, and b_6 are separated into different paths upon re-entering PBS_2 for the second time. The same principle applies to the photons on paths a_7, a_8, b_7, and b_8. Thus, in conjunction with Eqs. (<ref>) and (<ref>), we can obtain the state in Eq. (<ref>). This represents the four-photon hyperentanglement in polarization-spatial-mode DOFs as 1/2(|V⟩_a_7|H⟩_a_6+|H⟩_a_8|V⟩_a_5+|V⟩_b_7|H⟩_b_6 +|H⟩_b_8|V⟩_b_5) 1/2(|H⟩_d_4|V⟩_d_3|V⟩_d_2|H⟩_d_1+|V⟩_d_4|H⟩_d_3 ⊗ |H⟩_d_2|V⟩_d_1 +|H⟩_c_4|V⟩_c_3|V⟩_c_2|H⟩_c_1 +|V⟩_c_4|H⟩_c_3|V⟩_c_2|V⟩_c_1) = 1/2(|HVVH⟩+|VHHV⟩)⊗(|d_4d_3d_2d_1⟩ +|c_4c_3c_2c_1⟩). (4) Finally, we employ LPs to eliminate residual pump light and background photons, ensuring the acquisition of four-photon hyperentanglement in polarization-spatial-mode DOFs. So far, we have provided comprehensive details on the preparation of three- and four-photon hyperentanglements in polarization-spatial-mode DOFs. Subsequently, we will briefly describe the scenario for preparing m-photon hyperentanglement. From Fig. 3, it is evident that the theoretical achievement of m-photon hyperentanglement is feasible through this straightforward cascaded SPDC scheme. The legend in Fig. 3 explains the meaning of the dashed box, where m = 3, 4, ⋯ k, denotes the scenario of preparing m-photon hyperentanglement in polarization-spatial-mode DOFs. a_m+2, b_m+2, a_m+3, b_m+3, c_m-1, d_m-1, c_m-2, d_m-2 mean the corresponding spatial modes after cascading. ppKTP_m-1 indicates the requirement for m-1 ppKTP crystals in the m-photon scenario. Without loss of generality, we assume that m is even. The desired m-photon hyperentanglement in polarization-spatial-mode DOFs as given in Eq. (<ref>) can be generated. 1/(√(2))^m(|HV… VH⟩+|VH… HV⟩) ⊗(|d_2m-4d_2m-3… d_2d_1⟩+|c_2m-4c_2m-3… c_2c_1⟩). It is noteworthy that in practical laser sources, multi-photon events are inevitable. Investigating the impact of multi-photon events on hyperentanglement generation is crucial. 
Fortunately, researchers have already explored the influence of multi-photon events on the generation of hyperentanglement in three DOFs, and we will not delve into it in this paper. For the proposed scheme of generating multi-photon hyperentanglement in this study, we will provide a detailed discussion of the efficiency of multi-photon events in the appendix. § GENERATION OF THREE-PHOTON HYPERENTANGLEMENT IN POLARIZATION-TIME-BIN DOFS In this section, we describe the protocol to generate the three-photon, four-photon and m-photon hyperentanglements in polarization-time-bin DOFs, respectively. We first describe the generation of three-photon hyperentanglement as follows: (1) As shown in Fig. 4, the laser generates a beam of horizontally polarized pump light |H⟩_1, which is converted into diagonally polarized light 1/√(2)(|Ht_1⟩+|Ht_2⟩) through a 50:50 NPBS_1. Here t_1 and t_2 mean that the photon is in the short and long arm, respectively. Subsequently, the state undergoes a transformation as follows. 1/√(2)(|Ht_1⟩+|Ht_2⟩) 1/2(|Ht_1⟩_2+|Ht_1⟩_c+|Ht_2⟩_2-|Ht_2⟩_c). Here we focus only on path 2, omitting path c for simplicity. The photon in path 2 passes through a 22.5^∘ HWP_1, resulting in 1/√(2)(|Ht_1⟩_2+|Ht_2⟩_2) 1/2(|Ht_1⟩_3+|Vt_1⟩_3+|Ht_2⟩_3+|Vt_2⟩_3). (2) The polarized photon in spatial mode 3 successively passes through the DM, PBS, polarization Sagnac interferometer, and PBS to generate the polarization-time-bin hyperentangled photon pair in the SPDC process. Specifically, the evolution process of the state can be written as |Ht_1⟩_3 |Ht_1⟩_4 |Vt_1⟩_5|Ht_1⟩_5 |Vt_1⟩_b|Ht_1⟩_a, |Vt_1⟩_3 |Vt_1⟩_5 |Ht_1⟩_4|Vt_1⟩_4 |Ht_1⟩_b|Vt_1⟩_a, |Ht_2⟩_3 |Ht_2⟩_4 |Vt_2⟩_5|Ht_2⟩_5 |Vt_2⟩_a|Ht_2⟩_b, |Vt_2⟩_3 |Vt_2⟩_5 |Ht_2⟩_4|Vt_2⟩_4 |Ht_2⟩_a|Vt_2⟩_b. In this manner, Eq. (<ref>) ultimately transforms into Eq. (<ref>) after passing through PBS_1. 1/2(|Ht_1⟩_3+|Vt_1⟩_3+|Ht_2⟩_3+|Vt_2⟩_3) 1/2(|Vt_1⟩_b|Ht_1⟩_a+|Ht_1⟩_b|Vt_1⟩_a+|Vt_2⟩_a|Ht_2⟩_b +|Ht_2⟩_a|Vt_2⟩_b) = 1/2(|HV⟩+|VH⟩)_ab⊗(|t_1t_1⟩+|t_2t_2⟩)_ab. (3) From Eq. (<ref>), it is actually a polarization-time-bin two-photon hyperentanglement <cit.>. Subsequently, by directing one of photons into a Sagnac interferometer, successful photon splitting will yield polarization-time-bin hyperentanglement among three photons. Specifically, we cascade another Sagnac interferometer on paths a and b. States in Eq. (<ref>) will first pass PBS_2 and become 1/2(|HV⟩+|VH⟩)_ab⊗(|t_1t_1⟩+|t_2t_2⟩)_ab 1/2(|Vt_1⟩_b|Ht_1⟩_6+|Ht_1⟩_b|Vt_1⟩_7+|Vt_2⟩_7|Ht_2⟩_b +|Ht_2⟩_6|Vt_2⟩_b). Similarly, the CW and CCW quantum states will undergo the following transformation in the Sagnac interferometer. |Ht_1⟩_6 |Vt_1⟩_7|Ht_1⟩_7 |Vt_1⟩_b_1|Ht_1⟩_a_1, |Vt_1⟩_7 |Ht_1⟩_6|Vt_1⟩_6 |Ht_1⟩_b_1|Vt_1⟩_a_1, |Ht_2⟩_6 |Vt_2⟩_7|Ht_2⟩_7 |Vt_2⟩_b_1|Ht_2⟩_a_1, |Vt_2⟩_7 |Ht_2⟩_6|Vt_2⟩_6 |Ht_2⟩_b_1|Vt_2⟩_a_1. (4) Photons in the CW and CCW directions then enter the PBS_2 for the second time. In this way, states in Eqs. (<ref>) and (<ref>) will evolve into 1/2(|Vt_1⟩_b|Ht_1⟩_6+|Ht_1⟩_b|Vt_1⟩_7+|Vt_2⟩_7|Ht_2⟩_b +|Ht_2⟩_6|Vt_2⟩_b) → 1/2(|Vt_1⟩_b|Vt_1⟩_b_1|Ht_1⟩_a_1+|Ht_1⟩_b|Ht_1⟩_b_1|Vt_1⟩_a_1 +|Ht_2⟩_b_1|Vt_2⟩_a_1|Ht_2⟩_b +|Vt_2⟩_b_1|Ht_2⟩_a_1|Vt_2⟩_b) = 1/2[(|HVH⟩+|VHV⟩)⊗(t_1t_1t_1+t_2t_2t_2)]_b_1a_1b, which is the target three-photon hyperentanglement in polarization-time-bin DOFs. (5) LPs are also utilized to filter out the remaining pump light and background photons, ensuring the attainment of a pure target state. 
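The same kind of bookkeeping check works for the polarization-time-bin case. The sketch below is again an idealized illustration (unit splitting efficiency, no SPDC amplitude or phase tracking, path labels as in Fig. 4); it carries the time-bin label through the added Sagnac loop and confirms that every output term is |HVH⟩ or |VHV⟩ with all three photons in the same time bin, i.e. the state 1/2[(|HVH⟩+|VHV⟩)⊗(t_1t_1t_1+t_2t_2t_2)]_{b_1a_1b}.

from fractions import Fraction

# Two-photon polarization-time-bin hyperentangled state, written as terms
# {path: (polarization, time_bin)} with amplitude 1/2 each.
state = [(Fraction(1, 2), {"b": ("V", t), "a": ("H", t)}) for t in ("t1", "t2")]
state += [(Fraction(1, 2), {"b": ("H", t), "a": ("V", t)}) for t in ("t1", "t2")]

def split(pol, tb):
    # Net action of PBS_2 plus the added Sagnac loop on the photon in path a:
    # the photon is replaced by a pair in paths b1/a1, and the time bin is
    # carried through unchanged.
    if pol == "H":
        return {"b1": ("V", tb), "a1": ("H", tb)}
    return {"b1": ("H", tb), "a1": ("V", tb)}

for amp, term in state:
    new_term = {"b": term["b"], **split(*term["a"])}
    pols = "".join(new_term[p][0] for p in ("b1", "a1", "b"))
    bins = {new_term[p][1] for p in ("b1", "a1", "b")}
    print(amp, pols, bins)
    # Every term is |HVH> or |VHV> with all three photons in the same time bin,
    # i.e. 1/2 [(|HVH> + |VHV>) (x) (t1 t1 t1 + t2 t2 t2)]_{b1 a1 b}.
    assert pols in ("HVH", "VHV") and len(bins) == 1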
§ GENERATION OF FOUR-PHOTON AND M-PHOTON HYPERENTANGLEMENTS IN POLARIZATION-TIME-BIN DOFS The generation of four-photon hyperentanglement in polarization-time-bin DOFs relies on simultaneously utilizing the photons generated in Fig. 4. For clarity and convenience, we begin with Eq. (<ref>). (1) Firstly, as shown in Fig. 5, we suppose that we generated the two-photon hyperentanglement in polarization-time-bin DOFs like ref. <cit.>. As photons from paths a and b initially enter PBS_2 and PBS_3, respectively, states in the Eq. (<ref>) undergoes evolution, resulting into 1/2(|HV⟩+|VH⟩)_ab⊗(|t_1t_1⟩+|t_2t_2⟩)_ab 1/2(|Ht_1⟩_6|Vt_1⟩_9+|Ht_2⟩_6|Vt_2⟩_9 +|Vt_1⟩_7|Ht_1⟩_8+|Vt_2⟩_7|Ht_2⟩_8). (2) The polarized photons in the added Sagnac interferometer will undergo the following transformations when the photons go through either CW or CCW. |Ht_1⟩_8 |Vt_1⟩_9|Ht_1⟩_9 |Vt_1⟩_b_2|Ht_1⟩_a_2, |Vt_1⟩_9 |Ht_1⟩_8|Vt_1⟩_8 |Ht_1⟩_b_2|Vt_1⟩_a_2, |Ht_2⟩_8 |Vt_2⟩_9|Ht_2⟩_9 |Vt_2⟩_b_2|Ht_2⟩_a_2, |Vt_2⟩_9 |Ht_2⟩_8|Vt_2⟩_8 |Ht_2⟩_b_2|Vt_2⟩_a_2. (3) Photons in spatial modes 6, 7, 8, and 9 will then enter in PBS_2 and PBS_3 for the second time. Afterwards, photons are separated into different spatial modes by PBS_2 and PBS_3. In this way, combined with Eqs. (<ref>) and (<ref>) we can obtain 1/2(|Ht_1⟩_6|Vt_1⟩_9+|Ht_2⟩_6|Vt_2⟩_9+|Vt_1⟩_7|Ht_1⟩_8 +|Vt_2⟩_7|Ht_2⟩_8) → 1/2(|Ht_1⟩_b_2|Vt_1⟩_a_2|Vt_1⟩_b_1|Ht_1⟩_a_1+|Vt_1⟩_b_2|Ht_1⟩_a_2 ⊗|Ht_1⟩_b_1|Vt_1⟩_a_1+|Ht_2⟩_b_1|Vt_2⟩_a_1|Vt_2⟩_b_2|Ht_2⟩_a_2 +|Vt_2⟩_b_1|Ht_2⟩_a_1|Ht_2⟩_b_2|Vt_2⟩_a_2) =1/2[(|HVVH⟩ +|VHHV⟩)⊗(t_1t_1t_1t_1 +t_2t_2t_2t_2)]_b_1a_1b_2a_2, which is the target four-photon hyperentanglement in polarization-time-bin DOFs. (4) Finally, it is imperative to utilize LPs to eliminate the pump light and background photons in each mode. So far, we have provided detailed explanations for three- and four-photon hyperentanglements in polarization-time-bin DOFs. Next, we briefly describe the scenario for preparing m-photon hyperentanglement. As depicted in Fig. 6, this can be theoretically achieved by cascading SPDCs. The legend in Fig. 6 explains the meaning of the black dashed box. Here, m = 3, 4, ⋯ k represent the devices that need to be added to generate m-photon hyperentanglement in polarization-time-bin DOFs. m+3, m+4, a_m-2, b_m-2 represent the path labels. The subscripts m and m-1 of HWP_m, ppKTP_m-1, PBS_m-1, DM_m-1 represent the number of optical devices required to generate m-photon hyperentanglement in polarization-time-bin DOFs. Without loss of generality, we assume that m is even. Then the quantum state of successfully generated m-photon hyperentanglement can be represented as Eq. (<ref>). Similarly, we will provide an in-depth analysis of multi-photon events in the context of multi-photon hyperentanglement in polarization-time-bin DOFs in the appendix. 1/(√(2))^2m-4[(|HV… VH⟩+|VH… HV⟩) ⊗(|t_1t_1… t_1t_1⟩+|t_2t_2… t_2t_2⟩)]_b_1a_1 … b_m-2a_m-2. § DISCUSSION AND CONCLUSION We have provided a detailed description of the protocol for generating polarization-space mode and polarization-time-bin cascaded hyperentangled states. Under actual experimental conditions, the laser source may emits multi-photon events with a certain probability. In actual experiments, the repetition frequency of the laser lies between 10^6 and 10^9 Hz <cit.>. 
We can calculate the number of generated hyperentangled pairs (see the Appendix for more details), assuming that the repetition rate of pulses is 10^9 Hz and the ppKTP efficiency is 7.6*10^-6 <cit.>, as N_tot = F*∑_n=0^∞p(n)*P_n^m(tot)_succ = F*(1-e^-μ p_s^(m-1)). Here F denotes the repetition frequency of laser. p(n) = e^-μμ^n/n! is the probability for the n-photon emission, which follows the Poisson distribution with the mean photon number μ <cit.>. N_tot is the amount of the generated hyperentanglement per second. To obtain a more intuitive conclusion, we took the logarithm of variable N_tot. As shown in Fig. 7, the horizontal axis represents generating m-photon hyperentanglement, and the vertical axis represents the number of hyperentangled pairs generated. When repetition frequency of laser and ppKTP are given, the number of hyperentangled pairs generated increases with the increase in the average photon number of the laser source. This is because multi-photon events provide positive contribution. Specifically, if the repetition rate of the laser is 10^9 Hz and the ppKTP efficiency is 7.6*10^-6, we can respectively get three-photon hyperentanglement approximately 2.89*10^-2 pairs per second for μ =0.5, 5.78*10^-2 pairs per second for μ = 1, 1.16*10^-1 pairs per second for μ = 2, and 2.31*10^-1 pairs per second for μ = 4. Due to the positive gain from multi-photon events, the number of generated hyperentangled pairs increases with a higher mean photon number. However, in practical experiments, a larger average photon number may lead to crystal breakdown and device damage. Therefore, the specific parameter requirements need to be determined based on practical considerations. From Fig. 7, we can also observe that as the value of m increases, the generated photon pairs decrease. This is evident because a larger m value implies a lower probability of successful cascading. If the repetition rate of the laser is 10^9 Hz, the efficiency of ppKTP is 7.6*10^-6 and μ =1, the number of the generation of three-photon and four-photon hyperentanglement after cascading can reach approximately 5.78*10^-2 and 4.44*10^-7 pairs per second, respectively. For our cascaded generation of m-photon hyperentanglement source, we can represent the state as follows, |Cas⟩_m = Pr(m,0)|0⟩ + Pr(m,1)|ϕ⟩ + Pr(m,2)|ϕ⟩^⊗2 +⋯ + Pr(m,n)|ϕ⟩^⊗ n. Here, |0⟩, |ϕ⟩, |ϕ⟩^⊗2 and |ϕ⟩^⊗ n represent the generation of zero pair, one pair, two pairs, and n pairs of m-photon hyperentanglement, respectively. |ϕ⟩ is the m-photon hyperentanglement like Eqs. (<ref>) and (<ref>). Despite the preparation of entanglement has achieved significant advancements <cit.>, the preparation of multi-photon hyperentangled states remains some challenges in current experimental conditions, demanding highly precise control to ensure entanglement between photons. For instance, the SPDC process necessitates satisfying phase matching conditions, and the requirements in the cascaded SPDC source scenario may be even more stringent. Taking Fig. 4 as an example, in the nonlinear crystal ppKTP, a pump photon with a frequency of ω_p will, with a relatively low probability, undergo down-conversion, splitting into a pair of twin photon with frequencies ω_1 and ω_2 respectively. Clearly, this process must satisfy energy conservation: ħω_p = ħω_1 + ħω_2. Here, ħ is the reduced Planck constant. Subsequently, the photon with frequency ω_1 will, again with a certain probability, undergo SPDC, splitting into another pair of twin photon with ω_3 and ω_4. 
This cascaded SPDC process naturally satisfy energy conservation: ħω_p = ħω_2 + ħω_3 + ħω_4. From the Ref. <cit.>, we can obtain a simple expression for the frequency-space of this cascaded SPDC source. Φ_C_3≈∫_ω_2∫_ω_3G_1(ω_2, ω_p - ω_2) G_2(ω_3, ω_p - ω_2 - ω_3) a_1^†(ω_2)a_2^†(ω_3)a_3^†(ω_p - ω_2 - ω_3)|0⟩ dω_2dω_3. Here, G_1(ω_2, ω_p - ω_2) and G_2(ω_3, ω_p - ω_2 - ω_3) represent the joint spectral functions generated by phase matching conditions in the first and second ppKTP crystals respectively. a_1^†(ω_2), a_2^†(ω_3) and a_3^†(ω_p - ω_2 - ω_3) are the creation operators. Obviously, this cascaded approach can be extended to more photons (although the probability of successful implementation will be very small). As shown in Fig. 6, the entire cascaded SPDC process also needs to satisfy the energy conservation ħω_p = ħω_2^i-1 + ħω_2^i+⋯+ħω_2^i+1-2. Note that for the sake of convenience, we provide the energy conservation condition using the example of generating 2^i entangled photons. Here, ω_2^i-1, ω_2^i, ⋯, ω_2^i+1-2 represent the frequencies of the split photons after SPDC. As mentioned in <cit.>, we can also provide a simplified expression for an n-photon state in frequency-space. Φ_C_2^i≈∫_ω_2^i-1⋯∫_ω_2^i+1-3G_1(ω_1, ω_2) G_2(ω_3, ω_4) × G_i(ω_2^i+1-3, ω_2^i+1-2) a_1^†(ω_2^i-1)a_2^†(ω_2^i)⋯ a_2^i^†(ω_2^i+1-2)|0⟩ dω_2^i-1⋯ dω_2^i+1-2, where G_i(ω_2^i+1-3, ω_2^i+1-2) represents the joint spectral functions generated by phase matching conditions in the i-th ppKTP crystal. a_2^i^†(ω_2^i+1-2) means the creation operator for the i-th photon. During the preparation of entanglement, the birefringence effect of the crystal will introduce a relative phase between the down-converted photons. Taking Eq. (<ref>) as an example, |HV⟩ and |VH⟩ will have a relative phase θ, directly resulting in the prepared quantum state not being an ideally maximally entangled state. Fortunately, as early as 1995, Zeilinger et al. showed that by using of an additional birefringent phase shifter or by slightly rotating the converting crystal itself, the value of θ can be adjusted as desired, e.g., set to 0 or π <cit.>. Ursin et al. also pointed out that it is possible to compensate for the phase difference caused by the different group velocities of the pump light and down-converted photons in ppKTP by using a dual-wavelength HWP <cit.>. Therefore, birefringent devices can be employed to achieve entangled states with a relative phase of 0 as required in Eq. (<ref>), and a similar approach can be applied to realize other cascade quantum states without elaborating further here. The efficiency of conversion under non-linear crystals is also a major factor affecting the generation of entanglement. In the three-photon entangled state generation using cascaded photons directly from the source as proposed by Hübel et al., it was noted that the down-conversion efficiency of SPDC is extremely low <cit.>. In nonlinear crystal barium borate (BBO), the down-conversion efficiency for each pump photon can only reach 10^-11 <cit.>. With the development of nonlinear optics, materials such as periodically poled lithium niobate (PPLN) and ppKTP have improved the efficiency to 10^-9 <cit.>. By introducing waveguides, the down-conversion efficiency can be further enhanced to 10^-6 <cit.>. Moreover, in the cascaded SPDC source proposed by Hamel et al. 
for generating three-photon polarization entanglement, the authors indicated that the down-conversion efficiency can reach (6.9±0.7)*10^-6 <cit.> and (1±0.1)*10^-6 <cit.>, respectively. In addition, the preparation of three-photon time-energy entanglement has also been proposed and experimentally demonstrated by researchers <cit.>. Although the above approaches are associated with low counting rates, it is believed that the efficiency of down-conversion will continue to improve with advancements in research technology and an increase in researchers' expertise, thereby increasing the amount of entanglement that can be generated. Brightness characterizes the rate of entanglement generation. Numerous factors influence brightness, including the efficiency of down-conversion in the crystal, transmission losses, device losses, detector efficiency, and dark counts. Taking Fig. 1 as an example, suppose the coincident counts for spatial modes c_1, c_2, and b_2 are denoted as C_c_1c_2b_2, and those for spatial modes d_1, d_2, and a_2 are denoted as C_d_1d_2a_2. Then, the brightness of the source can be expressed as C_c_1c_2b_2+C_d_1d_2a_2. Consequently, in the scenario depicted in Fig. 3, the brightness of the source can be analogously represented as C_d_2m-4d_2m-3… d_2d_1+C_c_2m-4c_2m-3… c_2c_1. Here, C_d_2m-4d_2m-3… d_2d_1 and C_c_2m-4c_2m-3… c_2c_1 represent the coincident counts for spatial modes d_2m-4, d_2m-3… d_2, d_1 and c_2m-4, c_2m-3… c_2, c_1, respectively. In conclusion, we proposed protocols for the direct generation of three- and four-photon hyperentanglement with cascaded down-conversion, with the hyperentangled states encoded in polarization-spatial-mode and polarization-time-bin degrees of freedom, respectively. We also extended this approach to multi-photon hyperentangled states. The main advantage of these protocols is that they do not rely on the post-selection strategy, and the produced states are the desired multi-photon hyperentangled states. This work has the potential to demonstrate that combining multiparticle entanglement with multiple DOFs can provide an efficient route to increase both the number of effective qubits and the capacity of future quantum communication and quantum networks. § APPENDIX This appendix provides additional computations concerning the generation probability of multi-photon hyperentanglement, aiming to support and extend the quantitative analyses presented in the main text. The following detailed descriptions of additional calculations contribute to a more comprehensive understanding of the research outcomes presented in this paper. We start by illustrating the successful generation of three-photon hyperentanglement in polarization-spatial-mode DOFs in the case of a two-photon event. In cases leading to the creation of a pair of three-photon hyperentanglement, two distinct scenarios arise. In the first scenario, one photon is selected from the two-photon event and successfully undergoes splitting on ppKTP_1 and ppKTP_2, while the other photon fails to split on ppKTP_1. In the second scenario, one photon is chosen from the two-photon event and successfully undergoes splitting on ppKTP_1 and ppKTP_2, but the other photon successfully splits on ppKTP_1 and fails to split on ppKTP_2. This probability can be expressed by the following equation. P_2^3(1)_succ = C_2^1p_s^2(1-p_s)+C_2^1p_s^2p_s(1-p_s), where P_2^3(1)_succ represents the probability of a two-photon event successfully generating one pair of three-photon hyperentanglement. C denotes the combination calculation.
p_s is the probability of successful splitting on ppKTP. Subsequent success probabilities have similar meanings, and they are not further elaborated in this paper. In the case of generating two pairs of three-photon hyperentanglement, both photons must undergo successful splitting. Therefore, it can be expressed as follows. P_2^3(2)_succ = p_s^2p_s^2. In this way, the total probability of successfully generating three-photon hyperentanglement in two-photon event can be expressed as follows. P_2^3(tot)_succ = P_2^3(1)_succ+P_2^3(2)_succ = C_2^1p_s^2(1-p_s)+C_2^1p_s^2p_s(1-p_s)+p_s^2p_s^2 = 2p_s^2-p_s^4. We can verify the accuracy of our calculations by backward computing the probabilities of failure. Specifically, we categorize cases of failure in a two-photon event (where three-photon hyperentanglement is not generated) into three cases. Case 1 involves both photons not undergoing splitting as they pass through the ppKTP_1. It can be expressed as follows. P_2^3(0)_fail = (1-p_s)(1-p_s). Here, P_2^3(0)_fail represents the probability of failure generating three-photon hyperentanglement in two-photon event, where the failure occurs due to the non-simultaneous splitting of the two photons on ppKTP. The number inside the parentheses indicates the events of failure in multi-photon cases, representing the scenario where a certain number of photons simultaneously underwent splitting on ppKTP_1 but failed to generate three-photon hyperentanglement successfully. The subsequent failure probabilities have similar meanings and will not be reiterated here. Case 2 is when one of the photons successfully undergoes splitting on ppKTP_1, but this photon fails to split on ppKTP_2. This can be expressed as follows. P_2^3(1)_fail = C_2^1p_s(1-p_s)(1-p_s). Case 3 entails both photons successfully undergoing splitting on ppKTP_1 but failing to split on ppKTP_2. The probability can be represented as P_2^3(2)_fail = p_s^2(1-p_s)^2. In this way, the total failure probability in a two-photon event is given by the following expression. P_2^3(tot)_fail = P_2^3(0)_fail+P_2^3(1)_fail+P_2^3(2)_fail = (1-p_s)(1-p_s)+C_2^1p_s(1-p_s)(1-p_s) +p_s^2(1-p_s)^2 = 1-2p_s^2+p_s^4. It can be observed that the sum of the success probability and the failure probability is equal to 1, validating the accuracy of our calculations. Similarly, we conduct numerical calculations for the case of generating three-photon hyperentanglement in a three-photon event. The three-photon event can result in the creation of one pair, two pairs, and three pairs of three-photon hyperentanglement. In the case of generating one pair of three-photon hyperentanglement, there are three scenarios. Scenario 1 involves selecting one photon from the three photons, and this photon successfully undergoes splitting on ppKTP_1 and ppKTP_2, while the remaining two photons fail to split on ppKTP_1. Scenario 2 includes selecting one photon from the three photons, and this photon successfully undergoes splitting on ppKTP_1 and the ppKTP_2, while the remaining two photons undergo successful splitting on ppKTP_1 but fail to split on ppKTP_2. Scenario 3 comprises selecting one photon from the three photons, and this photon successfully undergoes splitting on ppKTP_1 and ppKTP_2, while one of the remaining two photons undergoes successful splitting on ppKTP_1 but fails to split on ppKTP_2, and the last remaining photon fails to split on ppKTP_1. Therefore, we can calculate the probabilities for these three scenarios as follows. 
P_3^3(1)_succ = C_3^1p_s^2(1-p_s)^2+C_3^1p_s^4(1-p_s)^2 +C_3^1p_s^2C_2^1p_s(1-p_s)^2. There are two scenarios in the case of generating two pairs of three-photon hyperentanglement. Scenario 1 involves selecting two photons from the three photons, and these two photons successfully undergo splitting on ppKTP_1 and ppKTP_2, while the remaining photon fails to split on ppKTP_1. Scenario 2 includes selecting two photons from the three photons, and these two photons successfully undergo splitting on ppKTP_1 and ppKTP_2, while the remaining photon successfully undergoes splitting on ppKTP_1 but fails to split on ppKTP_2. The probability of generating two pairs of three-photon hyperentanglement in this case is given by the following expression. P_3^3(2)_succ = C_3^2p_s^4(1-p_s)+C_3^2p_s^5(1-p_s). In the scenario of generating three pairs of three-photon hyperentanglement, there is only one scenario where all three photons successfully undergo splitting on ppKTP_1 and ppKTP_2. The probability in this scenario can be expressed as P_3^3(3)_succ = p_s^6. In this way, the probability of successfully generating three-photon hyperentanglement in a three-photon event can be expressed as follows. P_3^3(tot)_succ = P_3^3(1)_succ+P_3^3(2)_succ+P_3^3(3)_succ = 3p_s^2-3p_s^4+p_s^6. Similarly, we can validate our calculations by backward computing the probabilities of failure. In the case of failure, there are four scenarios. Scenario 1 is when all three photons fail to split on ppKTP_1. This can be expressed as follows. P_3^3(0)_fail = (1-p_s)^3. Scenario 2 involves one photon successfully undergoing splitting on ppKTP_1, but this photon fails to split on ppKTP_2. The probability in this case is given by the following expression. P_3^3(1)_fail = C_3^1p_s(1-p_s)^3. Scenario 3 occurs when two photons successfully split on ppKTP_1, but both of these photons fail to split on ppKTP_2. The probability in this scenario is expressed as follows. P_3^3(2)_fail = C_3^2p_s^2(1-p_s)^3. Scenario 4 entails successful splitting of all three photons on ppKTP_1, while none of these photons successfully splits on ppKTP_2. The probability in this case is given by the following expression. P_3^3(3)_fail = p_s^3(1-p_s)^3. Naturally, the total failure probability is given by the following expression. P_3^3(tot)_fail = P_3^3(0)_fail+P_3^3(1)_fail+P_3^3(2)_fail +P_3^3(3)_fail = 1-3p_s^2+3p_s^4-p_s^6. As expected, the sum of the failure probability and the success probability is equal to 1, validating the accuracy of the calculations for the generation of three-photon hyperentanglement in a three-photon event. In the preceding sections, we calculated the probabilities of generating three-photon hyperentanglement in dual-photon and tri-photon events. We validated the results by computing the probabilities of failure. In practical laser sources, events involving more than three photons may occur. Now, we present the general formula and computation process for the n-photon event. Here, we calculate the probability of success by computing the probability of failure, as it is relatively straightforward. In the case of an n-photon event successfully generating three-photon hyperentanglement, there are n+1 scenarios. Scenario 1 is when all n photons fail to split on ppKTP_1, expressed as follows. P_n^3(0)_fail = (1-p_s)^n. Scenario 2 involves selecting one photon from the n photons, which successfully undergoes splitting on ppKTP_1 while this photon fails to split on ppKTP_2. It can be expressed as follows. 
P_n^3(1)_fail = C_n^1p_s(1-p_s)^n-1(1-p_s). Scenario 3 refers to selecting two photons from the n photons, where both successfully undergo splitting on ppKTP_1 while these two photons fail to split on ppKTP_2. The probability in this case can be expressed as follows. P_n^3(2)_fail = C_n^2p_s^2(1-p_s)^n-2(1-p_s)^2. Continuing in this manner, we can write the probability for scenario i+1. This represents selecting i photons from the n photons, where these i photons successfully undergo splitting on ppKTP_1 and fail to split on ppKTP_2. Here, i∈{0,1,2… n }. The general formula can be expressed as follows. P_n^3(i)_fail = C_n^ip_s^i(1-p_s)^n-i(1-p_s)^i. By summing up the general formula for the probability of failure, we can obtain the overall probability of success, given by the following expression. P_n^3(tot)_succ = 1-∑_i=0^nP_n^3(i)_fail = 1-∑_i=0^nC_n^ip_s^i(1-p_s)^n-i(1-p_s)^i = 1-(1-p_s)^n(1+p_s)^n=1-(1-p_s^2)^n. By substituting n=2 and n=3 into Eq. (<ref>), we can verify the correctness of Eqs. (<ref>) and (<ref>). So far, we have provided the probability of successfully generating three-photon hyperentanglement in the case of multi-photon events. For clarity, we will proceed to calculate the probability of successfully generating four-photon hyperentanglement in multi-photon events. Through these two examples, we aim to derive the probability of successfully generating m-photon hyperentanglement in multi-photon events. Clearly, we should start by calculating the probability of successfully generating four-photon hyperentanglement in a two-photon event, which includes two cases: the creation of one pair of four-photon hyperentanglement and two pairs of four-photon hyperentanglement. The first case consists of two scenarios. In Scenario 1, one photon from the two-photon event successfully undergoes splitting on ppKTP_1, ppKTP_2 and ppKTP_3, while the remaining photon fails to split on ppKTP_1. In Scenario 2, one photon from the two-photon event successfully undergoes splitting on ppKTP_1, ppKTP_2 and ppKTP_3, while the remaining photon successfully undergoes splitting on ppKTP_1 but fails to split on ppKTP_2 and ppKTP_3 simultaneously. In this case, the probability of the first case is given by the following expression. P_2^4(1)_succ = C_2^1p_s^3(1-p_s)+C_2^1p_s^4(1-p_s^2). The second case is obvious that both photons in the two-photon event must successfully undergo splitting on ppKTP_1, ppKTP_2 and ppKTP_3, which is expressed as follows. P_2^4(2)_succ = p_s^6. In this way, the probability of successfully generating four-photon hyperentanglement in a two-photon event can be expressed as follows. P_2^4(tot)_succ = P_2^4(1)_succ+P_2^4(2)_succ=2p_s^3-p_s^6. Similar to the consideration in the previous discussion, we can verify the correctness of the probability of success by calculating the probability of failure. This involves three scenarios. In Scenario 1, both photons in the two-photon event fail to split on ppKTP_1. In Scenario 2, one photon from the two-photon event successfully undergoes splitting on ppKTP_1, but this photon fails to split on ppKTP_2 and ppKTP_3 simultaneously. In Scenario 3, both photons from the two-photon event successfully undergo splitting on ppKTP_1, but these two photons fail to split on ppKTP_2 and ppKTP_3 simultaneously. The probabilities for these three scenarios can be represented by Eqs. (<ref>), (<ref>) and (<ref>), respectively. P_2^4(0)_fail = (1-p_s)^2. P_2^4(1)_fail = C_2^1p_s(1-p_s)(1-p_s^2). P_2^4(2)_fail = p_s^2(1-p_s^2)^2. 
In this case, the probability of not successfully generating four-photon hyperentanglement in a two-photon event is given by the following expression. P_2^4(tot)_fail = P_2^4(0)_fail+P_2^4(1)_fail+P_2^4(2)_fail = (1-p_s)^2+C_2^1p_s(1-p_s)(1-p_s^2) +p_s^2(1-p_s^2)^2 = 1+p_s^6-2p_s^3. We can observe that this is consistent with the fact that the sum of the probability of failure and the probability of success equals 1. The successful generation of four-photon hyperentanglement in a three-photon event includes three cases: the creation of one pair, two pairs, and three pairs of four-photon hyperentanglement. In the case of creating one pair, there are three scenarios. In Scenario 1, one photon from the three-photon event successfully undergoes splitting on ppKTP_1, ppKTP_2, and ppKTP_3, while the remaining photon fail to split on ppKTP_1. In Scenario 2, one photon from the three-photon event successfully undergoes splitting on ppKTP_1, ppKTP_2 and ppKTP_3, while one of the remaining photons successfully undergoes splitting on ppKTP_1 but fails to split on ppKTP_2 and ppKTP_3 simultaneously. In Scenario 3, one photon from the three-photon event successfully undergoes splitting on ppKTP_1, ppKTP_2, and ppKTP_3, while the remaining two photons successfully undergo splitting on ppKTP_1, but these two photons fail to split on ppKTP_2 and ppKTP_3 simultaneously. In this case, the probability of creating one pair of four-photon hyperentanglement is given by the following expression. P_3^4(1)_succ = C_3^1p_s^3(1-p_s)^2+C_3^1p_s^3C_2^1p_s(1-p_s^2)(1-p_s) + C_3^1p_s^5(1-p_s^2)^2. In the case of creating two pairs of four-photon hyperentanglement, there are two scenarios. Scenario 1 involves three photons, where two photons successfully undergo splitting on ppKTP_1, ppKTP_2, and ppKTP_3, while the remaining photon fails to split on ppKTP_1. Scenario 2 involves three photons, where two photons successfully undergo splitting on ppKTP_1, ppKTP_2, and ppKTP_3, while the remaining photon successfully undergoes splitting on ppKTP_1 but fails to split on ppKTP_2 and ppKTP_3 simultaneously. In this case, the probability of creating two pairs of four-photon hyperentanglement is given by the following expression. P_3^4(2)_succ = C_3^2p_s^6(1-p_s)+C_3^2p_s^7(1-p_s^2). The case of creating three pairs of four-photon hyperentanglement is straightforward, as it requires all three photons to successfully undergo splitting on ppKTP_1, ppKTP_2, and ppKTP_3. In this case, the probability of creating three pairs of four-photon hyperentanglement can be expressed as follows. P_3^4(3)_succ = p_s^9. The total probability of a three-photon event successfully generating four-photon hyperentanglement can be written as P_3^4(tot)_succ = P_3^4(1)_succ+P_3^4(2)_succ+P_3^4(3)_succ = C_3^1p_s^3(1-p_s)^2+C_3^1p_s^3C_2^1p_s(1-p_s^2) ×(1-p_s)+C_3^1p_s^5(1-p_s^2)^2 + C_3^2p_s^6(1-p_s)+C_3^2p_s^7(1-p_s^2)+ p_s^9 = 3p_s^3-3p_s^6+p_s^9. Similarly, we calculate the probability of failure separately to confirm the correctness of the probability of success. This involves four scenarios. In Scenario 1, all three photons fail to split on ppKTP_1. In Scenario 2, one photon from the three-photon event successfully undergoes splitting on ppKTP_1, but this photon fails to split on ppKTP_2 and ppKTP_3 simultaneously. In Scenario 3, two photons from the three-photon event successfully undergo splitting on ppKTP_1, but these two photons fail to split on ppKTP_2 and ppKTP_3 simultaneously. 
In Scenario 4, all three photons from the three-photon event successfully undergo splitting on ppKTP_1, but these photons fail to split on ppKTP_2 and ppKTP_3 simultaneously. The probabilities for these four scenarios can be represented by Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), respectively. P_3^4(0)_fail = (1-p_s)^3. P_3^4(1)_fail = C_3^1p_s(1-p_s^2)(1-p_s)^2. P_3^4(2)_fail = C_3^2p_s^2(1-p_s^2)^2(1-p_s). P_3^4(3)_fail = p_s^3(1-p_s^2)^3. The failure probability of a three-photon event generating four-photon hyperentanglement is obtained from Eq. (<ref>), confirming the correctness of our conclusion. P_3^4(tot)_fail = P_3^4(0)_fail+ P_3^4(1)_fail+ P_3^4(2)_fail +P_3^4(3)_fail = 1-3p_s^3+3p_s^6-p_s^9. So far, we have provided the probabilities of generating four-photon hyperentanglement in two-photon and three-photon events. Afterwards, we will present the general formula for n-photon events and the total probability of generating four-photon hyperentanglement in multi-photon events. For ease of computation, we will illustrate using the probability of failure. This involves n+1 scenarios. In Scenario 1, none of the n photons successfully undergo splitting on ppKTP_1. In Scenario 2, one photon from the n photons successfully undergoes splitting on ppKTP_1, but this photon fails to split on ppKTP_2 and ppKTP_3 simultaneously. In Scenario 3, two photons from the n photons successfully undergo splitting on ppKTP_1, but these two photons fail to split on ppKTP_2 and ppKTP_3 simultaneously. The probabilities for these three scenarios can be represented by Eqs. (<ref>), (<ref>), and (<ref>), respectively. P_n^4(0)_fail = (1-p_s)^n. P_n^4(1)_fail = C_n^1p_s(1-p_s^2)(1-p_s)^n-1. P_n^4(2)_fail = C_n^2p_s^2(1-p_s^2)^2(1-p_s)^n-2. Continuing in this manner, we present the general formula for the probability in Scenario i+1, which can be represented as P_n^4(i)_fail = C_n^ip_s^i(1-p_s^2)^i(1-p_s)^n-i. In this way, the probability of successfully generating four-photon hyperentanglement in an n-photon event can be expressed as follows. P_n^4(tot)_succ = 1-∑_i=0^nP_n^4(i)_fail = 1-∑_i=0^nC_n^ip_s^i(1-p_s^2)^i(1-p_s)^n-i = 1-(1-p_s^3)^n. It can be observed that Eq. (<ref>) satisfies the conclusions of Eqs. (<ref>) and (<ref>), providing evidence of the correctness of our calculations. So far, we have completed the calculations for the probability of successfully generating three- and four-photon hyperentanglement in n-photon events. Depending on the experimental requirements, researchers may also seek to generate hyperentanglement with more photons, such as five-photon hyperentanglement, six-photon hyperentanglement, or more. It is necessary to provide a general formula for the probability of successfully generating m-photon hyperentanglement in an n-photon event. Naturally, the calculations for successfully generating three- and four-photon hyperentanglements have already given us enough inspiration to address this issue. Specifically, for ease of understanding, we will also divide it into n+1 scenarios. In Scenario 1, none of the n photons successfully undergo splitting on ppKTP_1. In Scenario 2, one photon from the n photons successfully undergoes splitting on ppKTP_1, but this photon fails to split simultaneously on the remaining m-2 ppKTP crystals. In Scenario 3, two photons from the n photons successfully undergo splitting on ppKTP_1, but these two photons fail to split simultaneously on the remaining m-2 ppKTP crystals. 
The probabilities for the three scenarios mentioned above can be represented by Eqs. (<ref>), (<ref>), and (<ref>), respectively. P_n^m(0)_fail = (1-p_s)^n. P_n^m(1)_fail = C_n^1p_s(1-p_s^m-2)(1-p_s)^n-1. P_n^m(2)_fail = C_n^2p_s^2(1-p_s^m-2)^2(1-p_s)^n-2. Continuing in this manner, the general formula for the probability of failure in Scenario i+1 can be expressed as follows. P_n^m(i)_fail = C_n^ip_s^i(1-p_s^m-2)^i(1-p_s)^n-i. Thus, by summing up the probabilities of failure, we can ultimately obtain the total probability of successfully generating m-photon hyperentanglement in an n-photon event. P_n^m(tot)_succ = 1-∑_i=0^nP_n^m(i)_fail = 1-∑_i=0^nC_n^ip_s^i(1-p_s^m-2)^i(1-p_s)^n-i = 1-(1-p_s^m-1)^n. Thus, we can obtain Eq. (<ref>) as follows, N_tot = F*∑_n=0^∞p(n)*P_n^m(tot)_succ = F*∑_n=0^∞e^-μμ^n/n!*[1-(1-p_s^m-1)^n] = F*[∑_n=0^∞e^-μμ^n/n!-∑_n=0^∞e^-μμ^n/n!(1-p_s^m-1)^n] = F*{1-∑_n=0^∞e^-μ[μ(1-p_s^m-1)]^n/n!} = F*[1-e^-μ+μ(1-p_s^m-1)] = F*(1-e^-μ p_s^(m-1)). Combining Eqs. (<ref>) and (<ref>), we can generalize the probability of generating one pair of three-photon hyperentanglement in an n-photon event as follows, P_n^3(1)_succ = C_n^1p_s^2(1-p_s)^n-1+C_n^1p_s^2C_n-1^1p_s(1-p_s)^n-1 +C_n^1p_s^2C_n-1^2p_s^2(1-p_s)^n-1+⋯ +C_n^1p_s^2C_n-1^n-1p_s^n-1(1-p_s)^n-1 = C_n^1p_s^2(1-p_s)^n-1(1+p_s)^n-1 = C_n^1p_s^2(1-p_s^2)^n-1. Obtaining such results is expected, as Eq. (<ref>) represents an n-photon event where one photon successfully undergoes down-conversion on two crystals, resulting in one pair of hyperentangled photons, while the remaining n-1 photons fail to undergo down-conversion simultaneously on the two crystals. Similarly, by combining Eqs. (<ref>) and (<ref>), we can derive the probability of generating two pairs of three-photon hyperentanglement in an n-photon event. P_n^3(2)_succ = C_n^2p_s^4(1-p_s^2)^n-2. Afterwards, we can obtain the probabilities of generating one pair and two pairs of four-photon hyperentanglement in an n-photon event as well. P_n^4(1)_succ = C_n^1p_s^3(1-p_s^3)^n-1. P_n^4(2)_succ = C_n^2p_s^6(1-p_s^3)^n-2. In this way, by combining Eqs. (<ref>), (<ref>), (<ref>) and (<ref>), we can ultimately obtain the probability of generating r pairs of m-photon hyperentanglement in an n-photon event. P_n^m(r)_succ = C_n^rp_s^r(m-1)(1-p_s^m-1)^n-r. If considering a Poissonian distribution in laser source, the probability of generating r pairs in the case of m-photon hyperentanglement can be represented as Pr(m,r) = ∑_n=0^∞p(n)C_n^rp_s^r(m-1)(1-p_s^m-1)^n-r = ∑_n=0^∞e^-μμ^n1/r!(n-r)!p_s^r(m-1)(1-p_s^m-1)^n-r ∑_t=0^∞e^-μμ^tμ^r1/r!t!p_s^r(m-1)[1-p_s^(m-1)]^t = ∑_t=0^∞e^-μ[μ(1-p_s^m-1)]^t/t!μ^r/r!p_s^r(m-1) = e^-μμ^r/r!e^μ(1-p_s^m-1)p_s^r(m-1) = μ^r/r!e^-μ p_s^m-1p_s^r(m-1). Take three-photon hyperentanglement as an example, we can obtain the ratio between two pairs and one pairs, which is Pr(3,2)/Pr(3,1)=μ p_s^2/2. For μ=1, p_s=7.6*10^-6, this ratio can reach about 2.88*10^-11. For our cascaded generation of m-photon hyperentanglement source, we can represent the state as follows, |Cas⟩_m = Pr(m,0)|0⟩ + Pr(m,1)|ϕ⟩ + Pr(m,2)|ϕ⟩^⊗2 +⋯ + Pr(m,n)|ϕ⟩^⊗ n. Here, |0⟩, |ϕ⟩, |ϕ⟩^⊗2 and |ϕ⟩^⊗ n represent the generation of zero pair, one pair, two pairs, and n pairs of m-photon hyperentanglement, respectively. |ϕ⟩ is the m-photon hyperentanglement like Eqs. (<ref>) and (<ref>). 
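The closed forms derived in this appendix are easy to verify mechanically. The Python/SymPy sketch below is an illustrative check rather than part of the protocol: it confirms that summing the failure terms reproduces P_n^m(tot)_succ = 1-(1-p_s^m-1)^n for several (n, m), and then evaluates the pair rate N_tot and the pair-number ratio Pr(3,2)/Pr(3,1) with the parameters quoted in the main text (F = 10^9 Hz, p_s = 7.6*10^-6); the ranges of n, m and μ scanned here are arbitrary choices.

import math
import sympy as sp

p = sp.symbols("p_s", positive=True)

# (1) Closed form P_n^m(tot)_succ = 1 - (1 - p_s^(m-1))^n: summing the failure
#     terms C(n,i) * p_s^i * (1 - p_s^(m-2))^i * (1 - p_s)^(n-i) over i gives
#     exactly (1 - p_s^(m-1))^n, checked here for several (n, m).
for n in range(1, 6):
    for m in (3, 4, 5):
        fail = sum(sp.binomial(n, i) * p**i * (1 - p**(m - 2))**i * (1 - p)**(n - i)
                   for i in range(n + 1))
        assert sp.expand(fail - (1 - p**(m - 1))**n) == 0

# (2) Pair rate N_tot = F * (1 - exp(-mu * p_s^(m-1))), with the repetition rate
#     F = 1e9 Hz and splitting efficiency p_s = 7.6e-6 quoted in the main text.
F, ps = 1e9, 7.6e-6
def n_tot(m, mu):
    return F * (1.0 - math.exp(-mu * ps ** (m - 1)))

for mu in (0.5, 1, 2, 4):
    print(f"m=3, mu={mu}: {n_tot(3, mu):.2e} pairs/s")   # 2.89e-2 ... 2.31e-1
print(f"m=4, mu=1: {n_tot(4, 1):.2e} pairs/s")           # ~4.4e-7 (the text quotes ~4.44e-7)

# (3) Pair-number distribution Pr(m, r) = mu^r/r! * exp(-mu*p_s^(m-1)) * p_s^(r(m-1));
#     the two-pair to one-pair ratio for m = 3 equals mu * p_s^2 / 2 ~ 2.9e-11.
def pr(m, r, mu):
    return mu**r / math.factorial(r) * math.exp(-mu * ps ** (m - 1)) * ps ** (r * (m - 1))

print(f"Pr(3,2)/Pr(3,1) = {pr(3, 2, 1.0) / pr(3, 1, 1.0):.2e}")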
§ ACKNOWLEDGEMENT We gratefully thank Cen-Xiao Huang, Chao Zhang, and Xiao-Ming Hu in University of Science and Technology of China for helpful discussion about the brightness, coincidence efficiency, and the crystal coherence of the generation protocols. This work is supported by the National Natural Science Foundation of China under Grant Nos. 92365110, 12175106 and 11974189, and Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant No. KYCX23-1027. 99 QKD1A. K. Ekert, Quantum cryptography based on Bell's theorem, Phys. Rev. Lett. 67, 661 (1991). QSS1M. Hillery, V. Bužek, and A. Berthiaume, Quantum secret sharing, Phys. Rev. A 59, 1829 (1999). QSDC1G. L. Long and X. S. Liu, Theoretically efficient high-capacity quantum-key-distribution scheme, Phys. Rev. A 65, 032302 (2002). QSDC2F. G. Deng, G. L. Long, and X. S. Liu, Two-step quantum direct communication protocol using the Einstein-Podolsky-Rosen pair block, Phys. Rev. A 68, 042317 (2003). onestepY. B. Sheng, L. Zhou, and G. L. Long, One-step quantum secure direct communication, Sci. Bull. 67, 367 (2022). entanglement1D. Magde and H. Mahr, Study in ammonium dihydrogen phosphate of spontaneous parametric interaction tunable from 4400 to 16 000 Å, Phys. Rev. Lett. 18, 905 (1967). entanglement2C. K. Hong, Z. Y. Ou, and L. Mandel, Measurement of subpicosecond time intervals between two photons by interference, Phys. Rev. Lett. 59, 2044 (1987). entanglement3Y. H. Shih and A. V. Sergienko, Observation of quantum beating in a simple beam-splitting experiment: Two-particle entanglement in spin and space-time, Phys. Rev. A 50, 2564 (1994). entanglement4P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, and P. H. Eberhard, Ultrabright source of polarization-entangled photons, Phys. Rev. A 60, R773(R) (1999). entanglement5C. E. Kuklewicz, M. Fiorentino, G. Messin, F. N. C. Wong, and J. H. Shapiro, High-flux source of polarization-entangled photons from a periodically poled KTiOPO_4 parametric down-converter, Phys. Rev. A 69, 013807 (2004). entanglement6M. Fiorentino, G. Messin, C. E. Kuklewicz, F. N. C. Wong, and Jeffrey H. Shapiro, Generation of ultrabright tunable polarization entanglement without spatial, spectral, or temporal constraints, Phys. Rev. A 69, 041801(R) (2004). entanglement7J. Brendel, N. Gisin, W. Tittel, and H. Zbinden, Pulsed Energy-time entangled twin-photon source for quantum communication, Phys. Rev. Lett. 82, 2594 (1999). task2Z. R. Zhou, Y. B. Sheng, P. H. Niu, L. G. Yin, and G. L. Long, Measurement-device-independent quantum secure direct communication, Sci. China Phys. Mech. Astron. 63, 230362 (2020). task3L. Zhou, Y. B. Sheng, and G. L. Long, Device-independent quantum secure direct communication against collective attacks, Sci. Bull. 65, 12 (2020). task4Z. T. Qi, Y. H. Li, Y. W. Huang, J. Feng, Y. L. Zheng, and X. F. Chen, A 15-user quantum secure direct communication network, Light: Sci. Appl. 10, 183 (2021). hyperentangled1 P. G. Kwiat, Hyper-entangled states, J. Mod. Opt. 44, 2173 (1997). hyperentangled2 J. T. Barreiro, N. K. Langford, N. A. Peters, and P. G. Kwiat, Generation of hyperentangled photon pairs, Phys. Rev. Lett. 95, 260501 (2005). hyperentangled3F. G. Deng, B. C. Ren, and X. H. Li, Quantum hyperentanglement and its applications in quantum information processing, Sci. Bull. 62, 46 (2017). PS1 G. Vallone, R. Ceccarelli, F. De Martini, and P. Mataloni, Hyperentanglement of two photons in three degrees of freedom, Phys. Rev. A 79, 030301(R) (2009). PS2 Y. B. Sheng and F. G. 
Deng, One-step deterministic polarization-entanglement purification using spatial entanglement, Phys. Rev. A 82, 044305 (2010). PS3 G. Vallone, G. Donati, R. Ceccarelli, and P. Mataloni, Six-qubit two-photon hyperentangled cluster states: Characterization and application to quantum computation, Phys. Rev. A 81, 052301 (2010). PS4M. Y. Wang, H. Guo, F. L. Yan, and T. Gao, Quantum remote implementation with polarization-temporal hyperentanglement, Phys. Rev. Applied 20, 044016 (2023). PF1 Y. Y. Chen, S. Ecker, S. Wengerowsky, L. Bulla, S. K. Joshi, F. Steinlecchner, and R. Ursin, Polarization entanglement by time-reversed Hong-Ou-Mandel interference, Phys. Rev. Lett. 121, 200502 (2018). PF2 Y. Y. Chen, S. Ecker, J. Bavaresco, T. Scheidl, L. X. Chen, F. Steinlechner, M. Huber, and R. Ursin, Verification of high-dimensional entanglement generated in quantum interference, Phys. Rev. A 101, 032302 (2020). PF3H. H. Lu, M. Alshowkan, K. V. Myilswamy, A. M. Weiner, J. M. Lukens, and N. A. Peters, Generation and characterization of ultrabroadband polarization-frequency hyperentangled photons, Opt. Lett. 48, 6031 (2023). PO1 T. M. Graham, J. T. Barreiro, M. Mohseni, and P. G. Kwiat, Hyperentanglement-enabled direct characterization of quantum dynamics, Phys. Rev. Lett. 110, 060404 (2013). PO2 X. L. Wang, X. D. Cai, Z. E. Su, M. C. Chen, D. Wu, L. Li, N. L. Liu, C. Y. Lu, and J. W. Pan, Quantum teleportation of multiple degrees of freedom of a single photon, Nature 518, 516 (2015). PO3 T. M. Graham, H. J. Bernstein, T. C. Wei, M. Junge, and P. G. Kwiat, Superdense teleportation using hyperentangled photons, Nat. Commun. 6, 7185 (2015). PO4 D. Bhatti, J. von. Zanthier, and G. S. Agarwal, Entanglement of polarization and orbital angular momentum, Phys. Rev. A 91, 062303 (2015). PO5 T. M. Zhao, Y. S. Ihn, and Y. H. Kim, Direct generation of narrow-band hyperentangled photons, Phys. Rev. Lett. 122, 123607 (2019). PO6Q. Hu, Y. Ren, X. Wang, X. M. Hu, and J. Jing, Path-orbital-angular-momentum high-dimensional hyperentangled photons from a warm atomic ensemble, Phys. Rev. A 105, 062422 (2022). PT1 F. Steinlechner, S. Ecker, M. Fink, B. Liu, J. Bavaresco, M. Huber, T. Scheidl, and R. Ursin, Distribution of high-dimensional entanglement via an intra-city free-space link, Nat. Commun. 8, 15971 (2017). PT2 J. C. Chapman, T. M. Graham, C. K. Zeitler, H. J. Bernstein, and P. G. Kwiat, Time-bin and polarization superdense teleportation for space applications, Phys. Rev. Appl. 14, 014044 (2020). PT3 Y. W. Huang, J. Feng, Y. H. Li, Z. T. Qi, C. Y. Lu, Y. L. Zheng, and X. F. Chen, High-performance hyperentanglement generation and manipulation based on lithium niobate waveguides, Phys. Rev. Appl. 17, 054002 (2022). High1 J. T. Barreiro, T. C. Wei, and P. G. Kwiat, Beating the channel capacity limit for linear photonic superdense coding, Nat. Phys. 4, 282 (2008). High2 X. M. Hu, Y. Guo, B. H. Liu, Y. F. Huang, C. F. Li, and G. C. Guo, Beating the channel capacity limit for superdense coding with entangled ququarts, Sci. Adv. 4, eaat9304 (2018). BSM1 S.P. Walborn, S. Pádua, C. H. Monken, Hyperentanglement-assisted Bell-state analysis, Phys. Rev. A 68, 042313 (2003). BSM2 Y. B. Sheng and F. G. Deng, Deterministic entanglement purification and complete nonlocal Bell-state analysis with hyperentanglement, Phys. Rev. A 81, 032307 (2010). BSM3 Y. B. Sheng, F. G. Deng, and G. L. Long, Complete hyperentangled-Bell-state analysis for quantum communication, Phys. Rev. A 82, 032318 (2010). BSM4 T. J. Wang, Y. Lu, and G. L. 
Long, Generation and complete analysis of the hyperentangled Bell state for photons assisted by quantum-dot spins in optical microcavities, Phys. Rev. A 86, 042337 (2012). purification1B. C. Ren, F. F. Du, and F. G. Deng, Hyperentanglement concentration for two-photon four-qubit systems with linear optics, Phys. Rev. A 88, 012302 (2013). purification2B. C. Ren, F. F. Du, and F. G. Deng, Two-step hyperentanglement purification with the quantum-state-joining method, Phys. Rev. A 90, 052309 (2014) purification3 X. M. Hu, C. X. Huang, Y. B. Sheng, L. Zhou, B. H. Liu, Y. Guo, C. Zhang, W. B. Xing, Y. F. Huang, C. F. Li, and G. C. Guo, Long-Distance Entanglement purification for quantum communication, Phys. Rev. Lett. 126, 010503 (2021). purification4 S. Ecker, P. Sohr, L. Bulla, M. Huber, M. Bohmann, and R. Ursin, Experimental single-copy entanglement distillation, Phys. Rev. Lett. 127, 040506 (2021). purification5 C. X. Huang, X. M. Hu, B. H. Liu, L. Zhou, Y. B. Sheng, C. F. Li, and G. C. Guo, Experimental one-step deterministic polarization entanglement purification, Sci. Bull. 67, 593 (2022). three1L. Achatz, L. Bulla, S. Ecker, E. A. Ortega, M. Bartokos, J. C. Alvarado-Zacarias, R. Amezcua-Correa, M. Bohmann, R. Ursin, and M. Huber, Simultaneous transmission of hyper-entanglement in three degrees of freedom through a multicore fiber, NPJ Quantum Inf. 9, 45 (2023). three2P. Zhao, M. Y. Yang, S. Zhu, L. Zhou, W. Zhong, M. M. Du, and Y. B. Sheng, Generation of hyperentangled state encoded in three degrees of freedom, Sci. China Phys. Mech. Astron. 66, 100311 (2023). GHZ1 D. M. Greenberger, M. A. Horne, and A. Zeilinger, Going beyond bell's theorem, in Bell's Theorem, Quantum Theory and Conceptions of the Universe, edited by M. Kafatos (Springer Netherlands, Dordrecht, 1989), pp. 69-72. application-qsdc D. M. Greenberger, M. A. Horne, A. Shimony, and A. Zeilinger, Bell's theorem without inequalities. Am. J. Phys. 58, 1131 (1990). application-qtF. G. Deng, C. Y. Li, Y. S. Li, H. Y. Zhou, and Y. Wang, Symmetric multiparty-controlled teleportation of an arbitrary two-particle entanglement, Phys. Rev. A 72, 022338 (2005). distributed1J. J. Niu, L. Zhang, Y. Liu, J. W. Liu, W. H. Huang, J. X. Huang, H. Jia, J. W. Liu, Z. Y. Tao, W. W. Wei, Y. X. Zhou, W. J. Zou, Y. Z. Chen, X. W. Deng, X. H. Deng, C. K. Hu, L. Hu, J. Li, D. Tan, Y. Xu, F. Yan, T. X. Yan, S. Liu, Y. P. Zhong, A. N. Cleland, and D. P. Yu, Low-loss interconnects for modular superconducting quantum processors, Nat. Electron. 6, 235 (2023). ion1 H. Häffner, W. Hänsel, C. F. Roos, J. Benhelm, D. Chek-al-kar, M. Chwalla, T. Körber, U. D. Rapol, M. Riebe, P. O. Schmidt, C. Becher, O. Gühne, W. Dür, and R. Blatt, Scalable multiparticle entanglement of trapped ions, Nature 438, 643 (2005). ion2 H. Kaufmann, T. Ruster, C. T. Schmiegelow, M. A. Luda, V. Kaushal, J. Schulz, D. von Lindenfels, F. Schmidt-Kaler, and U. G. Poschinger, Scalable creation of long-lived multipartite entanglement, Phys. Rev. Lett. 119, 150503 (2017). photon1 Y. F. Huang, B. H. Liu, L. Peng, Y. H. Li, L. Li, C. F. Li, and G. C. Guo, Experimental generation of an eight-photon Greenberger-Horne-Zeilinger state, Nat. Commun. 2, 546 (2011). photon2 H. S. Zhong, Y. Li, and W. Li, L. C. Peng, Z. E. Su, Y. Hu, Y. M. He, X. Ding, W. J. Zhang, H. Li, L. Zhang, Z. Wang, L. X. You, X. L. Wang, X. Jiang, L. Li, Y. A. Chen, N. L. Liu, C. Y. Lu, and J. W. 
Pan, 12-Photon entanglement and scalable scattershot boson sampling with optimal entangled-photon pairs from parametric down-conversion, Phys. Rev. Lett. 121, 250505 (2018). NVP. Neumann, N. Mizuochi, F. Rempp, P. Hemmer, H. Watanabe, S. Yamasaki, V. Jacques, T. Gaebel, F. Jelezko, and J. Wrachtrup, Multipartite entanglement among single spins in diamond, Science 320, 1326 (2008). multihyperentanglement2Z. X. Cui, L. Zhou, W. Zhong, and Y. B. Sheng, Measurement-device-independent quantum key distribution with hyper-encoding, Sci. China Phys. Mech. Astron. 62, 110311 (2019). multihyperentanglement4S. Song, Y. Cao, Y. B. Sheng, and G. L. Long, Complete Greenberger-Horne-Zeilinger state analyzer using hyperentanglement, Quantum Inf. Process. 12, 381 (2013). multihyperentanglement5L. Zhou, P. S. Yan, W. Zhong, and Y. B. Sheng, High efficient multipartite entanglement purification using hyperentanglement, arXiv:2101.08920 (2021). multihyperentanglement6P. S. Yan, L. Zhou, and Y. B. Sheng, Single-copy entanglement purification for Greenberger-Horne-Zeilinger states, J. Opt. Soc. Am. B 40, 2050 (2023). 6photonD. Ding, Y. Q. He, F. L. Yan, and T. Gao, Generation of six-photon hyperentangled states, Acta Phys. Sin. 64, 160301 (2015). multihyperentanglement1X. L. Wang, Y. H. Luo, M. C. Chen, Z. E. Su, C. Liu, C. Chen, W. Li, Y. Q. Fang, X. Jiang, J. Zhang, L. Li, N. L. Liu, C. Y. Lu, and J. W. Pan, 18-Qubit entanglement with six photons' three degrees of Freedom, Phys. Rev. Lett. 120, 260502 (2018). cascade1H. Hübel, D. R. Hamel, A. Fedrizzi, S. Ramelow, K. J. Resch, and T. Jennewein, Direct generation of photon triplets using cascaded photon-pair sources, Nature 466, 601 (2010). cascade2D. R. Hamel, L. K. Shalm, H. Hübel, A. J. Miller, F. Marsili, V. B. Verma, R. P. Mirin, S. Nam, K. J. Resch, and T. Jennewein, Direct generation of three-photon polarization entanglement, Nat. Photonics 8, 801 (2014). SagnacE. J. Post, Sagnac effect, Rev. Mod. Phys. 39, 475 (1967). Kwiat J. C. Chapman, C. C. W. Lim, and P. G. Kwiat, Hyperentangled time-bin and polarization quantum key distribution, Phys. Rev. Appl. 18, 044027 (2022). Hz1 H. Wang, Y. M. He, T. H. Chung, H. Hu, Y. Yu, S. Chen, X. Ding, M. C. Chen, J. Qin, X. X. Yang, R. Z. Liu, Z. C. Duan, J. P. Li, S. Gerhardt, K. Winkler, J. Jurkat, L. J. Wang, N. Gregersen, Y. H. Huo, Q. Dai, S. Y. Yu, S. Höfling, C. Y. Lu, and J. W. Pan, Towards optimal single-photon sources from polarized microcavities, Nat. Photon. 13, 770 (2019). Hz2 N. Tomm, A. Javadi, N. O. Antoniadis, D. Najer, M. C. Löbl, A. R. Korsch, R. Schott, S. R. Valentin, A. D. Wieck, A. Ludwig, and R. J. Warburton, A bright and fast source of coherent single photons, Nat. Nanotechnol. 16, 399 (2021). Hz3 R. Uppu, F. T. Pedersen, Y. Wang, C. T. Olesen, C. Papon, X. Zhou, L. Midolo, S. Scholz, A. D. Wieck, A. Ludwig, and P. Lodahl, Scalable integrated single-photon source, Sci. Adv. 6, 50 (2020). poissonF. H. Xu, X. F. Ma, Q. Zhang, H. K. Lo, and J. W. Pan, Secure quantum key distribution with realistic devices, Rev. Mod. Phys. 92, 025002 (2020). review1L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Entanglement in many-body systems, Rev. Mod. Phys. 80, 517 (2008). review2R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Quantum entanglement, Rev. Mod. Phys. 81, 865 (2009). review3 J. W. Pan, Z. B. Chen, C. Y. Lu, H. Weinfurter, A. Zeilinger, and M. Żukowski, Multiphoton entanglement and interferometry, Rev. Mod. Phys. 84, 777 (2012). review4A. Anwar, C. Perumangatt, F. Steinlechner, T. 
Jennewein, and A. Ling, Entangled photon-pair sources based on three-wave mixing in bulk crystals, Rev. Sci. Instrum. 92, 041101 (2021). cascade3L. K. Shalm, D. R. Hamel, Z. Z. Yan, C. Simon, K. J. Resch, and T. Jennewein, Three-photon energy-time entanglement, Nat. Phys. 9, 19 (2012). phase-difference1P. G. Kwiat, K. Mattle, H. Weinfurter, A. Zeilinger, A. V. Sergienko, and Y. H. Shih, New High-intensity source of polarization-entangled photon pairs, Phys. Rev. Lett. 75, 4337 (1995). -11C. Kurtsiefer, M. Oberparleiter, and H. Weinfurter, Generation of correlated photon pairs in type-II parametric down conversion-revisited, J. Mod. Opt. 48, 1997 (2001). -9 A. Fedrizzi, T. Herbst, A. Poppe, T. Jennewein, and A. Zeilinger, A wavelength-tunable fiber-coupled source of narrowband entangled photons, Opt. Exp. 15, 15377 (2007). -6S. Tanzilli, H. De Riedmatten, W. Tittel, H. Zbinden, P. Baldi, M. De Micheli, D. B. Ostrowsky, and N. Gisin, Highly efficient photon-pair source using periodically poled lithium niobate waveguide, Electron. Lett. 37, 26 (2001). cascade2addZ. M. E. Chaisson, P. F. Poitras, M. Richard, Y. C. Page, P. H. Glinel, V. Landry, and D. R. Hamel, Phase-stable source of high-quality three-photon polarization entanglement by cascaded down-conversion, Phys. Rev. A 105, 063705 (2022).
TSE-PI: Target Sound Extraction under Reverberant Environments with Pitch Information
Yiwen Wang, Xihong Wu
June 13, 2024
==================================================================================
§ ABSTRACT Target sound extraction (TSE) separates the target sound from mixture signals based on provided clues. However, the performance of existing models degrades significantly under reverberant conditions. Inspired by auditory scene analysis (ASA), this work proposes a TSE model provided with pitch information, named TSE-PI. Conditional pitch extraction is achieved through the Feature-wise Linearly Modulated layer with the sound-class label. A modified Waveformer model combined with pitch information, employing a learnable Gammatone filterbank in place of the convolutional encoder, is used for target sound extraction. The inclusion of pitch information is aimed at improving the model's performance. The experimental results on the FSD50K dataset show a 2.4 dB improvement in target sound extraction under reverberant environments when incorporating pitch information and the Gammatone filterbank. § INTRODUCTION The cocktail party problem shows that humans have an extraordinary ability to attend selectively to a target sound under complex acoustic environments, such as noise and reverberation <cit.>. Motivated by the need to bridge the gap between human auditory perception and machine hearing, researchers have developed TSE models <cit.>. Target sound extraction aims to separate the desired sound from a mixture of various sound events, given a specific clue leading to the target sound event. Common clue conditions can be divided into classes <cit.>, enrollment information <cit.>, query-based separation <cit.>, etc. In addition, multi-cue and multi-modal cues are also used to achieve separation <cit.>. Most of the above methods have achieved excellent performance under anechoic conditions. However, there remains a gap between the performance of TSE models under complex environments and that of the human auditory system. Recently, there have been several discussions on target sound extraction under reverberation. These models aim to extract desired sounds from complex acoustic mixtures, such as those encountered in the real world. Veluri et al. proposed a real-time Waveformer for binaural processing, which can be applied to real scenarios <cit.>. Choi and Choi introduced a transformer-based TSE model to extract reverberant sounds using the Dense Frequency-Time Attentive Network (DeFT-AN) architecture <cit.>, in which a complex short-time Fourier transform (STFT) mask is generated by supplying the sound-class label. These methods are effective to some extent under reverberant conditions but still fall short of the results achieved under anechoic conditions. To further improve target sound extraction performance under reverberant environments, it is necessary to draw on the robustness of the auditory system. ASA is a critical process for understanding and interpreting complex sound environments where multiple sound sources coexist <cit.>. Pitch information plays a vital role in the ASA process. Pitch, corresponding to the fundamental frequency (f0) of the harmonics, contributes to perceptual segregation. The theory of computational auditory scene analysis (CASA), proposed by Wang and Brown <cit.>, shows that pitch information, regarded as a discriminative clue, is helpful for bottom-up foreground separation <cit.>. The auditory system remains robust regardless of how complex the acoustic scene is.
Temporal coherence analysis shows that humans simultaneously tend to focus on a single auditory stream. In the conventional CASA, there is a process of top-down auditory selective attention and bottom-up auditory stream formation <cit.>. Tasks for target sound extraction are simplified. That is, the separation of foreground sounds is achieved on the premise that clues such as categories are provided. Under such an assumption, only the bottom-up foreground segregation is considered. During the bottom-up process, pitch information is the leading perceptual feature to be noticed. Therefore, referring to CASA, we propose a two-stage target sound extraction model in complex acoustic scenarios. Specifically, a conditional pitch extraction model is proposed to extract the target pitch information belonging to the target sound. With the pitch information, direct sound is separated with a modified Waveformer architecture. The main contributions of this paper are: * A two-stage target sound extraction network is proposed. For the first stage, pitch information of the target direct sound under reverberation conditions is extracted. For the second stage, target sound extraction is achieved with pitch information extracted from the first stage. A modified Waveformer is chosen as the target sound separation network. [Code of our work is available on <https://github.com/wyw97/TSE_PI>] * A learnable Gammatone filter bank is introduced for conditional sound source separation. The pitch information contains frequency information, and the Gammatone filterbank often simulates the spectral analysis of the cochlea <cit.>. * The proposed target sound extraction model guided by pitch information brings about 2.4 dB improvements under reverberant conditions. Experimental results show that the bottom-up foreground sound separation in the CASA framework has essential guidance for the TSE task. The subsequent sections of the paper are organized as follows: Section 2 introduces the proposed two-stage pitch-guided target sound extraction. The experimental setup is described in Section 3, while the experimental results are reported in Section 4. Finally, conclusions are drawn in Section 5. § METHOD The pipeline of the proposed two-stage target sound extraction model with pitch information (TSE-PI) is shown in Figure <ref>. This section discusses the implementation methods of these two stages in detail. In this study, a single-channel received signal y_M, L∈ R^T positioned at location L comes from N sound sources s_n(n=1, ...,N). T is the signal duration. The received mixture signal is given as y_M, L =∑_i=1^N s_i∗ h_i, L + bn, where h_i, L represents the impulse response from the sound source to the receiver position L, ∗ represents convolution, and bn refers to the background noise. For the given class label c, supposing that the sound source s_ic belongs to class c, the goal of the work is to separate from the mixture signal to obtain the direct-path signal of s_ic, ŷ_ic, L = s_ic∗ d_ic, L, where ŷ_ic, L denotes the direct-path signal generated from s_ic, and d_ic, L is the direct part of the corresponding h_ic, L. §.§ Stage 1: Conditional pitch extraction Pitch information of the given class label is estimated through the first stage. The pitch feature is extracted from the harmonic structure of the signal amplitude spectrum. Recently, there have been several representative works using deep learning to achieve pitch extraction <cit.>. 
Convolution models are commonly used to extract spectral features, and fully connected layers (FCL) are selected for mapping from harmonic features to pitch information. Current works perform well in multi-pitch and multi-track pitch extraction tasks. To enable the model to attend to the pitch information of a specific class of sounds, the Feature-wise Linearly Modulated (FiLM) layer is introduced to achieve target pitch extraction by modulating the output channels of each convolutional layer <cit.>. We follow the basic framework for pitch estimation with a temporal convolutional network (TCN) as described in <cit.>. Specifically, let F^l∈ R^c × h × w denote the output of the l-th convolutional layer, where c is the number of kernels and h and w are the height and width of F^l. The FiLM modulates each layer as FiLM(F_i^(l)|γ_i^(l), β_i^(l)) = γ_i^(l)F_i^(l) + β_i^(l), where F_i^(l)∈ R^h × w is the i-th channel of F^(l) and γ_i^(l), β_i^(l) are the corresponding entries of the modulation vectors γ^(l), β^(l)∈ R^c. The modulation parameters are trained together with the other parameters of the model. The details of the other parts of the model are introduced in <cit.>. §.§ Stage 2: Target sound extraction In the second stage, pitch information is added to the existing target sound extraction model proposed by Veluri et al. <cit.>. Since pitch information plays a vital role in the spectral features, the pitch information extracted in the previous stage is concatenated with the features of the mixture signal along the channel dimension. Features are extracted through the 1-D convolutional encoder. The pitch information obtained in the first stage is expressed as a one-hot encoding, consistent with the method described in <cit.>. Once pitch information is added, an explicit spectral representation is more natural than the representation learned by the convolutional encoder. Inspired by the excellent performance of the auditory system, a Gammatone filterbank (GTFB) is introduced for spectral analysis to further strengthen the connection to pitch information. A GTFB with learnable parameters benefits universal sound separation (USS) performance, as shown in <cit.>. Following this work, we apply the encoding process based on the learnable GTFB to target sound extraction under reverberant conditions. § EXPERIMENTAL FRAMEWORK §.§ Datasets introduction Experiments are carried out on the FSD50K dataset <cit.>. Twenty-seven sound classes are selected from the dataset, as shown in Figure <ref>. Each class contains at least 40 4-second samples. Reverberant signals are mixed with a signal-to-noise ratio (SNR) uniformly sampled from -5 dB to 5 dB. The mixtures are generated by combining samples drawn from different event classes. After obtaining the mixed signal, background noise with an SNR of 40 dB is added. All input audio is resampled to 16 kHz. In the experiment, a microphone is installed on a rigid ball with a radius of 8 cm. The microphone is positioned on the equatorial plane of the rigid sphere, parallel to the ground. The room impulse responses (RIRs) are simulated according to <cit.>. The room size is uniformly sampled from 3.0m × 3.0m × 2.5m to 8.0m × 8.0m × 4.0m. The reverberation time (RT60) is sampled from 0.2s to 0.8s. The positions of the rigid ball and the sound source are guaranteed to be at least 0.8m away from the walls, and the distance between the sound source and the center of the rigid ball is sampled within the range from 0.6m to 2.0m. Pitch information is extracted from the single direct-path sound source with Praat <cit.>.
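As a concrete illustration of how a reverberant training example of this kind can be assembled (the reverberant mixture from the signal model in Section 2 as the network input, and the direct-path target as the label), a minimal Python sketch follows. The function names, the exact scaling convention, and the use of scipy are our own assumptions for illustration; this is not the released implementation.
import numpy as np
from scipy.signal import fftconvolve

def mix_at_snr(signal, interference, snr_db):
    # scale `interference` so that 10*log10(P_signal / P_interference) equals snr_db
    p_s = np.mean(signal ** 2)
    p_i = np.mean(interference ** 2) + 1e-12
    gain = np.sqrt(p_s / (p_i * 10.0 ** (snr_db / 10.0)))
    return signal + gain * interference

def make_example(s_target, s_interf, rir_target, rir_interf, rir_target_direct, rng):
    # reverberant sources: each source convolved with its room impulse response
    rev_t = fftconvolve(s_target, rir_target)[: len(s_target)]
    rev_i = fftconvolve(s_interf, rir_interf)[: len(s_interf)]
    # mix the two event classes at an SNR drawn uniformly from [-5, 5] dB
    mixture = mix_at_snr(rev_t, rev_i, snr_db=rng.uniform(-5.0, 5.0))
    # add weak background noise at 40 dB SNR
    mixture = mix_at_snr(mixture, rng.standard_normal(len(mixture)), snr_db=40.0)
    # training label: target source convolved with only the direct part of its RIR
    label = fftconvolve(s_target, rir_target_direct)[: len(s_target)]
    return mixture.astype(np.float32), label.astype(np.float32)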
RIRs for training, validation, and testing are 10000, 2000, and 5000. The total number of reverberation samples is 50000, 5000, and 5000, respectively. §.§ Experimental details For conditional pitch extraction, the frequency of pitch ranges from C1 (32.7 Hz) to B6 (1975.5 Hz) with 20 cents of intervals in the logarithmic scale <cit.>. Cross-entropy is used as the loss function <cit.>. The learning rate and batch size are set to 10^-4 and 32, respectively. The network model, optimized with Adam <cit.>, is implemented using pytorch_lightning <cit.>, and distributed data-parallel (DDP) is set to achieve data parallelism on multiple GPUs. To measure the accuracy of conditional pitch estimation, Raw Pitch Accuracy (RPA) is used to achieve the pitch estimation results of frame-by-frame signals <cit.>. Cosine similarity (COSS) is chosen to evaluate the estimation performance for the sequence-level pitch accuracy. For target sound extraction, the configuration of the network training remains the same as <cit.>. Batch size and training epochs are 32 and 80, respectively. The learning rate is initialized as 5 × 10^-4 while halving the learning rate after 40 epochs. The network is trained with a combination of 90% SNR and 10% scale-invariant-signal-to-noise-ratio (SI-SNR) loss. The improvements of SNR and SI-SNR (SNRi, SI-SNRi) are evaluated for the extracted sound. § RESULTS AND DISCUSSION §.§ Pitch extraction performances Table <ref> shows the performance of conditional pitch extraction under different numbers of TCN layers. To compare the effects of condition inputs, concatenate, named Concat, is used for comparison. Besides, a newly proposed attention-based TCN method (short as FiLMAtten) is introduced for comparison <cit.>. The results show that the performance improves as the TCN layer's depth increases. Deeper TCN models can effectively extract frequency characteristics and capture harmonic patterns, leading to more accurate pitch extraction results in reverberant environments. The results also show that the FiLM method is better than the Concat method. Adding the attention mechanism does not bring advantages in pitch extraction. §.§ Target sound separation performances Table <ref> compares the performance of target sound extraction. DPRNN <cit.>, trained with permutation invariant training (PIT) <cit.>, is chosen as a reference method. Under reverberant conditions, the performance of both DPRNN and Waveformer is reduced. Pitch information obtained in the first stage improves the model’s performance under reverberant conditions. Besides, the training ratio for SNR and SI-SNR loss is verified under reverberant conditions, as shown in Table <ref>. The results suggest that the training ratio of the two loss functions is similar to the anechoic sound results <cit.>. Table <ref> shows the results for GTFB, where (l) represents the learnable GTFB and (f) is short for fixed parameters. The results show that learnable GTFB can better utilize pitch information than the 1-D convolution encoder. Different filter lengths and kernel numbers are used to select optimal parameters. Under the optimal parameters, the reverberation performance based on GTFB (SI-SNRi, 9.51 dB) surprisingly exceeds the results without reverberation (9.23 dB). The proposed TSE-PI brings about 2.4 dB SI-SNR improvements compared with the original Waveformer. However, it should be pointed out that the performance under different parameter conditions is quite different, which remains to be further analyzed in subsequent studies. 
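For reference, the SNR and SI-SNR values quoted in this section, and their improvements SNRi and SI-SNRi measured relative to the unprocessed mixture, follow the standard definitions; a minimal sketch is given below. It reflects the usual formulation in the literature rather than the exact code of the implementation used here.
import numpy as np

def si_snr(est, ref, eps=1e-8):
    # scale-invariant SNR in dB; both signals are made zero-mean first
    est = est - est.mean()
    ref = ref - ref.mean()
    proj = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref  # projection of est onto ref
    noise = est - proj
    return 10.0 * np.log10((np.dot(proj, proj) + eps) / (np.dot(noise, noise) + eps))

def snr(est, ref, eps=1e-8):
    # plain SNR in dB with (est - ref) as the error term
    err = est - ref
    return 10.0 * np.log10((np.dot(ref, ref) + eps) / (np.dot(err, err) + eps))

# SI-SNRi for a test item: si_snr(model_output, direct_target) - si_snr(mixture, direct_target)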
Figure <ref> compares the SI-SNRi results under optimal parameters for each class. The results show that pitch information provides improvements under most conditions. Using GTFB further improves the performance. This result validates our analysis of the robustness of the auditory system and confirms the effectiveness of the bottom-up process in ASA mechanisms. § CONCLUSION This paper proposes a novel target sound extraction model with pitch information (TSE-PI). Inspired by the human auditory system, pitch information and Gammatone filterbanks are introduced to improve performance under reverberant conditions. We plan to extend our method to multiple microphones under real-world reverberant scenarios with self-supervised schemes. § ACKNOWLEDGEMENT This work is supported in part by the Major Program of the National Social Science Fund of China (No. 22&ZD318), and the High-performance Computing Platform of Peking University.
http://arxiv.org/abs/2406.08969v1
20240613095803
New Factorizations of Yang-Mills Amplitudes
[ "Yong Zhang" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2406.08972v1
20240613100315
On the block size spectrum of the multiplicative Beta coalescent
[ "Frederic Alberti", "Florin Boenkost", "Fernando Cordero" ]
math.PR
[ "math.PR" ]
§ ABSTRACT In this work we introduce the (exchangeable, but non-consistent) multiplicative Λ-coalescent, which accounts for the connected components (blocks) of the dynamic Λ-random graph. This is the random graph analogue of the classical Λ-coalescent studied in mathematical population genetics; it is an exchangeable and consistent random graph process. This work considers the case where Λ is the beta measure. We prove a dynamic law of large numbers for the numbers of blocks containing 1,...,d elements. In addition, we provide a functional limit theorem for the fluctuations around its deterministic trajectory. The limit process satisfies a stochastic differential equation of Ornstein-Uhlenbeck type. Keywords: coalescent process, random graphs, functional limit theorem, Poisson representation, Ornstein-Uhlenbeck type process. MSC 2020 classification: Primary: 60J90, Secondary: 05C80, 60F17, 60F15. § TRASH Collection of useful references for introduction: <cit.>, general references for multiplicative coalescents <cit.> * General reference for random networks: <cit.> * Classification of exchangeable, dynamic random graphs: <cit.>; not quite applicable because our dynamic Λ-random graph does not have càdlàg paths (as a process on infinite graphs), but perhaps still worth mentioning. * In <cit.> a multiplicative coalescent is studied, where in each step, after an Exp(1) waiting time, a random number K of elements is chosen according to some probability distribution and all blocks which contain at least one of these elements are merged. They study the number of isolated vertices and the block size distribution. However, the construction there is not consistent as one increases n. * In <cit.> they study the number of blocks N(t) in a Λ-coalescent for small values of t as the Λ-coalescent comes down from ∞. In the case of a Beta(2-α,α), α∈ (1,2) coalescent they prove N(t) t^1/(α-1)→ c_α as t → 0, almost surely and in L_p, p ≥ 1. * In <cit.> they prove a fluctuation result for the number of blocks in a Λ-coalescent. For a Beta(1-β,α)-coalescent the rescaled fluctuations satisfy ε^-1/(1+β)(N_ε t/ v_ε t -1) → Z(t) as ε→ 0, where Z(t) is a (1+β)-stable process with an asymmetric Lévy noise. (This seems to correspond well to our case.) * <cit.> has the same kind of result as we do for classical Beta-coalescents. This paper seems to be very close to ours. Interestingly, they have the same time scaling as we do (n^α-1). However, they get a polynomial decay for the number of blocks, whereas we get exponential decay? * Connection to the Smoluchowski equation of the multiplicative coalescent (or more precisely for the mass of particles), as well as the connection to the gelation time. Not sure how relevant this is? Possible references: <cit.>, especially the references therein?; see <cit.>. * Might also be relevant: <cit.>, see Proposition 4 there. Aldous shows for the multiplicative random graph, in the sense that two vertices of weights x_i and x_j are connected with probability 1- exp(-q x_i x_j), that the relative block sizes after this procedure converge to the lengths of the excursions of a Brownian motion. § INTRODUCTION The study of random graphs and networks is an active and rapidly developing area of research <cit.>.
In addition to their rich mathematical structure, random networks find applications in a large number of fields such as biology <cit.>, sociology <cit.>, neuroscience <cit.>, computer science <cit.>, and more. The perhaps most canonical example of a random graph is that of Erdős and Rényi, by now known as the Erdős-Rényi random graph model  <cit.>; see for example <cit.> for more details about random graphs. In that model, a random number of edges is drawn, connecting each pair of edges with a fixed probability. After proper rescaling, the connected components can be described by means of excursions of a Brownian motion with drift <cit.>. Dynamic versions of this model, in which vertices are connected by edges arriving at exponential waiting times, have been investigated as well. It was shown that, after letting the total number of vertices tend to infinity, slowing down time and normalising appropriately, the behaviour of the frequencies of small connected components is captured by a deterministic ordinary differential equation, known as Smoluchowski's coagulation equation <cit.>. By restricting attention to their connected components and ignoring the rest of the graph structure, dynamic random graph models give rise in a natural way to coalescent processes, i.e. stochastic processes that take values in the set of partitions (or, equivalently, equivalence relations) of the set of vertices by declaring two vertices equivalent if they are connected via some path in the graph <cit.>. For instance, the evolution of the components of the dynamic Erdős-Rényi model is known as the multiplicative coalescent <cit.>. Another coalescent process that can be interpreted via an underlying random graph is the additive coalescent, treated alongside the multiplicative case in <cit.>. To a large degree, the interest in coalescent processes is due to their application in mathematical population genetics <cit.>. There, the goal is usually to describe the backward-in-time evolution of the genealogy of a sample of genes taken at present; as one looks further and further into the past, sets/blocks of samples are merged to indicate identity by descent. Owing to biological reality, a main focus has traditionally been on coalescent processes that are exchangeable in the sense that they should be insensitive to arbitrary reordering of labels, and consistent in the sense that the coalescent associated with a subsample should agree with the marginal of the coalescent of the full sample. In 1999, Pitman <cit.> and Sagitov <cit.> independently classified all such coalescents with asynchronous mergers by showing them to be in one-to-one correspondence with finite measures on [0,1]. Later, Schweinsberg <cit.> generalised this classification to allow for simultaneous mergers. By incorporating types one can weaken the usual notion of exchangeability to partial exchangeability. Johnston, Rogers and Kyprianou <cit.> gave a classification of such coalescents analogous to the one in <cit.> for the exchangeable case. A natural extension is to drop (or at least to weaken) the assumption of consistency. One way to do this is to consider coalescent processes that may not be consistent themselves, but are associated with consistent, exchangeable random graph processes. Although a classification of such random graph processes was given in <cit.>, a detailed investigation of their properties has yet to be carried out. 
As a prototypical example, we consider in this work a dynamic random graph model that is associated with a finite measure Λ in a similar way as the classical coalescents in <cit.>. Put simply, in the classical Λ-coalescent one observes u mergers at rate Λ( u)/u^2<∞, in which each block participates independently with probability u. This choice of Λ guarantees that we observe transitions at a finite rate when considering finitely many blocks, resulting in a well-defined coalescent process. In our work we consider a random graph model in which, at rate Λ( u)/u^2, a large community is created by connecting a proportion of vertices with a complete graph. Leaving a more detailed exploration of its structure to future work, we investigate the associated coalescent, which we call the multiplicative Λ-coalescent. The qualifier `multiplicative' is due to the fact that the effective merging rates for different blocks is proportional to the product of their masses. We stress that our model is different from that treated in <cit.>. While that paper also considers an extension of the multiplicative coalescent with multiple mergers, the family sizes stay bounded as the number of vertices increases. In contrast, our family sizes are a positive fraction of the total number of vertices. In line with the tradition in population genetics, we will focus on the special case when Λ is the Beta distribution; this allows for nice, explicit computations, although we expect our results to also hold in (slightly) greater generality. Similar to <cit.>, we derive a dynamic law of large numbers for the number of connected components containing 1,...,d elements, see Remark <ref>. We also prove a non-standard functional limit for the fluctuations around this deterministic limit. Our results mirrors that of <cit.> on the fluctuations of the total number of blocks in the Λ-coalescent when Λ looks like a Beta distribution around the origin. More details can be found in Remark <ref>. The rest of the paper is organised as follows. After introducing our model and stating our main results in Section <ref>, we start Section <ref> by adapting the Poisson integral representation from <cit.> to our setting, which will be the central tool for our proof. These kind of representations are reminiscent of those used in <cit.> or <cit.>. Right after, we give an overview over the structure of the proof before diving into the technical details. Finally, in Section <ref>, we recall some useful results from real analysis that are used elsewhere in the text. § MODEL AND MAIN RESULTS §.§ The dynamic Λ-random graph and its (finite) block spectrum We start this section by introducing the main objects and stating our two main results. We define a Markov process G^n = (G^n_t)_t ⩾ 0 taking values in the set of (undirected) graphs with vertices [n] {1,…,n} and dynamically evolving edges. Let Λ be a finite measure on (0,1] and let N be a Poisson point process on [0,∞) × (0,1] with intensity μ( t, u) t Λ( u)/u^2. For each atom (t,u) of N we colour at time t each vertex independently with probability u and subsequently connect all pairs of coloured vertices. Denoting by E^n_t ⊆ [n]^2 the set of edges of G^n at time t, we set for each atom (t,u) of N E_t^n E_t-^n ∪{{i,j} : i,j ∈ [n], i ≠ j, B_i^ = B_j^ = 1 }, where B_1,…,B_n are independent Bernoulli random variables with success probability u. We call the resulting graph-valued Markov process G^n the Λ-dynamic random graph; see Figure <ref> for an illustration. 
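The definition above translates directly into a simulation. The Python sketch below is our own illustration and not part of the paper: it truncates the infinite-intensity measure Λ( u)/u^2 at a small u_min, since u-mergers with u well below 1/n colour at most one vertex with overwhelming probability and are therefore silent, and it samples the merger size u from the truncated Beta-type density by a crude grid-based inverse CDF. Both shortcuts are simulation conveniences, not part of the model.
import numpy as np

rng = np.random.default_rng(1)

def block_sizes(n, alpha, beta, t_max, u_min=None, grid=20_000):
    # connected-component (block) sizes of the dynamic Beta random graph G^n at time t_max,
    # started from the empty graph on [n]
    if u_min is None:
        u_min = 0.1 / n                       # mergers with u << 1/n are almost surely silent
    u = np.linspace(u_min, 1.0 - 1e-9, grid)
    w = u ** (alpha - 3.0) * (1.0 - u) ** (beta - 1.0)          # density of the merger measure
    rate = float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(u)))   # total rate of (truncated) u-mergers
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    labels = np.arange(n)                     # block label of each vertex
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)      # waiting time to the next atom of N
        if t > t_max:
            break
        u_merge = float(np.interp(rng.random(), cdf, u))        # merger size u of this atom
        coloured = rng.random(n) < u_merge    # colour each vertex independently with probability u
        if coloured.sum() >= 2:               # connecting all coloured pairs merges their blocks
            target = labels[coloured].min()
            labels[np.isin(labels, np.unique(labels[coloured]))] = target
    return np.sort(np.unique(labels, return_counts=True)[1])[::-1]

# for alpha in (0,1) the interesting window is t of order n**(alpha - 1), cf. the rescaling below
print(block_sizes(n=1000, alpha=0.5, beta=1.0, t_max=1000 ** (0.5 - 1.0)))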
In general G_0^n might be any graph with n vertices, however in our case we will usually consider G_0^n=([n],∅). Note that the second set on the right-hand side of Eq. (<ref>) is almost surely empty outside of a locally finite set of t ∈ [0,∞). Therefore, this indeed defines a Markov process with càdlàg paths on the set of undirected graphs with n vertices, if the latter is endowed with the discrete topology. It is natural to associate with G^n a coalescent process, i.e. a Markov process taking values in the partitions of [n], which becomes coarser as connected components merge over time. We call the process Π^n = (Π^n_t)_t ⩾ 0 with Π^n_t being the set of connected components of G^n_t the multiplicative Λ-coalescent on n vertices. Clearly, Π^n is a Markov process in its own right; at each atom (t,u), which we also call a u-merger, of the driving Poisson random measure, colour each vertex in [n] independently with probability u, and mark all blocks that contain at least one coloured vertex. Then, merge all marked blocks. Equivalently, we may skip the colouring step and mark each block A independently with probability p_|A|^ (u) 1 - (1 - u)^|A| = |A| u^|A| + O(u^|A| + 1). This is in contrast to the classical Λ-coalescents studied in mathematical population genetics <cit.>; there, upon a u-merger, each block is marked independently with probability u, regardless of its size. In particular, the probability that a given collection of m blocks A_1,…, A_m participates in a u-merger is u^m. In the multiplicative Λ-coalescent, that probability is ∏_i = 1^m ( 1 - (1-u)^|A| ) = u^m ∏_i = 1^m |A_i| + O(u^m+1). For any fixed n,d ∈_+ and with Π^n as in Definition <ref>, we call the process with ^n_t,i | { A ∈Π_t^n : |A| = i } | the block size frequency spectrum (of order d) of Π^n. In the following, our focus will be on the block size frequency spectrum of the multiplicative Beta-coalescent, that is, we fix Λ to be proportional to the Beta distribution, i.e. Λ ( u) _(0,1)(u) u^α - 1 (1 - u)^β - 1 u for some fixed α∈ (0,1) and β > 0, neglecting the normalisation B(α,β)^-1 for ease of notation. We will also fix d ∈_+ in Definition <ref>. We want to study the asymptotics of the block size frequency spectrum as n tends to infinity. To this end, we will normalise by 1/n. It is important to keep in mind that in the parameter regime we are considering here (i.e., α∈ (0,1)), the classical Beta coalescent comes down from infinity, meaning that at every time t>0, it consists of a finite number of blocks of infinite size. But since mergers occur with higher probability in the multiplicative setting as seen in Eq. (<ref>), this will also be true for us; moreover, Eq. (<ref>) then also implies that the rate of merging explodes and we must slow down time to observe a non-trivial limit. Let us be a bit more precise. For a single vertex to participate in a non-silent u-merger, is must be coloured (which happens with probability u) and at least one other vertex needs to be coloured as well (which happens with probability 1 - (1-u)^n). Thus, the rate at which each individual vertex is affected by a non-silent u-merger is ∫_(0,1] u (1-(1-u)^n-1) Λ( u)/u^2 = ∫_(0,1] u^α-2 (1-(1-u)^n-1) (1-u)^β-1 u = n^1-αΓ(α)/1-α + O(1), due to Lemma <ref>. Since there are n vertices in total, this means that after slowing down time by a factor of the order n^α - 1, we expect to see a macroscopic (i.e. of order n) number of vertices involved in merging events per unit time. This motivates the following definition. 
In what follows, we denote by ^n_t 1/n^n_n^α - 1 t the rescaled and normalised block size spectrum of Π^n. Before proceeding with our further investigation of C^n and stating our main results, namely a law of large numbers and a functional limit theorem, note that C^n is a Markov chain in its own right with state space E^n {∈ [0,1]^d ∩1/n^d : ∑_j = 1^d j c_j^≤ 1 }. Before we state its transition rates, we introduce three norms on ^d, namely || ∑_i = 1^d |x_i^| , _2 ( ∑_i=1^d x_i^2 )^1/2 and ∑_i = 1^d i |x_i^ |. We also write _2+(i) {∈^d : = i, ℓ_i^ = 0 }, for the set of integer partitions of i into at least two parts. For any ∈ E^n and ∈^d,we set n := ∏_j=1^d n c_jℓ_j. Then, starting from any state ∈ E^n and for any ∈^d such that - /n ∈ E^n, we observe a transition of of the form → - /n at rate λ^n, > _ (), where λ^n, >_() = n ∫_0^1 ∏_j=1^d (1 - (1-u)^j )^ℓ_j u^α -3 (1-u)^n - + β - 1 (1 - (1-u)^n - n ) u, is the rate at which we see a merging event in which, for each i = 1, …, d, exactly ℓ_i^ blocks of size i are marked along with at least one block of size strictly greater than d. In addition, we observe a transition from → - /n + e__⩽ d ) at rate λ^n, ≤_ (), where λ^n, ⩽_ℓ () = n ∫_0^1 ∏_j=1^d (1 - (1-u)^j )^ℓ_j u^α -3 (1-u)^n - + β - 1 u is the rate at which we see a merging event in which, for each i = 1,…,d, exactly ℓ_i^ blocks of size i are marked and nothing else. Note that the family (Π^n)_n ∈_+ is not consistent, whence we cannot define the multiplicative Λ-coalescent on _+. On the other hand, the family (G^n)_n ∈_+ of underlying graph processes is exchangeable, which allows us to couple Π^n for different n in a natural way. Such a coupling will play an important role in our proofs; we denote it by in everything that follows; see Subsection <ref> for a precise definition. §.§ Results Our first goal is to derive a dynamic law of large numbers for the normalised block size spectrum. That is, we will describe the limit as n →∞ of ^n in terms of an ordinary differential equation. For all ε > 0 and T > 0, lim_n →∞ ( sup_t ∈ [0,T] |^n_t - _t | > ε ) → 0, as n →∞, where _t = (_t,1^, …, _t,d^)_t ⩾ 0 solves the following system of ordinary differential equations / t_t,i^ = ∑_∈_2+(i)∏_j = 1^d (j _t,j^)^ℓ_j^/ℓ_j^ !Γ (α + || - 2) - Γ(α)/1 - α i _t,i^ F_i^ (_t^), 1 ⩽ i ⩽ d with initial condition _0,i = δ_i,1. * Let _0∈ [0,1]^d such that _0 =1, then one can easily relax the assumption on the initial condition and simply assume (_0^n-_0)^2→ 0 for n →∞. The initial condition of the system of ordinary differential equations in (<ref>) then has initial condition _0. * A similar result on the block size spectrum of Beta coalescent coming down from infinity has been obtained recently in <cit.>. They obtain the convergence of the block size spectrum towards polynomials, hence the decay in the frequency of blocks of any size is of polynomial order, whereas in our case the decay of blocks is exponentially fast. Notably, the time rescaling in both models is the same, namely time is slowed down by n^a-1. Since the Beta coalescent comes down from infinity for a∈(0,1) <cit.>, Miller and Pitters also provide a limiting result if one starts the coalescent with infinitely many lineages. In our case the multiplicative coalescent is not consistent in n, hence we do not provide such a result. It is a straightforward exercise to compute in an iterative fashion. 
For i = 1,…,d and t ⩾ 0, _t,i^ = p_i^ (t) ^-i γ t where γ = Γ(α) / (1 - α) and p_i^ is a polynomial of degree i-1 for each i between 1 and d, see Figure <ref> for a plot of _t,1^,…, _t,4^. These polynomials can be computed via the recursion p_i^ (t) = _0,i^ + ∑_∈_2+ (i) Γ ( α + || - 2) ∏_j = 1^d j^ℓ_j/ℓ_j !∫_0^t ∏_j = 1^d p_j(s)^ℓ_j s. In particular p_1(t)=_0,1 and p_2(t) = _0,2 + Γ(α)/2_0,1^2 t p_3(t) = _0,3 + Γ(α+1)( _0,1^3 t/3! + 2 _0,1_0,2 t + Γ(α) _0,1^3 t^2/4). Theorem <ref> can be proved based on its characterisation via transition rates, given in Eqs. (<ref>) and (<ref>), using general theory <cit.>. However, in view of our proof of the functional limit theorem, we will work with a representation via Poisson integrals, see Subsection <ref>, which also provides the appropriate coupling for (^n)_n ⩾ 0 so that the convergence in probability in Theorem <ref> holds. In fact, using this method, we obtain slightly more; we can show that for all ε > 0 and any T > 0, ( sup_0 ⩽ t ⩽ T |^n_t - _t^| ⩾ε ) = O(n^α - 1) as n →∞. The following theorem is a L_2 version of the law of large numbers, which plays a pivotal role in our proof of the functional limit theorem below and also seems interesting in its own right. For all T > 0, sup_0 ⩽ t ⩽ T_t^n - _t^_2^2 = O(n^α -1). Note that Theorem <ref> does not imply Theorem <ref>. Our second main result is a functional limit theorem (FLT) for the fluctuations of ^n around its deterministic limit . For n ∈_+, set σ_n^ n^1-α/2-α and _t^n σ_n^ (_t^n - _t). Then, there exists a Poisson point process on [0,∞) × (0, ∞) with intensity t u^α - 3 u, defined on the same probability space as ^n, such that for any T>0 ( sup_0 ⩽ t ⩽ T |U^n_t,i - U_t,i^ | ⩾ε ) → 0 as n →∞. Here, for all i = 1,…,d, _i = (U_t,i)_t ⩾ 0 is a generalised Ornstein-Uhlenbeck process satisfying U_t,i = ⟨∇ F_i^ (_t^), _t ⟩ t + M_t,i , see (<ref>) for the definition of F_i^; the driving martingale _i is defined as M_t,i -i ∫_[0,t) × [0, ∞)_s,i u ( s u). Put differently, U_i satisfies U_t,i = ⟨∇ F_i^ (_t^), _t^⟩ t - i _t,i^ L_t , where L is a spectrally positive Lévy process with Lévy measure u^α - 3 u driven by . We expect our results to carry over to the more general situation that Λ({0}) = Λ({1}) = 0 and there exists y_0^⩽ 1 such that Λ( y) = g(y) y for, y ∈ [0,y_0^] and lim_y → 0+ g(y) y^1 - α = A, for some A ∈ (0,∞) and α∈ (0,1); see Assumption (A) in <cit.>. However, to keep technicalities in check, we restrict ourselves to measures with densities of the form (<ref>). In particular, a result in this spirit was obtained for the Beta coalescent in <cit.>, where it is shown that the fluctuations of the block counting process of the Beta coalescent around its deterministic limit fulfils a functional limit theorem. More precisely, under the assumption in (<ref>), they prove that the fluctuations are given by (2-α) stable process of Ornstein-Uhlenbeck type, see Theorem 1.2 in <cit.>. However, note that in their work they are able to start the coalescent with infinitely many lineages. § PROOFS §.§ Integral/Poisson representation The main device that is used in the proofs of Theorems <ref>, <ref> and <ref> is a representation of the normalised block size spectrum ^n in terms of an integral equation with respect to a Poisson measure similar to N (see beginning of Section <ref>), but augmented by additional information regarding which blocks are affected by each merger. 
For this, we let N^E (the `E' stands for `extended') be a Poisson point process on [0,∞) × [0,1] × [0,1]^ with intensity μ^E ( t, u, x) t Λ( u)/u^2, where stands for the uniform distribution on [0,1]^. In other words, N^E may be constructed by first constructing the process N as in Eq. (<ref>), and then sampling, independently for each atom (t,u), a third component as a realisation of a random variable = (X_1,X_2,…) where X_i^, i ∈ are independent random variables, uniformly distributed on [0,1]. In analogy with our earlier habit of referring to atoms (t,u) of N as u-mergers, we refer to an atom (t,u,) of N^E as a (u,)-merger. The idea is to index the blocks of Π^n and their elements in a consistent manner by , such that upon an (u,)-merger, the i-th vertex participates in a merger if and only if x_i^⩽ u. Let us be a bit more precise. By exchangeability, we can assume without loss of generality that, whenever _t^n =, say, π = Π^n_tn^α-1 has the following form. * The n c_1^ singleton blocks of π are {1},{2},…,{nc_1^}. * The n c_2^ blocks of π of size 2 are {nc_1^ + 1, nc_1^ + 2}, {nc_1^ + 3, nc_1^ + 4 }, …, { n c_1^ + 2 nc_2^ - 1, nc_1^ + 2 nc_2^}. * In general, the k-th block of size i is given by I_i,k{S_i + (k-1)i + 1,…,S_i + ki } with S_i nc_1^ + 2nc_2^ + … + (i-1)nc_i-1^. Consequently, for any 1 ⩽ i ⩽ d and 1 ⩽ k ⩽ nc_i^, B_i,k{ (u,) ∈ [0,1] × [0,1]^ : ∃ j ∈ I_i,k s.t. x_j^⩽ u } is the set of all (u,) for which the k-th block of size i is marked by an (u,)-merger. We also define B_⩾{ (u,) ∈ [0,1] × [0,1]^ : ∃n + 1 ⩽ j ⩽ n s.t. x_j^⩽ u }, the set of all (u,) for which some block with more than d vertices is marked. Note that B_j,q and B_⩾ all depend implicitly on as well as n. The advantage of this augmented representation is that, having fixed a realisation of N^E, there is no additional randomness; each (u,)-merger induces a unique (possibly trivial) transition of ^n. For 1 ⩽ i ⩽ d, we will write f_i^n,+ (,u,) and f_i^n,- (,u,) for the normalised number of blocks of size i that are gained and lost upon an (u,)-merger. Clearly, every marked block of size i is lost, except when no other vertex outside of that block is coloured. f_i^n,- (,u,) = 1/n∑_k = 1^nc_i^_B_i,k (u,) ( 1 - ∏_r ∈ [n] ∖ I_i,k_x_r^ > u ). On the other hand, we gain a block of size i whenever, for some ∈_2+(i), exactly ℓ_j^ blocks of size j are marked, and no other vertex participates f_i^n,+ (,u,) = 1/n∑_∈_2+ (i)_B_⩾ n^c (u,) ∏_j = 1^d ( ∑_K ⊆ [nc_j^] |K| = ℓ_j^∏_q ∈ K_B_j,q (u,) ∏_r ∈ [nc_j^] ∖ K_B_j,r^c (u,) ). Finally, we also write f_i^n(,u,) f_i^n,+ (,u,) - f_i^n,- (,u,), for the net change in the (normalised) number of blocks of size i upon a (u,)-merger. To account for the time change, we write N_n^E for the image of N^E under the map (t,u,) ↦ (t n^1 - α, u, ). Note that by the Poisson mapping theorem (see, for example, Proposition 11.2 in <cit.>), N_n^E is a Poisson point process with intensity μ_n^E ( t, u, ) n^α -1μ^E ( t, u, ) = n^α -1 t Λ( u)/u^2. With this, we now have for all n ∈_+, t ⩾ 0 and 1 ⩽ i ⩽ d, C_t,i^n = ∫_[0,t) × [0,1] × [0,1]^ f_i^n ( _s^n, u, ) N_n^E ( s, u, ). By decomposing N_n^E into the compensated Poisson measure N_n^E and its intensity μ_n^E, we arrive at C_t,i^n = ∫_[0,t) F_i^n (_s^n) s + M̂_t,i^n, where, for all ∈ E^n, F_i^n (c) ∫_[0,1] × [0,1]^ f_i^n ( , u, ) μ_n^E ( s, u, ) and M̂_t,i^n ∫_[0,t) × [0,1] × [0,1]^ f_i^n ( _s^n, u, ) N_n^E ( s, u, ). We will also let _t^n (M̂_t,1^n,…,M̂_t,d^n); note that ^n (_t^n)_t ⩾ 0 is a martingale with respect to the filtration generated by N_n^E. 
§.§ Structure of the proof / Heuristics Before diving into the computations, we start by outlining the structure of our arguments. In order to show the law of large numbers, we will show that the martingale part M̂^n_t vanishes as n →∞. We will also show (see Lemma <ref>), that the limit of F_i^n in Eq. (<ref>) is given by F_i^ in Theorem <ref>. Taking the limit on both sides of Eq. (<ref>) and exchanging the limit with the integral, we expect _t,i^ = lim_n →∞ C_t,i^n to satisfy the integral equation _t,i^ = ∫_0^t F_i(_t^) s or, equivalently, the ordinary differential equation / t_t,i^ = F_i (_t^). Controlling the L^2 norm of the martingale part will lead to the L^2 version of the law of large numbers in Theorem <ref>. To obtain the functional limit theorem (FLT), we need a finer understanding of the asymptotics of the martingale M̂^n. For that, it is crucial to observe that due to slowing down time by a factor of n^α -1, we will only see u-mergers for very small u and may neglect effects that are of higher order in u. In particular, recall that the probability that any given block of size i is lost during an u-merger is i u + O(u^2) (see Eq. (<ref>)) and the probability that such a block is gained is of order u^2 and thus negligible. Therefore, we expect the gross change in the (normalised) number of blocks of size i to be roughly -iu _t,i^, which suggests the approximation M̂_t,i^n ≈ M_t,i^n - i ∫_[0,t) × [0,1] u _s,i^N_n( s, u), where N_n is a compensated Poisson point process with intensity μ_n^ ( t, u) = n^α-1μ( t, u) = n^α-1 t Λ( u)/u^2. Next, we investigate the asymptotics of the integral on the right-hand side of Eq. (<ref>) and give a heuristic for the scaling σ_n^ in Theorem <ref>. First, note that due to the presence of the factor n^α -1 in the density μ_n^, ^n will vanish as n →∞, hence we need to scale it up by a factor σ_n^ to obtain a nontrivial limit. We see that σ_n^ M_t,i^n = -i ∫_[0,t) × [0,1] (σ_n^ u) _s,i^N_n ( s, u) = -i ∫_[0,t) × [0,σ_n^] u _s,i^N_n^∗ ( s, u) where N_n^∗ is the image of N_n under the map (t,u) ↦ (t, σ_n^ u), which is by the Poisson mapping theorem (see again <cit.>) a Poisson point process with intensity n^α -1 t (u/σ_n^)^α - 3 (1 - u/σ_n^)^β -1 (u / σ_n^) = σ_n^2 - α n^α - 1 t u^α -3 (1- u/σ_n^)^β -1 u, which converges to the intensity of in Theorem <ref> upon choosing σ_n^ = n^(1-α)/(2-α). §.§ Rigorous argument To prepare our proof of Theorem <ref>, we first prove that the functions F_i^n defined in Eq. (<ref>) converge uniformly to F_i^ given in Theorem <ref>. In view of later applications in the proof of Theorem <ref>, we need some quantitative control. For 1 ⩽ i ⩽ d, n ∈_+ and ∈ E^n, we have sup_∈ E^n | F_i^n () - F_i^ () | = O(n^α -1). Here and in the following, O(·) refers to the limit as n→∞. Recall the definition of f_i^n,± in Eqs. (<ref>) as well as (<ref>) and define accordingly, mimicking Eq. (<ref>), F_i^n,± n^α -1∫_[0,1] × [0,1]^ f_i^n,± (,u,) Λ( u)/u^2. Let also F_i^+ () ∑_∈_2+(i)∏_j=1^d (jc_j^)^ℓ_j^/ℓ_j^ !Γ(α + || - 2) and F_i^-() Γ(α)/1-α i c_i^. We will separately show that sup_∈ E^n | F_i^n,± () - F_i^± () | = O(n^α -1). By definition, for i ∈ [2,d] ∩ F_i^n,+() = n^α-1∫_[0,1] × [0,1]^ f_i^n,+ (,u,) Λ( u)/u^2 = n^α - 2∑_∈_2+(i)∫_0^1 (1-u)^n - n∏_j=1^d [ nc_j^ℓ_j^ p_j^(u)^ℓ_j^ (1-u)^j n c_j -j ℓ_j] u^α-3(1-u)^β -1 u = n^α -2∑_∈_2+(i)n∫_0^1 [ ∏_j=1^d p_j^(u)^ℓ_j^] u^α - 3 (1-u)^n - + β -1 u, with p_i^ (u) as in Eq (<ref>). 
We used in the second step that _B_j,q depends only on x_i^ with i ∈ I_j,q, and I_j,q∩ I_h,r = ∅ for j ≠ h, and that _B_⩾ only depends on [n + 1,…,n], which is disjoint from all I_j,q, where for the the third step, we recall the notation n∏_j=1^d nc_j^ℓ_j^. Next, we deal with the u-integration. For this, we need to evaluate for all ∈_2+(i) the limit as n→∞ of the integral ∫_0^1 P_(u) u^α-3 (1-u)^n - + β -1 u with P_(u) ∏_j=1^d p_j^ (u)^ℓ_j^ = u^||∏_j=1^d j^ℓ_j^ + O(u^|| + 1 ). By Lemma <ref>, we have ∫_0^1 P_(u) u^α-3 (1-u)^n - ℓ + β -1 u = n^2 - α - ||∏_j=1^d j^ℓ_j^Γ(α -2 + ||) + O(n^1 - α - ||). Inserting this into Eq. (<ref>) and noting that n = n^||/ !∏_j=1^d c_j^ℓ_j + O(n^|| - 1), uniformly in c with the convention ! ∏_j=1^d ℓ_j!, we see that F_i^n,+ = ∑_∈_2+(i)n ( n^-||Γ(α -2 + ||) ∏_j=1^d j^ℓ_j^ + O(n^-|| - 1 ) = ∑_∈_2+(i)∏_j=1^d (jc_j^)^ℓ_j^/ℓ_j^!Γ(α -2 + ||) + O(n^-1) = F_i^+() + O(n^-1), where the error term is uniform in . Next, we show the convergence of F_i^n,-. Proceeding as before, we have F_i^n,- () = n^α - 1∫_[0,1] × [0,1]^ f_i^n,- (,u,) Λ( u)/u^2 = n^α - 1 c_i^∫_0^1 p_i^ (u) (1 - (1-u)^n-i ) u^α -3 (1-u)^β - 1 u = n^α - 1 i c_i^∫_0^1 u^α - 2 (1 - u)^β - 1 (1 - (1-u)^n-i ) u + O(n^α -1). where we used in the second step that p_i^(u) = i u + O(u^2). From Lemma <ref> we see that ∫_0^1 u^α - 2 (1 - u)^β - 1 (1 - (1-u)^n-i ) = n^1-αΓ(α)/1 - α + O(1), which finishes the proof. Next, we will prove an a-priori estimate for the martingales M̂_i^n. For all 1 ⩽ i ⩽ d and T > 0, (M̂_T,i^n)^2 = O(n^α - 1). We define, decomposing f_i^n in Eq. (<ref>) into f_i^n,±, M̂_t,i^n,±∫_[0,t) × [0,1] × [0,1]^ f_i^n,± ( _s^n, u, ) N_n^E ( s, u, ). By Ito isometry, we have (M̂_T,i^n,+)^2 = ∫_ [0,t) × [0,1] × [0,1]^ f_i^n,+(_s^n,u,)^2 μ_n^ ( s, u) . To evaluate this note that for ∼unif([0,1]^), n f_i^n,+(_s^n,u,) conditional on _s^n is a Bernoulli random variable with success probability ∑_∈_2+(i) (1-u)^n-n ^n_s∏_j=1^d n C^n_s,jℓ_j^ p_j^ (u)^ℓ_j (1-u)^j (n C^n_s,j - ℓ_j^). Thus, the integral on the right-hand side of Eq. (<ref>) evaluates to n^α - 3∫_[0,t) × [0,1]∑_∈_2+ (i) (1-u)^n-n^n_s [ ∏_j=1^d n C^n_s,jℓ_j^ p_j^ (u)^ℓ_j^ (1-u)^j(n C^n_s,j - ℓ_j^)] u^α - 3 (1-u)^β - 1 u s. = n^α -3∑_∈_2+(i)∫_[0,t) × [0,1]n^n_s P_ (u) u^α -3 (1-u)^n - + β -1 u s ⩽ K ∑_∈_2+(i) n^α - 3 + ||∫_0^1 P_ (u) u^α -3 (1-u)^n- + β -1 u, with P_(u) = ∏_j=1^d j^ℓ_j u^|ℓ| + O(u^|| + 1) and some uniform constant K. By Lemma <ref>, we have the estimate ∫_0^1 P_ (u) u^α -3 (1-u)^n- + β -1 u = O(n^2 - α - ||). We have thus shown that (M̂_T,i^n,+)^2 = O(n^-1) = O(n^α -1). We now turn to estimating (M̂_T,i^n,-)^2. Again by Ito isometry, (M̂_T,i^n,-)^2 =∫_ [0,t) × [0,1] × [0,1]^ f_i^n,-(_s^n,u,)^2 μ_n^ ( s, u) . Recalling the definition of f_i^n,- in Eq. (<ref>), we see that for ∼unif([0,1]^) and fixed u ∈ [0,1] and ∈ E^n, there is B ∼Binomial(n,u ) s.t. n f_i^n,-(,u,) is dominated by B _B ⩾ 2 so that we get the bound (M̂_T,i^n,-)^2⩽ K n^α - 3∫_0^1 B^2 _B ⩾ 2^ u^α -3 (1-u)^β -1 u, for some uniform constant K > 0. A short calculation gives B^2 _B ⩾ 2^⩽ n^2 u^2 + n u (1 - ( 1 - u )^n ). Clearly, ∫_0^1 u^α - 1 (1 - u )^β - 1 u < ∞, and by Lemma <ref>, ∫_0^1 u^α - 2 (1 - ( 1 - u )^n ) (1 - u)^β - 1 = O(n^1 - α). Altogether, this shows that (M̂_T,i^n,-)^2 = O(n^α -1) and together with Eq. (<ref>), this concludes the proof. We are now ready to prove the law of large numbers. Let i ∈{1,…,d }, we start from the Poisson representation (<ref>). Using Eq. (<ref>) and the definition of F_i^n in Lemma <ref>, this can be written as follows. 
C_t,i^n = δ_i,1 + ∫_0^t F_i^n (_s^n) s + M̂_t,i^n. By definition, is the solution of the integral equation _t,i^ = δ_i,1 + ∫_0^t F_i (_s^) s. Thus, we have for all t ∈ [0,T] C_t,i^n - _t,i^ = ∫_0^t F_i^n (_s^n) - F_i^ (_s^n) s + ∫_0^t F_i^ (_s^n) - F_i (_s^) s + M̂_t,i^n. Lemma <ref> yields a deterministic bound for the first integral. | ∫_0^t F_i^n (_s^n) - F_i^ (_s^n) s | ⩽∫_0^t | F_i^n (_s^n) - F_i^ (_s^n) | s ⩽ T sup_∈ E^n | F_i^n () - F_i^ () | = O(n^α - 1). Setting D_t^n max_1 ⩽ i ⩽ d | C_t,i^n - _t,i^ |, and noting that F_i^ is smooth, which implies D_t^n ⩽ K ∫_0^t D_s^n s + max_1 ⩽ i ⩽ d |M̂_t,i^n| + O(n^α - 1). For ε' > 0, let A_n,ε' be the event that max_1 ⩽ i ⩽ dsup_t ∈ [0,T] | M̂^n_t,i | ⩽ε'. By Doob's inequality and Lemma <ref>, A_n,ε'⩽ε'^-2max_1 ⩽ i ⩽ d(M̂_T,i^n)^2 = O(n^α-1). Moreover, on the event A_n,ε' and for sufficiently large n we have D_t^n ⩽ K ∫_0^t D_s^n s + 2 ε' and Grönwall's inequality implies that D_t^n ⩽ 2 ε' (1 + T^T) ⩽ε for sufficiently small ε' and for all t ∈ [0,T]. To conclude, we have shown that for all i ∈{1,…,d } sup_t ∈ [0,T] |C_t,i^n - _t,i^| ⩾ε = O(n^α - 1), from which the claim follows by a union bound. The proof of the L^2-version follows along similar lines. Using Eq. (<ref>) and the definition of F_i^n in Lemma <ref>, the representation from Eq. (<ref>) reads C_t,i^n = δ_i,1 + ∫_0^t F_i^n (_s^n) s + M̂_t,i^n. Recalling the definition of , we have for all t ∈ [0,T] C_t,i^n - _t,i^ = ∫_0^t F_i^n (_s^n) - F_i^ (_s^n) s + ∫_0^t F_i^ (_s^n) - F_i (_s^) s + M̂_t,i^n. Bounding the first integral with the help of Lemma <ref> and using the elementary estimate (a+b+c)^2 ⩽ 3(a^2+b^2+c^2), we see that ( C_t,i^n - _t,i^ )^2 ⩽ 3 ( ∫_0^t F_i(_s^n) - F_i (_s^) s )^2 + 3 (M̂_t,i^n)^2 + O(n^α -1) ⩽ 3 ∫_0^t ( F_i (_s^n) - F_i (_s^ ) )^2 s + 3 (M̂_t,i^n)^2 + O(n^α -1), where the second step is an application of Jensen's inequality and the error is uniform in t. Next, we take the maximum over all 1 ⩽ i ⩽ d and the expectation and obtain, using the smoothness of F_i, max_1 ⩽ i ⩽ d ( C^n_t,i - _t,i^ )^2 ⩽ K ∫_0^t max_1 ⩽ i ⩽ d ( C^n_s,i - _s,i^ )^2 s + O(n^α - 1), where we also used Lemma <ref>. The claim follows from Gronwall's inequality. §.§ The functional limit theorem As a first step towards the proof of the functional limit theorem <ref>, we make the approximation in Eq. (<ref>) precise. Recall that σ_n^ = n^1- α/2 - α. Let ^n be as in Eq. (<ref>) and ^n as in Eq. (<ref>). Then, for any T > 0, ε > 0 and i ∈{1,…,d} sup_t ∈ [0,T]σ_n^ | M̂_t,i^n - M_t,i^n | ⩾ε→ 0, as n →∞. By Doob's inequality, it is enough to show that σ_n^2 ( M̂_T,i^n - M_T,i^n )^2 → 0. With ^n = ^n, + - ^n,- as in the proof of Lemma <ref> and by the elementary inequality (a + b)^2 ⩽ 2 a^2 + 2 b^2, ( M̂_T,i^n - M_T,i^n )^2 ⩽ 2 ( M̂_T,i^n,+)^2 + 2 ( M̂_T,i^n,- + M_T,i^n )^2 . We have already shown (see Eq. (<ref>)) that ( M̂_T,i^n,+)^2 = O(n^-1) = o(σ_n^2). By Ito isometry, ( M̂_T,i^n,- + M_T,i^n )^2 = ∫_[0,T) × [0,1] × [0,1]^ (f_i^n,- (C_s^n,u,x) - iu _s,i^ )^2μ_n^ ( s, u) ⩽ 2 ∫_[0,T) × [0,1] × [0,1]^ (f_i^n,- (C_s^n,u,x) - iu C_s,i^n )^2μ_n^ ( s, u) + 2 ∫_[0,T) × [0,1] (iu C_s,i^n - iu _s,i^ )^2μ_n^ ( s, u). We can estimate the second integral with the help of Theorem <ref>. For some constant K > 0, ∫_[0,T) × [0,1] (iu C_s,i^n - iu _s,i^ )^2μ_n^ ( s, u) ⩽ Kn^α - 1∫_0^1 u^2 n^α - 1 u^α - 3 (1-u)^β - 1 u = O(n^2α - 2) = o(σ_n^2). To estimate the first integral, we proceed similarly as in the proof of Lemma <ref>. 
For fixed u ∈ [0,1] and ∈ E^n, we have ∫_[0,1]^ ( f_i^n,-(,u,) - iu c_i^ )^2 ⩽ 2 ∫_[0,1]^ ( f_i^n,- (,u,) - p_i^ (u) c_i^ )^2 x + 2 ∫_[0,1]^ ( p_i^ (u) c_i^ - i u c_i^ )^2 . We bound the first integral via the following probabilistic interpretation. Recalling Eq. (<ref>), we have for ∼unif([0,1]^) the following equality in distribution. f_i^n,- (,u,) = 1/n B ( _B ⩾ 2^ + B' _B = 1^ ), where B ∼Binomial (n c_i^, p_i^ (u) ) and B' ∼Ber (1 - (1 - u)^n - i c_i^ n ) independently of B. This is because upon a (u,)-merger, there is a number B of marked blocks of size i. These are removed if either B ⩾ 2, or if B = 1 and there is at least one coloured vertex that is not part of a block of size i, which happens with probability ( 1 - (1 - u)^n - i c_i^ n ). Thus, ∫_[0,1]^ ( f_i^n,- (,u,) - p_i^ (u) c_i^ )^2 = n^-2 ( B _B ⩾ 2^ + B' _B = 1^ - n p_i^ (u) c_i^ )^2 . To evaluate this further, note that ( B _B ⩾ 2^ + B' _B = 1^ - n p_i^ (u) c_i^ )^2 = ( B - n p_i^ (u) c_i^ )^2 - _B = 1^_B' = 0^ [ 1 - 2 n p_i^ (u) c_i^ ]. Inserting this into Eq. (<ref>), we see that n^2 ∫_[0,1]^ ( f_i^n,-(,u,) - iu c_i^ )^2 = B + ( 2 n p_i^ (u) c_i^ - 1 ) B = 1B' = 0 = n c_i^ p_i^ (u) ( 1 - p_i^ (u) ) + ( 2 n p_i^ (u) c_i^ - 1 ) ( n c_i^ p_i^ (u) ( 1 - p_i^ (u) )^nc_i^ - 1 ) ( 1 -u )^n - n c_i^ i = n c_i^ p_i^ (u) ( 1 - (1 - u)^n - i ) - n c_i^ p_i^ (u)^2 + 2 n^2 c_i^2 p_i^ (u)^2 (1 - u)^n - i. We separately multiply each of the terms in the last line with n^α - 3 u^α - 3 (1 - u)^β - 1 and integrate with respect to u. We get, making use of the fact p_i(u)≤ i u n^α - 3 n c_i^∫_0^1 p_i^ (u) ( 1 - (1 - u)^n - i ) u^α - 3 (1 - u)^β - 1 u ⩽ n^α - 2 ( i n ^1 - αΓ(α)/1 - α + O(1) ) = O(n^-1) = o(σ_n^-2), by Lemma <ref>. By Lemma <ref>, we get n^α - 3 n^2 c_i^2 ∫_0^1 p_i^ (u)^2 (1 - u)^n - i u^α - 3 (1-u)^β - 1 u ⩽ n^α - 1 ( i^2 n^-αΓ(α) + O(n^-α - 1) ) = O(n^-1) = o(σ_n^-2). Moreover, the integration of the middle term gives - n^α-3 n ∫_0^1 c_i^ p_i^ (u)^2 u^α-3 (1-u)^β-1 u ≥ -n^α-2 c_i i^2 ∫_0^1 u^α-1 (1-u)^β-1 u = O(n^α-2)=o(σ_n^-2). All of these errors are uniform in , and the proof is thus finished. Our final ingredient for the proof of Theorem <ref> is the convergence of σ_n^ M_t,i^n to M_t,i^. For any T > 0, ε > 0 and i ∈{1, …, d}, sup_t ∈ [0,T] | σ_n^ M_t,i^n - M_t,i | ⩾ε→ 0 as n →∞. Note that Lemma <ref> is to be understood in the sense that we give a construction of in terms of _n such that M_t,i^n and M_t,i are coupled such that (<ref>) holds. Recalling the definition of M_t,i^n in Eq. (<ref>) and applying the Poisson mapping theorem, σ_n^ M_t,i^n = -i ∫_[0,t) × [0,1] (σ_n^ u) _s,i^N_n ( s, u) = -i ∫_[0,t) × [0, σ_n^] u _s,i^N_n^∗ ( s, u), where N_n^∗ ( s, u) is a compensated Poisson point process with intensity n^α - 1 (u/σ_n^)^α - 3 ( 1 - u/σ_n^ )^β - 1_(0,σ_n^)^ (u) s (u/σ_n^) = u^α - 3 (1 - u / σ_n^)^β - 1_(0, σ_n^) (u) s u, while the uncompensated point process N_n^∗ is the image of N_n^ under the map (t,u) ↦ (t, σ_n^ u). We aim to construct an appropriate coupling of the Poisson random measures N_n^∗ and . For this purpose, note that (1 - u/σ_n^)^β - 1⩾ 1 (⩽ 1) whenever β⩾ 1 (β⩽ 1) and let for all n ∈ be N_n^Δ a Poisson point process independent of N_n^∗ with intensity μ_n^Δ ( s, u) u^α - 3 |1 - ( 1 - u/σ_n^ )^β - 1 | _(0,σ_n^)^ (u) s u. We need to alter the approach for the cases β≥ 1 and β <1 slightly. If β≥ 1, then, N_n^∗ + (β - 1) N_n^Δ is a Poisson point process with intensity u^α - 3_(0,σ_n^)^ (u) s u. 
We construct by setting N_n^∗ + (β-1) N_n^Δ + _n', where _n' is an independent PPP with intensity u^α - 3_[σ_n^, ∞)^ (u) s u. The case β <1 is analogously, here is implicitly defined via +(1-β) N_n^Δ N_n^∗ + _n'. Applying this to the definition of M_t,i^, we get the following decomposition M_t,i^ = M_t,i^∗ + sgn (β - 1) M_t,i^Δ + M_t,i', which holds for any β≥0, where M_t,i^∗ -i ∫_[0,t) × (0,σ_n^) u _s,i^N_n^∗ ( s, u) = σ_n^ M_t,i^n, M_t,i^Δ -i ∫_[0,t) × (0,σ_n^) u _s,i^N_n^Δ ( s, u), M_t,i' -i ∫_[0,t) × [σ_n^,∞) u _s,i^'_n ( s, u), are independent. Consequently, sup_t ∈ [0,T] | σ_n^ M_t,i^n - M_t,i^ | ⩽sup_t ∈ [0,T] |M_t,i^Δ| + sup_t ∈ [0,T]|M_t,i'|. To bound the first supremum, we further split M_t,i^Δ = M_t,i^Δ,< + M_t,i^Δ,⩾ with M_t,i^Δ,< -i ∫_[0,t) × (0,σ_n^1/2) u _s,i^N_n^Δ ( s, u) and M_t,i^Δ,⩾ -i ∫_[0,t) × [σ_n^1/2 ,σ_n^) u _s,i^N_n^Δ ( s, u). By Doob's inequality and Ito isometry, we have sup_t ∈ [0,T] |M_t,i^Δ,<| ⩾ε / 3 ⩽ 9 ε^-2 (M_T,i^Δ,< )^2⩽ 9 ε^-2 i^2 T ∫_0^σ_n^1/2 u^α - 1 |1 - ( 1 - u/σ_n^ )^β - 1 | u. We use the mean value theorem to bound | 1 - ( 1 - x )^β - 1 | ⩽ K x, for some constant K and all x ∈ [0,1/2], say. Then, for n sufficiently large so that σ_n^1/2⩽σ_n^ / 2, we can bound the right-hand side as follows. 9 ε^-2 i^2 T ∫_0^σ_n^1/2 u^α - 1 |1 - ( 1 - u/σ_n^ )^β - 1 | u ⩽ 9 ε^-2 i^2 T K σ_n^-1∫_0^σ_n^1/2 u^α u = O (σ_n^(α - 1)/2 ), which shows that sup_t ∈ [0,T] |M_t,i^Δ,<| ⩾ε / 3 = O (σ_n^(α - 1)/2 ). Next, we deal with M_t,i^Δ, ⩾. We decompose M_t,i^Δ, ⩾ = -i ∫_[0,t) × [σ_n^1/2 ,σ_n^) u _s,i^ N_n^Δ ( s, u) + i ∫_[0,t) × [σ_n^1/2 ,σ_n^) u _s,i^μ_n^Δ ( s, u). On the event A_n, T{ (_n' ∪ N_n^Δ) ∩ [0,T) × [σ_n^1/2,∞) ≠∅}, the first integral vanishes, and after substituting v = σ_n^ u, the second one is bounded by i T σ_n^α - 1∫_1/2^1 v^α - 2 | 1 - ( 1 - v)^β - 1 | s u . This is of order O(σ_n^α -1) since | 1 - ( 1 - v)^β - 1 | ⩽ 2 + 2 (1-v)^β - 1 for all v ∈ (1/2,1). Thus, we have shown that lim_n →∞sup_t ∈ [0,T] |M_t,i^Δ, ⩾ | ⩾ε / 3 = lim_n →∞A_n,T. A straightforward calculation shows that A_n,T→ 0 as n →∞. Analogously, one can show that lim_n →∞sup_t ∈ [0,T] |M_t,i' | ⩾ε / 3 = lim_n →∞A_n,T = 0. Together with Eq. (<ref>), we finally obtain lim_n →∞sup_t ∈ [0,T] |σ_n^ M_t,i^n -i M_t,i^| ⩾ε ⩽lim_n →∞ ( sup_t ∈ [0,T] |M_t,i'| ⩾ε / 3 + sup_t ∈ [0,T] |M_t,i^Δ,<| ⩾ε / 3 + sup_t ∈ [0,T] |M_t,i^Δ,⩾| ⩾ε / 3 ) = 0. Now, we just have to put the pieces together. But first, we need another a-priori estimate for the size of the fluctuations. For all 1 ⩽ i ⩽ d and ε, T > 0 we have lim_n →∞sup_t ∈ [0,T]σ_n^ (C_t,i^n - _t,i^)^2 ⩾ε = 0. Following the proof of Theorem <ref> after Eq. (<ref>) sup_t ∈ [0,T]σ^1/2_n D_t^n ⩽ K ∫_0^t σ_n^1/2 D_s^n s + max_1 ⩽ i ⩽ dσ_n^1/2 |M̂_t,i^n| + O(n^α - 1 + 1-α/4 - 2α) and thus, for an appropriate ε' > 0, lim_n →∞sup_t ∈ [0,T]σ_n^1/2 D_t^n ⩾ε⩽lim_n →∞ (ε')^-2max_1 ⩽ i ⩽ dσ_n^(M̂_T,i^n)^2 + O(n^α - 1 + 1-α/4 - 2α) ⩽lim_n →∞ O(n^α - 1 + 1-α/2 - α) = 0. Recall the decomposition C_t,i^n = ∫_0^t F_i^n (_s^n) s + M̂_t,i^n and that U_t,i^n = σ_n^ ( C_t,i^n - _t,i^ ). 
Hence, we have U_t,i^n = σ_n^∫_0^t F_i^n (C_s^n) - F_i^ (_s^) s + σ_n^M̂_t,i^n and therefore, recalling that U_t,i = ∫_0^t ⟨∇ F_i^ (_s^), _s^⟩ s - i M_t,i, we see that U_t,i^n - U_t,i^ = σ_n^∫_0^t F_i^ (_s^n) - F_i^ (_s^) s + σ_n^∫_0^t F_i^n (_s^n) - F_i^ (_s^n) s - ∫_0^t ⟨∇ F_i^ (_s^), _s^⟩ s + i M_t,i^ + σ_n^M̂_t,i^n = ∫_0^t ( σ_n^ ( F_i^ (_s^n) - F_i^ (_s^) ) - ⟨∇ F_i^ (_s^), _s^⟩ ) s + σ_n^∫_0^t F_i^n (_s^n) - F_i^ (_s^n) s + iM_t,i^ + σ_n^M̂_t,i^n = ∫_0^t ( σ_n^ ( F_i^ (_s^n) - F_i^ (_s^) ) - ⟨∇ F_i^ (_s^), _s^⟩ ) s + iM_t,i^ + σ_n^M̂_t,i^n + O(n^α - 1 + 1-α/2-α), where we used Lemma <ref> in the last step. Because F_i^ is a polynomial, it is straightforward to see that σ_n^ ( F_i^ (C_s^n) - F_i^ (_s^) ) = ⟨∇ F_i^ (_s^), _s^n ⟩ + 𝐆_s^, where |𝐆_t^| ⩽ K σ_n^_t^n - _t^n _∞^2, for some uniform constant K, where . _∞ denotes the maximum over all d components. Setting _t^n _t^n - _t^_∞, we have (because ↦ DF_i () is bounded) for some (perhaps different) constant K that _t^n ⩽ K ∫_0^t _s^n s + sup_t,r ∈ [0,T] (|M_t,i^ + σ_n^M̂_t,i^n| + K σ_n^C_r^n - _r^n _∞^2 ) + O(n^α - 1 + 1 - α/2 - α). By Gronwall's inequality, we see that for sufficiently large n sup_t ∈ [0,T]_t^n ⩾ε⩽∑_i=1^d sup_t,r ∈ [0,T] (|M_t,i^ + σ_n^M̂_t,i^n| + K σ_n^_r^n - _r^n _∞^2 ) ⩾ε/2 + 2^KT. By Lemmas <ref>, <ref> and <ref>, the right-hand side goes to 0 as n →∞. § SOME CALCULUS The following Lemma from deterministic calculus will come in handy; it is taken from <cit.> Suppose -∞ < a < b < ∞ and f,g : [a,b] → are càdlàg functions such that sup_x ∈ [a,b] | f(x) + ∫_a^x g(u) u | ⩽ c for some c < ∞. If in addition f(x) g(x) > 0 whenever f(x) ≠ 0, then sup_x ∈ [a,b] | ∫_a^x g(u) u | ⩽ c and sup_x ∈ [a,b] |f(x)| ⩽ 2c. I don't think we actually need this, but let's keep it just in case. In the next two lemmas, we provide approximation results for certain integrals that appear throughout the manuscript. For all θ∈, α∈ (0,1) and all k ⩾ 2, ∫_0^1 u^k + α - 3 (1 - u)^n + θ u = n^2 - α - kΓ(k+ α - 2) + O(n^1-α-k). as n →∞. Since k≥ 2, we have k+α-3≥ -1, hence by the definition of the Beta-function ∫_0^1 u^k + α - 3 (1 - u)^n + θ u = Γ(k+α-2) Γ(n+θ+1)/Γ(n+θ+k+α-2). We note that by 6.1.47 in <cit.> it holds Γ(n+θ+1)/Γ(n+θ+k+α-2) (n+θ)^k+α-2 = 1+ O(n^-1), and we arrive at ∫_0^1 u^k + α - 3 (1 - u)^n + θ u = Γ(k+α-2) (n+θ)^2-α-k (1+O(n^-1)) =Γ(k+α-2) n^2-α-k (1+O(n^-1)). For all θ_1^∈ (-1, ∞), θ_2^∈ and α∈ (0,1), ∫_0^1 u^α - 2 (1 - u)^θ_1^ ( 1 - (1 - u)^n + θ_2^ ) u = n^1 - αΓ(α)/1 - α + O(1), as n →∞. We start by decomposing the integral as ∫_0^1/2 u^α - 2 (1 - u)^θ_1^ ( 1 - (1 - u)^n + θ_2^ ) u + ∫_1/2^1 u^α - 2 (1 - u)^θ_1^ ( 1 - (1 - u)^n + θ_2^ ) u . Clearly, since ( 1 - (1 - u)^n + θ_2^ ) ⩽ 1 for n ⩾ -θ_2^, the second integral in (<ref>) can be bounded uniformly in n. To deal with the first integral in (<ref>), we employ the substitution u ↦ un and obtain ∫_0^1/2 u^α - 2 (1 - u)^θ_1^ ( 1 - (1 - u)^n + θ_2^ ) u = n^1 - α∫_0^n/2 u^α - 2 ( 1 - u/n )^θ_1^ ( 1 - (1 - u/n )^n + θ_2^ ) u. We split the integral on the right-hand side into three parts ∫_0^n/2 u^α - 2 ( 1 - u/n )^θ_1^ ( 1 - (1 - u/n )^n + θ_2^ ) u = ∫_0^n/2 u^α - 2 ( 1 - u/n )^θ_1^ ( 1 - ^-u) u + ∫_0^n/2 u^α - 2 ( 1 - u/n )^θ_1^ ( ^-u - ( 1 - u/n )^n ) u + ∫_0^n/2 u^α - 2 ( 1 - u/n )^θ_1^ + n ( 1 - (1 - u/n )^θ_2^ ) u. To proceed, note that the mean-value theorem implies that | 1 - (1 - u/n )^θ_2^ | ⩽ K u/n for all u ∈ [0, n/2] and some constant K > 0; here and in the following, K will always denote a constant that may change its value from line to line. 
Thus, the third integral in Eq. (<ref>) can be bounded from above by K n^-1∫_0^n/2 u^α - 1 ( 1- u/n )^n u ⩽ K n^-1∫_0^∞ u^α -1^-u u = O(n^-1), again for some constant K (different from the one above). To bound the second integral, we use Lemma <ref> and see that ∫_0^n/2 u^α - 2 ( 1 - u/n )^θ_1^ ( ^-u - ( 1 - u/n )^n ) u ⩽ Kn^-1∫_0^∞ u^α^-u u = O(n^-1). We further decompose the first integral in Eq. (<ref>) as follows ∫_0^n/2 u^α - 2 ( 1 - u/n )^θ_1^ ( 1 - ^-u) u = ∫_0^∞ u^α - 2 (1- ^-u) u - ∫_n/2^∞ u^α - 2 (1- ^-u) u + ∫_0^n/2 u^α - 2 ( ( 1 - u/n)^θ_1^ - 1 ) ( 1 - ^-u) u. The first integral is precisely Γ(α) / (1-α), as can be seen by an elementary application of integration by parts. Bounding 1 - ^-u in the second integral by 1, we see that it is bounded in absolute value by n^α - 1. For the third integral, we once more apply Eq. (<ref>) to see that ∫_0^n/2 | u^α - 2 ( ( 1 - u/n)^θ_1^ - 1 ) ( 1 - ^-u) | u ⩽ Kn^-1∫_0^n/2 u^α - 1 u = O(n^α - 1). Let n ∈ℕ, then for all u ∈ [0, n/2] it holds | ( 1 - u/n )^n - ^-u | ⩽ 2 n^-1 u^2 e^-u. Note that ( 1 - u/n)^n ≤ e^-u, hence we have 0 ≤ f(u) := e^-u - ( 1 - u/n)^n. Then, for all u ∈ [0,n/2] f'(u) = ( 1 - u/n)^n-1- e^-u = ( 1 - u/n)^n - e^-u( 1- u/n) /( 1 - u/n) ≤ 2 [ e^-u - e^-u( 1- u/n) ] = 2 e^-uu/n. Finally, noting that f(0)=0, by the mean value theorem f(u) ≤ u sup_0 ≤ v ≤ u f'(v) ≤ 2 e^-u u^2 n^-1, which finishes the proof. § ACKNOWLEDGEMENTS Frederic Alberti was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) — Project-ID 519713930. Fernando Cordero was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) — Project-ID 317210226 — SFB 1283. For all n ∈, max_u ∈ [0,n/2] | ( 1 - u/n )^n - ^-u | ⩽ 4 n^-1 u^2 ^-u. Fix u ∈ (0,1) and define g(x) (1 - xu)^1/x. Clearly, g(x) is holomorphic on B_1/2 (0) ∖{ 0 } with a removable singularity at 0 and g(0) = ^-u. By the mean value theorem, we have the estimate | ^-u - (1 - u/n )^n | ⩽ n^-1sup_x ∈ (0,1/n) |g'(x)|. We compute g'(x) = - (1 - xu)^1/x ( u/x(1 - xu) + log(1 - xu)/u^2 ) = - g(x) ( u/x(1 - xu) + log(1 - xu)/x^2 ). Since g(x) ⩽^-u for all x ∈ [0,1/n], it suffices to consider the term in brackets. We have u/x(1 - xu) + log(1 - xu)/x^2 = ∑_k = 1^∞ u^k x^k - 2 - ∑_k = 1^∞1/k u^k x^k-2 = ∑_k = 2^∞ ( 1 - 1/k ) u^k x^k-2, which is positive and bounded from above by 2 u^2 ∑_k=0^∞ u^k x^k = 2u^2/1 - ux⩽ 4u^2, where the last inequality is due to ux ⩽ (n/2) (1/n) ⩽ 1/2. alpha
http://arxiv.org/abs/2406.09392v1
Entanglement dynamics and eigenstate correlations in strongly disordered quantum many-body systems
Bikram Pain and Sthitadhi Roy
cond-mat.dis-nn, cond-mat.quant-gas, cond-mat.stat-mech, quant-ph
bikram.pain@icts.res.in International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bengaluru 560089, India sthitadhi.roy@icts.res.in International Centre for Theoretical Sciences, Tata Institute of Fundamental Research, Bengaluru 560089, India § ABSTRACT The many-body localised phase of quantum systems is an unusual dynamical phase wherein the system fails to thermalise and yet, entanglement grows unboundedly albeit very slowly in time. We present a microscopic theory of this ultraslow growth of entanglement in terms of dynamical eigenstate correlations of strongly disordered, interacting quantum systems in the many-body localised regime. These correlations involve sets of four or more eigenstates and hence, go beyond correlations involving pairs of eigenstates which are usually studied in the context of eigenstate thermalisation or lack thereof. We consider the minimal case, namely the second Rényi entropy of entanglement, of an initial product state as well as that of the time-evolution operator, wherein the correlations involve quartets of four eigenstates. We identify that the dynamics of the entanglement entropy is dominated by the spectral correlations within certain special quartets of eigenstates. We uncover the spatial structure of these special quartets and the ensuing statistics of the spectral correlations amongst the eigenstates therein, which reveals a hierarchy of timescales or equivalently, energyscales. We show that the hierarchy of these timescales along with their non-trivial distributions conspire to produce the logarithmic in time growth of entanglement, characteristic of the many-body localised regime. The underlying spatial structures in the set of special quartets also provides a microscopic understanding of the spacetime picture of the entanglement growth. The theory therefore provides a much richer perspective on entanglement growth in strongly disordered systems compared to the commonly employed phenomenological approach based on the ℓ-bit picture. Entanglement dynamics and eigenstate correlations in strongly disordered quantum many-body systems Sthitadhi Roy June 17, 2024 =================================================================================================== § INTRODUCTION The question of if, and how, out-of-equilibrium quantum many-body systems thermalise as quantum correlations develop dynamically between distant degrees of freedom is one of the most fundamental questions of interest in condensed matter and statistical physics <cit.>. This question is intimately connected to the entanglement structure and its dynamics <cit.> as it is one of the most fundamental manifestations of how quantum correlations and information propagates through the system <cit.>. Developing a microscopic theory for the dynamics of entanglement in quantum many-body systems is therefore naturally important. Given a Hamiltonian (or the generator of time translation in general), the joint distribution of its eigenvalues and eigenstates, in principle, contains all the information about the dynamics of the system. However, with the Hilbert-space dimension growing exponentially with the system size, it is obvious that the amount of information in the joint distribution is too large for it to be feasible to develop a theory in terms of that. A key theoretical challenge then is to distil out the minimal correlations between the eigenvalues and eigenstates that suffice to develop a theory for the dynamics of local observables and entanglement. 
As far as the former is concerned, the eigenstate thermalisation hypothesis (ETH) <cit.> provides a statistical description of the matrix elements of local operators in the eigenbasis and sheds light on their dynamics. An important understanding from the ETH is that local observables do eventually thermalise in systems which satisfy the ETH <cit.>. On the other hand, systems which violate the ETH, such as strongly disordered systems in the many-body localised regime (MBL) <cit.> fail to thermalise and break ergodicity. While the ETH or its violation is central to our understanding of dynamics of local observables, it has been recognised that the picture is insufficient. It has been established that there exist non-trivial higher-point correlations between the eigenvalues and eigenstates which fall outside the purview of the ETH, and more importantly, which encode the dynamics of information scrambling and entanglement growth <cit.>. In fact, for locally interacting systems, the question can be turned around and posed as, what does the presence of fundamental bounds on how fast quantum information can propagate <cit.> imply for higher-point, dynamical eigenstate correlations. However, much of the literature on this is focused on ergodic or quantum chaotic systems and hitherto, there is no work addressing such questions for non-ergodic systems. This is the central motivation of this work, namely, to understand the entanglement dynamics in strongly disordered quantum systems, in the MBL regime, through the lens of eigenstate correlations. Within the rich dynamical phase diagram of disordered, interacting quantum systems, the MBL phase constitutes a rather interesting but unusual example of a robustly non-ergodic phase which violates the ETH <cit.> and yet shows unbounded growth of entanglement <cit.>, albeit logarithmically slowly in time. This behaviour sets the MBL phase apart not only from the ergodic phase, but also from a non-interacting Anderson localised phase <cit.> wherein there is no entanglement growth. An early and simplistic understanding of this logarithmic growth of entanglement in the MBL phase was via the phenomenological, so-called ℓ-bit picture <cit.>. The picture proposes that there exists an extensive number of (quasi)local integrals of motion, the ℓ-bits, which are weakly dressed versions of the trivially localised integrals of motion at infinite disorder. Interactions between these ℓ-bits decay exponentially in space such that degrees of freedom separated by a certain distance get entangled on timescales which are exponentially large in the distance, which eventually leads to the logarithmic growth of entanglement <cit.>. While the ℓ-bit picture has been lead to several important insights about the MBL phase, their explicit constructions have remained elusive despite several noteworthy efforts <cit.>. As a result, a microscopic understanding of the distribution of the ℓ-bits' localisation lengths, which would be an important ingredient to any theory, has also eluded us. More recently, it has been realised that many-body resonances between the ℓ-bit configurations abound the spectra of MBL systems <cit.> and hence the phenomenological ℓ-bit picture cannot be entirely complete. These difficulties with the ℓ-bit picture therefore underline the importance of a microscopic theory of the logarithmically slow entanglement growth in the MBL phases, without alluding to any phenomenological picture. 
In this work, we approach this question from the point of view of eigenstate correlations. The precise questions raised therefore are what are the minimal eigenstate correlations that encode the dynamics of entanglement, what are their statistical properties in MBL systems, and how do they lead to the ultraslow growth of entanglement with time. A detailed understanding of these questions constitutes the main result of this work. §.§ Overview of the main results We start with a brief overview of the main results of the work. As a measure of entanglement in the system, we consider the (q^ th-)Rényi entropy of entanglement between a subsystem A and its complement B, defined as S^AB_q(t) = -1/q-1lnTr[ρ_A^q(t)] , where ρ_A(t) is the reduced density matrix of A at time t. In particular, we consider the simplest case of the second Rényi entropy, S^AB_q=2(t), or equivalently (t) = Trρ_A^2(t) , where (t) is defined as the purity of ρ_A(t). Throughout the work we consider one-dimensional systems where A (B) is the left (right) half of the system. We identify that averaging over random product states between A and B as initial states allows for the time-dependent subsystem purity to be expressed in terms of sums of eigenstate and spectral correlations involving four eigenstates. In fact, with this averaging the time-dependent subsystem purity is identical to the operator purity of the time-evolution operator. As such, our theory describes on equal footing the dynamics of entanglement of states starting from typical product states and the operator entanglement of the time-evolution operator. These correlations between quartets of eigenstates form the building blocks of our theory. We show that resolving these correlations in frequency (ω) where the frequency is now a combination of the four eigenvalues, directly encodes the dynamics of the subsystem purity. In particular, we find that the frequency dependence of these correlations is a power-law which in turn implies a power-law decay in time of (t) and consequently, the logarithmic growth in time of S^AB_2(t). Note that since these correlations involve four eigenstates, they manifestly go beyond the question of the ETH or its violation. Although in our case, these correlations emerge naturally out the dynamics of subsystem purity, their closely related cousins have been studied extensively for ergodic systems in the context of operator entanglement entropy of the time-evolution operator <cit.>. Having distilled out the minimal correlations that encode the dynamics of entanglement we next turn to the question of the microscopic origins of the power law in time of (t) or equivalently, the power law in ω of its Fourier transform, (ω). Insights into the question are obtained by studying the anatomy of the four-point eigenstate correlations in detail. For a system with total Hilbert-space dimension , there are obviously O(^4) possible quartets of eigenstates. Quite remarkably, we find that only O(^2) of those quartets carry an O(1) value of the relevant eigenstate correlation whereas for the rest of them, the correlations are vanishingly small. The key point is here that, for the latter, the correlations are so small that even their overwhelmingly large majority does not lead to a contribution comparable to the former. As such, we conclude that the dynamics of purity is dominated entirely by the O(^2) set of `special dominant' quartets of eigenstates for which the eigenstate correlation is O(1). 
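As a point of reference for the quantities discussed above, the subsystem purity and the second Rényi entropy of a pure state can be evaluated directly from the state vector. The following is a minimal numerical sketch, assuming a chain of spins-1/2 cut into a left and a right half; the function name and the numpy-based implementation are illustrative assumptions rather than the code used for the figures.

import numpy as np

def renyi2_entanglement(psi, dim_A, dim_B):
    # Reshape the pure state into a dim_A x dim_B matrix M, so that
    # rho_A = M M^dagger. The eigenvalues of rho_A are the squared singular
    # values s_k of M, hence Tr[rho_A^2] = sum_k s_k^4.
    M = psi.reshape(dim_A, dim_B)
    s = np.linalg.svd(M, compute_uv=False)
    purity = np.sum(s**4)              # Tr[rho_A^2]
    return -np.log(purity), purity     # (S_2^{AB}, purity)

For a chain of L spins with a half cut, dim_A = dim_B = 2**(L//2); the reshape assumes the subsystem-A configuration is the slower index of the state vector.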
In fact, we find that once restricted to these special quartets, the spectral correlations within them are sufficient to recover the aforementioned power-laws in time and ω. The question then reduces to understanding the structure of these special quartets and the spectral correlations within them. We discover that the eigenstates forming these special quartets have a very specific structure in real as well as in Hilbert space. Note that, the MBL eigenstates have area-law entanglement <cit.> notwithstanding the multifractality and rare resonances in Hilbert space <cit.>. It is therefore a reasonable starting point to consider that a typical MBL eigenstate has a well-defined localisation centre in Hilbert space. This is nothing but the infinite disorder eigenstate to which the original eigenstate is connected via a finite-depth local unitary operator <cit.> and which is manifestly a product state. We find that the localisation centres of the MBL eigenstates in the special quartets have a specific structure – the four states are made up of combinations of two different states in A and two different in B. As such, for any eigenstate in such a quartet, there are two others which differ from the first one only in one subsystem whereas the remaining fourth one differs from the first one in both the subsystems. This is shown schematically in Fig. <ref>(a). Denoting Hilbert space dimension of subsystem A (B) as N_ H_A(B), the number of choosing two different states in A (B) is O(N_ℋ_A(B)^2) such that total number of such quartets is O(N_ℋ_A^2N_ℋ_B^2) ∼ O(^2). This explains the number of such special quartets. This real-spatial structure of the special dominant quartets provides us with an organising principle for grouping them. Since the interactions in the system are short-ranged, the entanglement can spread only locally. This suggests grouping the quartets in terms of the distance, r, of the nearest sites from the bipartition (between A and B) that differ amongst the states in the quartets. Via numerical exact diagonalisation, we obtain the frequencies associated these special dominant quartets, which as mentioned earlier are sums and differences of the eigenvalues of the four states involved in a quartet. Interestingly, we find that the distributions of the frequencies has a distribution which exhibits a scaling form – the distributions for different r can be collapsed onto a common curve when the frequencies are scaled with a characteristic frequency ω_∗(r) which in turn decays exponentially with r. We show that the interplay of this hierarchy in the energyscales ω_∗(r) and the number of such special dominant quartets with r leads to the emergence of the power-law in ω decay of (ω) which in turn implies the logarithmic growth of the second Rényi entropy of entanglement in time. Resolving the dynamics in r and the associated energyscales also provides a window into the spacetime picture of the entanglement growth. In particular, from the distribution of the frequencies for the quartets with a given r, we find that the entanglement growth at time t is dominated by the special quartets with r(t)∼ln t. This leads to the notion of a logarithmically spreading `entanglement wavefront' as schematically shown in Fig. <ref>(b). Since the degrees of freedom within the two wavefronts are strongly entangled, it again implies the logarithmic growth of bipartite entanglement entropy. 
We finally discuss how the eigenstate and spectral correlations also lead to the volume-law saturation of the entanglement entropy in the MBL phase and an area-law in the Anderson localised case. §.§ Organisation of the paper The rest of the paper is organised as follows. In Sec. <ref> we lay out the general framework and show how the dynamics of purity is encoded in the four-point eigenstate and spectral correlations. In particular in Sec. <ref> we show how random product states as initial states can be averaged over and in Sec. <ref> we discuss how to separate the dynamical and infinite-time components of the purity. The connection between eigenstate correlations and the operator entanglement of the time-evolution operator is described in Sec. <ref>. In Sec. <ref>, we present some numerical results for a Floquet, disordered spin-1/2 chain which serves as the test bed for our theory. Section <ref> consists of a detailed analysis of the eigenstate correlations that contribute to the dynamics of purity and forms the backbone of the paper. In Sec. <ref> we identify the special quartets which dominate the dynamics of purity and their structure, both in real and Fock space. This structure leads to a hierarchy of frequency and timescales which we is discussed in Sec. <ref>, and from which the logarithmic growth of entanglement emerges automatically as discussed in Sec. <ref>. The spacetime picture of entanglement, as emerges from the hierarchy of frequency and timescales, constitutes Sec. <ref>. In Sec. <ref>, we discuss how our work is related to and goes beyond the ℓ-bit picture. We briefly discuss the infinite-time saturation of the purity in Sec. <ref> before closing with a summary and outlook in Sec. <ref>. § DYNAMICS OF PURITY AND EIGENSTATE CORRELATIONS In this section, we lay out the basic framework and show the how the dynamics of purity is encoded in dynamical eigenstate correlations. In particular, we show that averaging over initial states which are random product states between the subsystems, leads to an expression for the purity purely in terms of eigenstates correlations which involves sets of four eigesntates. We also discuss the subset of eigenstate correlations which encode the infinite-time purity. It is worth mentioning here that the framework discussed in this section is completely general and is equally valid for an ergodic or an MBL system. §.§ Setting and definitions We start by describing the basic setting and establishing some necessary notation. Consider a quantum system with a bipartition into two subsystems, A, and its complement, B (see Fig. <ref>). For a given state |ψ⟩ of the system, the second Rényi entropy of entanglement between A and B is given by S_2^AB = -ln where the bipartite purity, , is defined as = Tr_A[ρ_A^2]; ρ_A = Tr_B[|ψ⟩⟨ψ|] , where ρ_A is the reduced density matrix of subsystem A obtained by a partially trace of the density matrix of the full system over the degrees of freedom in subsystem B. The Hilbert space of the system is made up of a tensor product of the Hilbert spaces of A and B, H = H_A⊗ H_B. Let us denote a set of orthonormal basis state of H_A as {|i_A⟩} and similarly of H_B as {|i_B⟩} such that any basis state of H is given by |i_A,i_B⟩≡|i_A⟩⊗|i_B⟩. With this notation, the purity in Eq. <ref> can be expressed as = ∑_i_A,i_B, j_A,j_Bψ_i_Ai_Bψ_i_Aj_B^∗ψ_j_Ai_B^∗ψ_j_Aj_B , where ψ_i_Ai_B=⟨ i_A,i_B|ψ⟩. Note that Eq. <ref> is manifestly invariant under arbitrary basis transformations within H_A and H_B. Also note that, Eq. 
<ref> holds for a time-evolving state as well where the eigenstate amplitudes and hence the purity depend explicitly on time. In order to study the dynamics of purity, we consider a periodically driven, or a Floquet system where we denote the time-evolution operator over one period as U_F which is also often referred to as the Floquet unitary. While all the results in this work hold for a system described by a time-independent Hamiltonian, we consider a Floquet system for two reasons. First, in the latter all (quasi)energies are statistically equivalent and the density of states is flat – this removes the necessity to unfold the spectrum while studying dynamical eigenstate correlations. Second, Floquet systems do not allow for mobility edges which serves as a convenience as we prefer to not have results in the MBL regime potentially contamimated by them. We denote by {|α⟩} and {θ_α}, the set of eigenstates and eigen-(quasi)energies of U_F, such that U_F = ∑_αe^-iθ_α|α⟩⟨α| , with α_i_Ai_B = ⟨ i_A,i_B|α⟩. An initial state, |ψ_0⟩ at time t=0, evolves in time with the unitary in Eq. <ref> as |ψ(t)⟩ = U_F^t|ψ_0⟩ such that the time-dependent amplitudes are given by ψ_i_Ai_B(t) = ∑_αe^-i θ_αtα_i_Ai_B⟨α||ψ_0⟩ . Using the above in Eq. <ref>, we obtain an expression for the purity at time t as _ψ_0(t) = ∑_,,,e^-itV_ℐ^_ψ_0 , where = θ_α-θ_β-θ_γ+θ_λ , are the different frequencies composed of sums and differences of the quasienergies that contribute to the dynamics, V_ = ∑_i_A,i_B, j_A,j_Bα_i_Ai_Bβ^∗_i_Aj_Bγ_j_Ai_B^∗λ_j_Aj_B , denotes the corresponding eigenvector correlations involving the four different eigenstates, and ℐ^_ψ_0=⟨|ψ_0⟩⟨ψ_0|β⟩⟨ψ_0|γ⟩⟨|ψ_0⟩ , contains explicitly the information of the initial state. This explicit dependence on the initial state is also reflected in the subscript in _ψ_0(t) in Eq. <ref>. It is important to note that in Eq. <ref> is again invariant under arbitrary basis transformations within subsystems H_A and H_B. As a final piece notation, we introduce (ω) denoting the dynamics of purity in the frequency domain. Formally the Fourier transform of Eq. <ref>, it can be expressed as _ψ_0(ω) = ∑_,,,δ_2π(-ω)V_ℐ^_ψ_0 . where the subscript of 2π in the Dirac-delta function denotes that =ω mod 2π as we will be working with Floquet systems. Note that the each term in the sum in Eq. <ref> or in Eq. <ref> is a correlation involving eight distinct eigenstate amplitudes, four from the eigenstate correlation in Eq. <ref> and four from the initial state dependence in Eq. <ref>. §.§ Purity averaged over initial states While the results for purity in Eq. <ref> and in Eq. <ref> depend explicitly on the initial state, we next show that they can be averaged over a large family on initial state such that they depend only on eigenstate and spectral correlations. In particular, we consider random product states between A and B as initial states |ψ_0⟩ = |ψ_0^A⟩⊗|ψ_0^B⟩ , where |*⟩ψ_0^A is a normalised Haar random state in susbsystem A and similarly in B. The motivation behind this choice of initial states is twofold. First, the growth of entanglement starting from such states is quantitatively the same as the growth of the operator entanglement entropy of U_F^t <cit.>, and hence quantifies entanglement growth in a manner which is not biased due to choices of initial states. The second, and arguably more important in this case, point is that for such initial states, the quantity ℐ^_ψ_0 (see Eq. 
<ref>), encoding the information of the initial state, can be straightforwardly averaged over all ψ_0 of the form in Eq. <ref>. Using the Haar random nature of |*⟩ψ_0^A and |*⟩ψ_0^B, it can be shown that (see Appendix <ref>) 𝔼[I^αβγλ_ψ_0]_ψ_0 = 1/^2[δ_δ_+δ_δ_+ V_^*+V_^*] , where 𝔼[⋯]_ψ_0 denotes the average over intial states and V_is given by Eq. <ref>. Defining the initial-state averaged purity as (ω) ≡𝔼[_ψ_0(ω)]_ψ_0, an expression for it can be obtained using Eq. <ref> in Eq. <ref> as (ω) = 1/^2∑_,(V_+V_)δ(ω)+ 1/^2∑_,,,δ(_-ω)V_(V_^∗+V_^∗) . An obvious but key point to note here is that the average over the initial states has yielded an expression for the dynamics of purity, (<ref>), that depends only on the eigenstate and spectral correlations. §.§ Dynamical and infinite-time purity At this stage, it will be particularly useful to separate the dynamical part of (ω) corresponding to ω≠ 0 from the infinite-time saturation, corresponding to t→∞ or equivalently ω=0 for a finite system. As such, we express (ω)=(ω)+δ(ω). The infinite-time saturation of purity is given by = 1/^2[∑_,(V_ +V_)+ ∑_,,,: =0 V_(V_^∗+V_^∗)] . We will return to a detailed analysis of the above expression in Sec. <ref> and discuss the scaling of with system size and show it leads to a volume-law saturation of the bipartite entanglement entropy. However, much of our focus in this work is on the dynamics of purity which is encoded in (ω), given by (ω) = 1/^2∑_,,,: ≠0δ(_- ω)V_× (V_^∗+V_^∗) , where it can be demonstrated that the second term in the brackets is negligibly small such that, (ω) = 1/^2∑_,,,: ≠0δ(_- ω)|V_|^2 . The expression in Eq. <ref> constitutes a key ingredient to our theory of entanglement growth and its right hand side is a central quantity of analysis, as will become clear in the subsequent sections. While details of the calculation and numerical evidence for ignoring the second term in Eq. <ref> are presented in Appendix <ref>, we argue for it here on general grounds. Note that (ω) satisfies a sum rule, namely ∫ dω (ω) = 1, which is a result of the initial state being a product state between A and B, such that (t=0)=1. At the same time, using the orthonormality of the eigenvectors, it can be shown that ∑_α,β,γ,λ|V_αβγλ|^2 = ^2 ; ∑_α,β,γ,λV_αβγλV_αγβλ^∗= . Interpreting (ω) as a distribution of weighted by the respective |V_αβγλ|^2+V_αβγλV_αγβλ^∗, Eq. <ref> makes it clear that almost the entire weight of the distribution is carried by the first term and the contribution of the second term is negligibly small, O(^-1). It is therefore, completely justfied to ignore the second term, doing precisely which yields Eq. <ref> from Eq. <ref>. §.§ Operator entanglement entropy of time-evolution operator It is also useful to note that the dynamics of bipartite entanglement averaged over the initial states, as in Sec. <ref>, is directly related to the operator entanglement entropy <cit.> of the time-evolution operator U(t) ≡ U_F^t with U_F given in Eq. <ref>. To see this, note that the time-evolution operator, just like any other operator, can be written as a vector in a doubled Hilbert space, of dimension ^2), which is the Hilbert space of all operators defined for the system. The inner product between two operators, X and Y, is defined as (X|Y) = 1/Tr[X^†Y] , where the notation |⋯) denotes an operator written as a vector. 
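Before moving on, the initial-state average described above can also be checked by brute force for small systems: draw the two factors of the product state as Haar-random states, evolve, and average the purity over samples. The sketch below assumes some one-period propagator U is available as a dense matrix; the function names and the Monte Carlo estimator are illustrative assumptions and not the procedure used for the results reported here.

import numpy as np

def haar_random_state(dim, rng):
    # a normalised complex Gaussian vector is Haar-distributed on the unit sphere
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def averaged_purity(U, t, dim_A, dim_B, n_samples=200, rng=None):
    # Monte Carlo estimate of the purity at time t averaged over random
    # product states |psi_0> = |psi_0^A> (x) |psi_0^B>, with U the one-period propagator
    rng = np.random.default_rng() if rng is None else rng
    Ut = np.linalg.matrix_power(U, t)
    acc = 0.0
    for _ in range(n_samples):
        psi0 = np.kron(haar_random_state(dim_A, rng),
                       haar_random_state(dim_B, rng))
        M = (Ut @ psi0).reshape(dim_A, dim_B)
        s = np.linalg.svd(M, compute_uv=False)
        acc += np.sum(s**4)
    return acc / n_samples

The eigenstate-correlation expressions above sidestep this sampling entirely, which is precisely the advantage of performing the initial-state average analytically.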
The doubled Hilbert space can also be written as a direct product of the doubled Hilbert spaces of the two subsystems and each of them are spanned by the set of operators { O_A} and { O_B} which have support only on their respective subsystems and are mutually orthonormal under the definition in Eq. <ref>. Using this notation, the time-evolution operator can be expressed as |U(t)) = ∑_O_A,O_B u_O_A,O_B(t)|O_A)⊗|O_B) , where u_ O_A, O_B(t)=( O_A⊗ O_B|U(t)). The bipartite operator purity of the time-evolution operator is then P_U(t)^AB = ∑_O_A,O_B, O^'_A,O^'_B u_O_AO_B u^∗_O^'_AO_B u^∗_O_AO^'_B u_O^'_AO^'_B , where the explicit dependence of u_ O_A O_B on t as been suppressed for brevity. While the operator purity in Eq. <ref> as well as the eigenstate correlations, in Eq. <ref> are basis independent, the explicit relation between the two is most conveniently seen as follows. For any set of orthonormal basis states {|i_A⟩} of H_A, one can construct a set of operators A_i_Aj_A=√(N_ H_A)|j_A⟩⟨i_A| which form a set of orthonormal basis for the operator Hilbert space of subsystem A; and similarly for B. This particularly choice of O_A/B leads to u^†_A_i_Aj_AB_i_Bj_B (t)= 1/√()∑_αe^-iθ_αtα_i_Ai_Bα_j_Aj_B^∗ , which when used in Eq. <ref> directly leads to P_U(t)^AB = 1/^2∑_α,β,γ,λ||^2e^-it , which is nothing but the bipartite purity of time-evolving states initialised as product states, Eq. <ref>, upon averaging over such initial states. This is particularly useful because the rest of what follows can be understood as a unified picture for entanglement growth starting from typical product states as well as entanglement growth of the time-evolution operator. § DISORDERED FLOQUET SPIN-1/2 CHAIN AS A MODEL While the entire framework relating the dynamics of purity to eigenstate correlations described in the previous section was completely general, our main focus on strongly disordered systems in the MBL regime. In order to orient ourselves in that direction, and also as a concrete setting for demonstrating our theory, we we employ a disordered, Floquet spin-1/2 chain. The Floquet unitary, U_F, is given by <cit.> U_F = exp[-iτH_X] exp[-iτH_Z] , where H_X =gΓ∑_i=1^L σ^x_i , H_Z = ∑_i=1^L [σ^z_iσ^z_i+1 + (h+g√(1-Γ^2)ϵ_i)σ^z_i] , where {σ_i^μ} is the set of Pauli matrices representing the spins-1/2 and ϵ_i∼𝒩(0,1) are Gaussian random numbers with zero mean and unit standard deviation. Following Ref. <cit.> we take g = 0.9045, h = 0.809, and τ = 0.8. For these parameters, there is a putative many-body localisation transition at Γ_c≈ 0.3 with the model in an ergodic phase for Γ>Γ_c and in a MBL regime for Γ<Γ_c. It is the latter that we are interested in and hence, all our numerical results are for Γ=0.1 and 0.15, two representative values in the MBL regime. In Fig. <ref>, we show the results for (t) and S_2^AB(t) as a function of time t, where the ⋯ denotes the average over disorder realisations. The first thing to note from the data is that (t) is extremely close to exp[-S_2^AB(t)]. That the quenched and annealed averages are so close to each other suggests that the distribution of the purity remains extremely narrow at all times. However the key takeaway from this is that it is indeed sufficient to (t) or (ω), and their averages over disorder realisations to understand the dynamics of S_2^AB(t). The data shown in Fig. <ref> suggests that with increasing system size L, (t) as a function of t on logarithmic axes falls better and better onto a straight line. 
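For completeness, a minimal construction of the Floquet unitary defined above for small L is sketched below. We assume periodic boundary conditions in the Ising term and build the two half-period propagators with dense matrix exponentials; the helper names are illustrative, and for larger chains one would instead exploit the fact that H_Z is diagonal in the sigma^z product basis while exp(-i tau H_X) factorises into single-site rotations.

import numpy as np
from scipy.linalg import expm

def floquet_unitary(L, Gamma, g=0.9045, h=0.809, tau=0.8, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
    sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

    def site_op(op, i):
        # embed a single-site operator at site i of an L-site chain
        out = np.array([[1.]], dtype=complex)
        for j in range(L):
            out = np.kron(out, op if j == i else np.eye(2, dtype=complex))
        return out

    eps = rng.standard_normal(L)                      # epsilon_i ~ N(0, 1)
    HX = g * Gamma * sum(site_op(sx, i) for i in range(L))
    HZ = sum(site_op(sz, i) @ site_op(sz, (i + 1) % L) for i in range(L)) \
       + sum((h + g * np.sqrt(1. - Gamma**2) * eps[i]) * site_op(sz, i)
             for i in range(L))
    return expm(-1j * tau * HX) @ expm(-1j * tau * HZ)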
As such the data is best described by a power-law of the form (t)∼t^-a , or equivalently, S^AB_2(t)∼ aln t with 0<a<1. It is also consistent with expectation that the value of a is smaller for Γ=0.1 compared to that for Γ = 0.15, as the purity is expected to decay slower in the former case by virtue of it being deeper in the MBL regime. Turning towards numerical results in the frequency domain, Fig. <ref> shows ⟨(ω)⟩ evaluated explicitly [Evaluating and for all the O(^4) quartets of eigenstates for L>8 is computationally unfeasible. We therefore randomly choose 10^7 quartets of eigenstates for each disorder realisation and average the results over them as well as over around 10^4 disorder realisations. We however checked that for L=8, the results obtained this way match the ones obtained by considering all the quartets.] using Eq. <ref>. The data is again best described by a power-law decay in ω, *(ω)∼ω^-b , particularly for ω≪ 1 corresponding to long-time dynamics. It is useful to point out that b≈ 1-a, although it is expected on general grounds as (ω) is essentially the Fourier transform of (t). Concomitantly, b increases on moving deeper into the MBL phase. This is an interesting point of distinction between the higher-point correlations of the form in Eq. <ref> and correlations involving only two eigenstates. Albeit basis dependent, the latter also exhibits a power-law dependence in ω with the corresponding exponent becoming smaller on moving deeper into the MBL phase <cit.>. The fact that this can be explained via rare resonances between pairs of eigenstates <cit.> and is in complete contrast to the behaviour of the exponent b in Eq. <ref> suggests that the underlying physics at the heart of (ω) is different from and goes beyond pairwise resonances between two eigenstates. Understanding microscopic origins of the power-law in ω behaviour of (ω), and possible subleading corrections will be the central focus of the rest of the paper. However, the subleading corrections to the behaviour in Eq. <ref> and in Eq. <ref>, if present, are unlikely to be revealed in the data shown in Fig. <ref> and Fig. <ref> due to limited system sizes and timescales accessible in numerical calculations. § ANATOMY OF EIGENSTATE CORRELATIONS The key takeaway from the general framework laid out in Sec. <ref> was the explicit connection between the dynamics of purity and the eigenstate correlations, as embodied in, for example Eq. <ref>. It is therefore clear that to develop a theoretical understanding of the dynamical behaviour, as exemplified in Sec. <ref>, a detailed understanding of the anatomy of the eigenstate correlations, ||^2, and the associated spectral correlations is important. This constitutes the topic of this section. §.§ Special quartets which dominate dynamics of purity To get a basic idea of the eigenstate correlations, ||^2, we begin by studying their distributions. Since we are interested in the dynamics of purity, as encoded in (ω) , we focus on the distribution ||^2 over quartets of eigenstates for which ≠ 0 (see Eq. <ref>). Denoted by P_|V|^2, the distribution is defined as P_|V|^2(v)= 1/N_Q∑_,,,: ≠0δ(|V_|^2-v) , where the normalisation, N_Q=^4 -2^2+≈^4, is simply the number of quartets that satisfy ≠ 0. The results are shown in Fig. <ref> for the same two values of Γ as in Fig. <ref> and Fig. <ref>. The important feature that emerges out of the results is that for an overwhelmingly large fraction, in fact close to unity, of the quartets of eigenstates, ||^2 is vanishingly small. 
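The correlation entering the distribution defined above can be evaluated efficiently by reshaping each eigenvector into a dim_A x dim_B matrix, in which case the fourfold sum over product-basis indices reduces to two matrix products. A sketch of this, together with the random sampling of quartets mentioned in the footnote above, is given below; the function names are illustrative, and drawing four distinct eigenstate indices is used as a proxy for a nonzero quartet frequency since the quasienergies are generically nondegenerate.

import numpy as np

def V_quartet(Ma, Mb, Mc, Md):
    # V_{alpha beta gamma lambda} with each eigenstate reshaped to dim_A x dim_B:
    # sum over (i_A, j_A) of (Ma Mc^dag)_{i_A j_A} * (Mb^* Md^T)_{i_A j_A}
    return np.sum((Ma @ Mc.conj().T) * (Mb.conj() @ Md.T))

def sample_V2(evecs, dim_A, dim_B, n_samples=100_000, rng=None):
    # |V|^2 over randomly chosen quartets of four distinct eigenstates
    rng = np.random.default_rng() if rng is None else rng
    N = evecs.shape[1]
    mats = [evecs[:, n].reshape(dim_A, dim_B) for n in range(N)]
    vals = np.empty(n_samples)
    for k in range(n_samples):
        a, b, c, d = rng.choice(N, size=4, replace=False)
        vals[k] = np.abs(V_quartet(mats[a], mats[b], mats[c], mats[d]))**2
    return vals   # a histogram of these values estimates the distribution P_{|V|^2}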
This is reflected in the large peak in P_|V|^2(v) near v≈ 0^+, and that the height of the peak is seemingly converged with system size (top panels). This is, in fact, better understood from the insets where the distributions are shown on logarithmic scales such that regime of small values ||^2≪ 1 is magnified. On the other hand, the fact that P_|V|^2(v) for v∼ O(1) collapses for different L when scaled by ^-2 (bottom panels). This suggests that for a number of quartets which scales as ^2 (or equivalently whose fraction scales as ^-2), ||^2 ∼ O(1). This observation raises two questions, (i) what is the distinguishing feature of these special quartets of states with ||^2 ∼ O(1), and (ii) do these special quartets dominate the dynamics of purity such that it is sufficient to consider only such quartets in Eq. <ref>. We start by investigating the first question. To understand the structure of these special quartets of eigenstates, we study their amplitudes in Fock space. Note that even though the MBL eigenstates exhibit (multi)fractal statistics in any basis which is a tensor product of local (in real space) bases, they can typically be associated to a localisation centre in an appropriate such basis <cit.>. For the model in Eq. <ref>, the σ^z-product state basis is the appropriate basis as the disordered fields in H_Z couple to the σ^z_i operators. The localisation centres are then nothing but the σ^z-product states to which the MBL eigenstates are analytically connected to via finite-depth local unitary circuit <cit.>. We denote the localisation centre of an eigenstate, say |α⟩, by (i_A^α,i_B^α). Using this notation, we find that for any typical quartet of eigenstates with ||^2∼ O(1) and ≠0, the localisation centres satisfy the condition i_A^α= i_A^γ, i_A^β= i_A^λ  with i_A^α≠i_A^β , i_B^α= i_B^β, i_B^γ= i_B^λ  with i_B^α≠i_B^γ , or the combination equivalent to a permutation of the indices in Eq. <ref> with β↔γ. The condition in Eq. <ref> implies that the localisation centres of |α⟩ and |γ⟩ correspond to the same σ^z-configuration in subsystem A and similarly for |β⟩ and |λ⟩; however, the two localisation centres are necessarily different. On the other hand, it is |α⟩ and |β⟩ whose localisation centres correspond to the same σ^z-configuration in subsystem B and similarly for |γ⟩ and |λ⟩, with the two localisation centres again being different. Evidence for this structure provided in Fig. <ref>. In the left two columns, we show the eigenstate intensities, |α_i_Ai_B|^2, (and similarly for the other three eigenstates) as colourmap in the (i_A,i_B) plane. The localisation centres, also marked by the red dashed lines, are indicated by the points of highest intensity in each of the panels. Note that, it is clear from the positions of the localisation centres that they do indeed satisfy Eq. <ref>. As a matter of notation, hereafter we will denote by α,β,γ,λ∈□ any such special quartet of eigenstates. This specific structure of the eigenstates also manifest themselves in the real-space profile of the σ^z-expectation values, σ^z_i_α≡α|σ^z_i|α. A characteristic signature of the MBL regime is that σ^z-expectation values are close to ± 1 and not exponentially small in L as they would be in an ergodic phase <cit.>. The condition in Eq. <ref> would suggest that       σ^z_i_α= σ^z_i_γ ,  σ^z_i_β= σ^z_i_λ  ∀i∈A ,       σ^z_i_α= σ^z_i_β ,  σ^z_i_γ= σ^z_i_λ  ∀i∈B , along with       ∃ i∈A : σ^z_i_α≠σ^z_i_β ,       ∃ i∈B : σ^z_i_α≠σ^z_i_γ . The right column in Fig. <ref> shows evidence for Eq. <ref>. 
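The localisation centres invoked above can be extracted numerically as the sigma^z product state carrying the largest intensity in an eigenvector, and the defining condition on the special quartets can then be tested directly. The sketch below assumes the eigenvectors are written in the sigma^z product basis with the subsystem-A configuration as the slower index; the function names are illustrative.

import numpy as np

def localisation_centre(vec, dim_A, dim_B):
    # the product-basis state with the largest weight |amplitude|^2
    idx = int(np.argmax(np.abs(vec)**2))
    return idx // dim_B, idx % dim_B          # (i_A, i_B)

def is_special_quartet(v_alpha, v_beta, v_gamma, v_lambda, dim_A, dim_B):
    # alpha and gamma share the A-configuration, beta and lambda share it,
    # and the two A-configurations differ; alpha and beta share the
    # B-configuration, gamma and lambda share it, and the two B-configurations differ
    (aA, aB), (bA, bB), (cA, cB), (dA, dB) = [
        localisation_centre(v, dim_A, dim_B)
        for v in (v_alpha, v_beta, v_gamma, v_lambda)]
    return (aA == cA and bA == dA and aA != bA and
            aB == bB and cB == dB and aB != cB)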
The picture that therefore emerges for the structure of quartets of eigenstates with ||^2∼ O(1) is as follows. The four eigenstates in the quartet are such that there localisation centres are made of two distinct σ^z-configurations in subsystem A and two in B. In other words, the four distinct combinations of these states form the localisation centres of the four eigenstates. In fact, this picture straightforwardly explains the number of quartets with ||^2∼ O(1). For a given |α⟩, the number of states |β⟩ which has the same localisation centre in B but different in A is N_ H_A-1. Similarly, the number of states |γ⟩ which has the same localisation centre as |α⟩ in A but different in B is N_ H_B-1. Fixing |β⟩ and |γ⟩ automatically fixes the state |λ⟩, see Eq. <ref> or Fig. <ref>. Since there are choices of |α⟩ itself, the total number of quartets which satisfy Eq. <ref>, which we denote by N_□ is therefore (N_ H_A-1)(N_ H_B-1). As the subsystems A and B correspond to left and right halves of the entire system, we have N_□ = (√()-1)^2≈^2 or equivalently N_□/ N_Q∼^-2. This immediately explains why a fraction ∼^-2 of the quartets have ||^2∼ O(1) (see Fig. <ref>). It is worth clarifying and reiterating a couple of points here. First, our focus is deep in the MBL phase where associating an eigenstate to a localisation centre is well-defined despite the multifractality of the eigenstate. Second, the theoretical picture that emerges in the following sections does not depend on an eigenstate being sharply localised in the Hilbert space. Instead, the identification should be thought of as a starting point for constructing the theory. We next turn to the second question raised above, namely do the special quartets with ||^2∼ O(1) dominate the dynamics of purity. In order to ascertain that, we define (ω) = 1/^2∑_α,β,γ,λ∈□δ(-ω) , where we restrict the sum in Eq. <ref> to only the special quartets, and also make the approximation that their O(1) values of ||^2 can be set to 1. The results for (ω) are shown in Fig. <ref> where it is clear that they are virtually indistinguishable from the full (ω). This provides compelling evidence for the fact that the dynamics of purity is governed by spectral correlations (again four-point) between eigenstates forming the special quartets such that, ⟨(ω)⟩≈(ω) . Understanding the power-law in ω of (ω) then reduces to understanding these spectral correlations which we discuss next. §.§ Hierarchy and distribution of energyscales In order to understand the spectral correlations in Eq. <ref>, it will be useful to organise the special dominant quartets using some guiding principles. The first of them is a consequence of the locality of the interactions which results in the entanglement between A and B growing from the bipartition. In other words, sites in A and B which are closer to the bipartition will get entangled earlier than those further away. This suggests that the quartets should be classified based on a real-space distance measured from the bipartition. The second guiding principle is provided by the picture that the effective interactions between degrees of freedom in any effective theory must decay with the distance between them, for instance the decay is proposed to be exponential in the ℓ-bit theory <cit.>.. Using this picture as the starting point, we organise the quartets as follows. 
Consider the localisation centres of the four eigenstates in a quartet to be made out of |i_A^∗⟩ and |j_A^∗⟩ in A with i_A^∗≠ j_A^∗, and |i_B^∗⟩ and |j_B^∗⟩ in B with i_B^∗≠ j_B^∗, see Fig. <ref> (left panel) for a visual schematic. Note that , in fact, is a difference of the differences of quasienergies θ_α-θ_β and θ_γ-θ_λ. Perturbatively, to leading order in interactions, is therefore the difference, ≈Δ_i_A^∗j_A^∗^i_B^∗ - Δ_i_A^∗j_A^∗^j_B^∗ + ⋯ , where Δ_i_A^∗ j_A^∗^i_B^∗ is the difference between the energies of |i_A^∗⟩ and |j_A^∗⟩ due to interactions with B in configuration |i_B^∗⟩ and similarly for Δ_i_A^∗ j_A^∗^j_B^∗ [One can equivalently interpret as the difference θ_α-θ_γ and θ_β-θ_λ such that perturbatively it can be expressed as ≈Δ_i_B^∗ j_B^∗^i_A^∗ - Δ_i_B^∗ j_B^∗^j_A^∗ + ⋯.]. This is where we use the second guiding principle above which suggests that the leading term in Eq. <ref> will be dominated by the interaction between the site in A and the site in B which are the nearest to the bipartition but are different between |i_A^∗⟩ and |j_A^∗⟩ and between |i_B^∗⟩ and |j_B^∗⟩ respectively. We denote the distances of these sites from the bipartiton by r_A and r_B respectively, see Fig. <ref> (right panel) for a visual schematic. The upshot of the above is that given a special dominant quartet, α,β,γ,λ∈□, it can be classified according to its r_A and r_B value as defined above from the localisation centres of the eigenstates, we will denote such a quartet as □_r_Ar_B. With the organisational principle at hand, a natural step towards understanding (ω) in Eq. <ref> is the distribution of over all quartets □_r_Ar_B, which we define as (ω) = 1/N_r_Ar_B∑_α,β,γ,λ ∈□_r_Ar_Bδ(-ω) , where N_r_Ar_B = 2^2L-r_A-r_B is the number of special quartets □_r_Ar_B. This counting can be simply understood as follows. For a given i_A^∗, there are 2^L_A-r_A choices of j_A^∗ where the spin configuration is the same for all the r_A sites from the bipartition. Similarly, for a given i_B^∗, the number of states j_B^∗ which has the same configuration on r_B sites from the bipartition is 2^L_B-r_B. As such, with the total number of choices of i_A^∗ (i_B^∗) being 2^L_A (2^L_B), the total number of special quartets □_r_Ar_B is therefore 2^L_A2^L_A-r_A× 2^L_B2^L_B-r_B which yields the expression above. Since is expected to be dominated by the effective interaction between the sites which are at a distance r_A+r_B from each other (see right panel of Fig. <ref>), we expect (ω) in Eq. <ref> to be independent of L, as the distance itself is independent of L and the microscopic interactions are strictly local. Evidence of this is provided by the results in Fig. <ref>. The results therein show that, for a given r_A and r_B, the data for for different L indeed fall on top of each other. The same argument as above also suggests that for a given r_A and r_B should depend solely on r≡ r_A+r_B. That this is indeed the case is shown by the results in Fig. <ref>, where it is clear that the data for (ω) are identical for different pairs of r_A and r_B which correspond to the same r. It will therefore be useful to denote by □_r the special dominant quartets □_r_Ar_B with r_A+r_B=r and define a distribution analogous to Eq. 
<ref> as (ω) = 1/N_r∑_α,β,γ,λ ∈□_rδ(-ω) , where N_r is the number of special quartets □_r with N_r = 2^2L-r× r-1 ;  r≤L/2 L-r+1 ;  r>L/2 , where the second factor is nothing but the number of integer pairs (r_A,r_B) which sum to r and the first term is just the number of special quartets with a specific r_A and r_B which sums to r. We have already established that (ω) in Eq. <ref> independent of system size; however we still need to understand its dependence on r and ω. The data for (ω) as a function of ω for several values of r is shown in Fig. <ref> (top panels). There are two salient features of the data worth noting. For every r, there seems to exists a characteristic scale ω_∗(r), above which (ω) decays with ω as a power-law with the exponent seemingly, and crucially, independent of r. At the same time, below this ω_∗(r), (ω) is independent of ω and has a plateau at the value which appears to grow exponentially with r. This motivates a scaling form (ω) = f(r) F(ω/ω_∗(r)) , with the asymptotic behaviour of the scaling function, F(x) = x^-μ ;  x≫1 1 ;  x→0 . Since (ω) is a normalised distribution, it automatically mandates that f(r) = ω_∗^-1(r). The data shown in the lower panels of Fig. <ref> shows that the scaling form in Eq. <ref> with the scaling function in Eq. <ref> is an excellent description of (ω). The power-law exponent μ>1 and it decreases with increasing Γ. The inset in the lower right panel shows that ω_∗(r) does decay exponentially with r, ω_∗(r)= c exp(-r/ξ) , where ξ is a characteristic lengthscale. The data in the inset in Fig. <ref> shows that ξ increases with Γ. §.§ Emergence of logarithmic growth of entanglement We next show how the scaling form for discussed above leads to the power-law in time decay of the purity, or equivalently, the logarithmic in time growth of the second Rényi entropy. Recall from Eq. <ref> that ⟨(ω)⟩=(ω) which can be written as a sum, ⟨(ω)⟩= 1/^2∑_r=2^L N_r(ω) . The scaling form in Eq. <ref> implies that for a given ω, there exists a characteristic scale r_∗(ω) such that (ω)= ω^-1_∗(r) ;   r<r_∗(ω) ω^-μ [ω_∗(r)]^-1+μ ;   r>r_∗(ω) , where r_∗(ω) = ξ|ln(ω/c)| . Using the two cases in the equation above, the sum in the right-hand side in Eq. <ref> can be split into two as ⟨(ω)⟩= 1/^2[ ∑_r=2^r_∗(ω)N_rω^-1_∗(r)+ ω^-μ∑_r=r_∗(ω)+1^L N_r[ω_∗(r)]^-1+μ] . While the summations in Eq. <ref> can be done exactly, as detailed in Appendix <ref>, here we only sketch the derivation and discuss the physical import of the result. A couple of key assertions that we make here are μ>1 and ξ<1/ln 2; the numerical results in Fig. <ref> indeed show that they are satisfied. However, we will see post facto that violating these bounds leads to results which are unphysical or incompatible with locality. In the first summation in Eq. <ref>, note that the summand, ∼ 2^-re^r/ξ is an exponentially increasing function of r as ξ<1/ln 2, and hence the sum is dominated by r=r_∗(ω). As such, the first sum can be estimated as 2^-r_∗(ω)e^r_∗(ω)/ξr_∗(ω) which using Eq. <ref> is ∼ω^-1+ξln 2ln|ω|. On the other hand, in the second summation, by virtue of μ>1 and ξ<1/ln 2, the summands are an exponentially decreasing function of r, and hence the summation is expected to be dominated by the first term r=r_∗(ω)+1, which again leads to an estimate of the sum of as ∼ω^-1+ξln 2ln|ω|, which is the same as the first summation. 
We therefore conclude that ⟨(ω)⟩∼ω^-1+ξln2|lnω| , which is the precisely the power-law in ω behaviour of (ω) we sought to understand but with an additional logarithmic correction. Note that the result in Eq. <ref> implies that for (ω) to decay with ω, we necessarily require ξln 2<1 which validates one the two assertions made earlier. Also note that, if μ<1-ξln 2, the second sum in Eq. <ref> is dominated by r∼ L at finite ω, which is clearly incompatible with dynamics induced by a local unitary time-evolution implying μ must satisfy μ>1-ξln 2. Since the theory must remain valid in the deep MBL phase (and in fact, get better on going deeper) the inequality must hold as ξ→ 0 implying μ>1 and thus validating the second assertion. In the time domain, the Fourier transform of Eq. <ref> leads to ⟨(t)⟩∼t^-ξln2lnt , which again is nothing but the power-law decay in time of the purity (Eq. <ref>) with a logarithmic correction, as seen numerically in Sec. <ref>. Note that ⟨ S_2^AB(t)⟩≈ -log⟨(t)⟩ (see Fig. <ref>), we conclude that ⟨S_2^AB(t)⟩∼ξln2 lnt - lnlnt t≫1∼ (ξln2) lnt , which is the logarithmic growth of the entanglement entropy characteristic of the MBL regime. From the data in Fig. <ref>, we have ξ=0.32 for Γ=0.1 and ξ=0.44 for Γ=0.15. For these values, the power-law-exponent, ξln 2 in Eq. <ref>, takes on values approximately 0.22 and 0.31 for Γ=0.1 and 0.15 respectively. The numerical results in Fig. <ref> on the other hand estimate the exponent a to be 0.17 and 0.24. We note that while the two estimates are reasonably close, a ⪅ξln 2 which we attribute to the logarithmic correction in Eq. <ref>. §.§ Spacetime picture of entanglement growth An important physical import of the above analysis is that it provides a spacetime picture of the entanglement growth as follows. For a given ω, the sum in Eq. <ref> is dominated by a single r = r_∗(ω). In other words, the dynamics of the purity at frequency ω is encoded in the spectral correlations of the form in Eq. <ref> within the special quartets □_r_∗(ω) with r_∗(ω) given by Eq. <ref>. This suggests that that at any time t, there is a a characteristic lengthscale d(t)≡ r_∗(ω=2π/t)∼ξln t and degrees of freedom within this distance d(t) from the bipartition are entangled at time t. Since the total entanglement carried in such a case will be proportional to d(t), this presents an alternative perspective to infer the logarithmic growth of entanglement entropy. In addition, this could also be understood as the manifestation of the logarithmic light-cone of operator spreading in the MBL regime <cit.>. Understanding the dynamics of purity resolved in real-space distance leads to further insights. The purity in time, using Eq. <ref>, can be expressed as ⟨(t)⟩= 1/^2∑_r=2^L N_r P_r(t) , where P_r(t) is the Fourier transfrom of (ω). The functional form of (ω) in Eq. <ref> suggests that P_r(t) decays like a stretched-exponential in time <cit.>, P_r(t)∼exp[-(t/t_∗(r))^z] ; t_∗(r)∼ω_∗(r)^-1 . Since t_∗(r) grows exponentially with r, the temporal dependence of P_r(t) naturally slows down with increasing r. At the same time, the number of quartets with r also decays exponentially with r, Eq. <ref>. The power-law decay of ⟨(t)⟩ emerges from the collective effect of the stretched-exponential decays in Eq. <ref> with hierarchically growing timescales, t_∗(r)∼ e^r/ξ and the hierarchically decreasing N_r∼ e^-rln 2. This arises simply from the fact that the sum in Eq. <ref> is dominated by a `saddle-point', r_∗(t)∼ξln t. 
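The interplay described above can be made concrete with a toy sum that takes the stretched-exponential form of P_r(t) and the exponential hierarchy of timescales at face value. The parameters below (the lengthscale xi, the prefactor c and the stretching exponent z) are purely illustrative and are not fitted to the data; the sketch only demonstrates that the weighted sum of stretched exponentials decays essentially as a power law in t, so that minus its logarithm grows logarithmically with a slope close to xi ln 2. Only the dynamical part of the purity is included here, not the infinite-time saturation discussed below.

import numpy as np

L, xi, c, z = 60, 0.4, 1.0, 0.7            # illustrative toy parameters
times = np.logspace(0, 10, 200)

def n_r(r, L):
    # N_r / N_H^2: multiplicity of (r_A, r_B) pairs with r_A + r_B = r, times 2^{-r}
    mult = (r - 1) if r <= L // 2 else (L - r + 1)
    return 2.0**(-r) * mult

purity = np.zeros_like(times)
for r in range(2, L + 1):
    omega_star = c * np.exp(-r / xi)                          # omega_*(r) = c exp(-r/xi)
    purity += n_r(r, L) * np.exp(-(times * omega_star)**z)    # P_r(t) with t_*(r) ~ 1/omega_*(r)

S2 = -np.log(purity)   # grows close to (xi ln 2) * ln(t) over many decades in t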
It is also interesting to note that the phenomenology is quite similar to that of hierarchically constrained dynamics for glassy relaxation <cit.>. The space-resolved picture discussed above also provides us with a notion of an `entanglement wavefront'. This can be understood as follows. The partial sum of Eq. <ref>, ∑_r=2^R N_r P_r(t) can be understood to contain the information of the decay of bipartite purity between A and B carried by degrees of freedom within a distance 2R from the bipartition. N_r P_r(t) is therefore the contribution to the decay in purity at time t at distance r from boundary. However, the aforementioned r_∗(t) suggests that there is a well-defined wavefront of entanglement spreading. This is shown schematically in Fig. <ref> where N_r P_r(t) is plotted as a heatmap in spacetime assuming the stretched exponential in Eq. <ref> which also shows the logarithmic spreading in time of the entanglement wavefront. For a given t, the wavefront decays exponentially with r away from r_∗(t) for r>r_∗(t) due to the exponential decay with r of N_r. On the other hand, the hole in the profile for r away from r_∗(t) with r<r_∗(t) is purely due to the fact that degrees of freedom within distance r of the bipartition are strongly entangled and the effective contribution to the purity from them has decayed to extremely small values. §.§ ℓ-bits redux Having understood the microscopic origins of the logarithmic growth of entanglement in the MBL regime, it is useful to discuss how the microscopic theory is connected to the phenomenological ℓ-bit picture, and in fact goes beyond it. The ℓ-bit picture suggests that the effective Hamiltonian encoding the interactions between the local integrals of motion (denoted by τ^z_i) is of the form H_ℓ-bit=∑_i h_iτ^z_i + ∑_j>iJ_ije^-j-i/ζ_ijτ^z_iτ^z_j + ∑_k>j>iJ_ijke^-k-i/ζ_ijkτ^z_iτ^z_jτ^z_k⋯ . A simplifying assumption commonly made within the ℓ-bit picture is that the lengthscale ζ can be taken to be a characteristic localisation length of the MBL system which depends only on the disorder strength. The energyscale which therefore controls the entanglement spreading until distance r from the bipartition is ∼ e^-r/ζ and the corresponding timescale is ∼ e^r/ζ. In other words, at time t, the `entanglement wavefront' reaches a distance d(t) ∼ζln t. Since the entanglement is proportional to this volume, S(t)∼ d(t) one obtains the logarithmic growth of entanglement S(t)∼ζln t <cit.>. While such a picture recovers the logarithmic growth of entanglement in time, it is insufficient to describe the distributions of relevant energy- and timescales or the spacetime profile of entanglement growth. This picture also fails to take into account the several corrections to the energyscales arising from the interactions between more distant degrees of freedom <cit.>. This is fundamentally due to the absence of an understanding of the distribution of the lengthscales {ζ} which in turn is rooted in an absence of an explicit construction of the ℓ-bit Hamiltonian starting from a microscopic theory. This is where our theory goes beyond the conventional ℓ-bit picture. Note that the energyscale that controls the entanglement spreading at distance r from the bipartition is the for α,β,γ,λ∈□_r. Crucially, in our approach, the set of {} and their distributions can be computed explicitly with the latter given simply by (ω) defined in Eq. <ref>. 
Since this is computed exactly, for a given r, it manifestly contains not just the leading contribution due to the interactions between degrees of freedom separated by r but also the smaller corrections due to the interactions between degrees of freedom further away. In fact, since the spacetime picture in Fig. <ref> follows directly from the distribution (ω), the logarithmic spreading of the entanglement wavefront and the stretched-exponential decay of P_r(t) can be attributed to the collective effect of the plethora of energyscales and timescales that emerge from the all the interactions between degrees of freedom at different distances. In this way, the microscopic picture developed in this work presents a much richer picture of the entanglement growth in an MBL phase than what is possible within a ℓ-bit picture. An alternative viewpoint could be that the interactions between the faraway degrees of freedom are irrelevant and the distribution (ω) originates purely from a distribution of the lengthscales {ζ}. However, positing that ∼ e^-r/ζ for α,β,γ,λ∈□_r, the distribution can be transformed to yield a distribution P_ζ for ζ – assuming the form in Eq. <ref> the latter exhibits a power-law tail P_ζ∼ζ^-2. The tail in the distribution rules out the presence of finite, mean localisation length. However, more importantly, this does not preclude the logarithmically slow in time growth of entanglement and hence a characteristic of the MBL regime. § INFINITE-TIME PURITY While the focus of this work has been on the dynamics of purity, as a matter of completeness, we discuss how the eigenstate correlations also encode the infinite-time saturation value of the purity. Specifically, we will discuss the system-size scaling of in Eq. <ref>. For an interacting quantum many-body system of finite-size without any conservation laws, there are no degeneracies in the eigenvalues. Moreover, the probability that =0 [ mod 2π] where all four of , , , and are different is measure zero. Hence the only distinct possibilities for having =0 are α=β==, α=β and = with ≠, and α= and = with ≠. These conditions can be used in Eq. <ref> to give = 1/^2∑_, [(V_+V_)× (1+V_+V_)]-2/^2∑_V_^2 . The terms on the right-hand side of Eq. <ref> satisfy certain sum rules and inequalities. From the orthonormality of the eigenstates, ⟨α|β⟩=δ_αβ, it can be straightforwardly shown that ∑_α,βV_ααββ = ∑_α,βV_αβαβ = ^3/2 . Also note that 0<V_αβαβ,V_<1. The first inequality is trivially manifest as = ∑_i_B,j_B|∑_i_Aα_i_Ai_Bβ^∗_i_Aj_B|^2 and similarly for V_. At the same time since V_ααββ,V_ααββ is sum over products of normalised eigenstate amplitudes, the second inequality also follows. This naturally implies that ∑_α,β|V_ααββ|^2≤∑_α,β V_ααββ⇒∑_α,β|V_ααββ|^2≤^3/2 , and similarly for ∑_α,β|V_αβαβ|^2. The same argument goes through for the quantity ∑_α,βV_V_ implying ∑_α,βV_V_< ^3/2. Finally, note that V_ is nothing but the purity of the eigenstate |α⟩. The area-law entanglement of MBL eigenstates then means V_∼ O(1) which in turn implies ∑_αV_^2 ∼O() . The sum rules and inequalities in Eq. <ref>-<ref>, therefore suggest that it is the first term in Eq. <ref>, ≈^-2∑_α,β(V_+V_) which dominates the scaling of with system size. As such, we have L≫1≈ 2^-1/2⇒S_2^AB(t→∞)≈L/2ln2 . Note that the entanglement entropy saturates to the maximal (thermal at infinite temperature in this case) volume law in this case. This is due to the fact that initial states of the form in Eq. <ref> are fully ergodic in the eigenbasis of U_F. 
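The leading term quoted above can also be evaluated directly from the eigenvectors for small systems and compared against the 2 N^{-1/2} estimate. The sketch below uses the same reshaping of eigenvectors into dim_A x dim_B matrices as before and a brute-force double loop over pairs of eigenstates, which is only intended for modest Hilbert-space dimensions; the function name is illustrative.

import numpy as np

def infinite_time_purity_leading(evecs, dim_A, dim_B):
    # (1/N^2) * sum over (alpha, beta) of (V_{alpha alpha beta beta} + V_{alpha beta alpha beta})
    N = dim_A * dim_B
    mats = [evecs[:, n].reshape(dim_A, dim_B) for n in range(N)]
    rhoA = [M @ M.conj().T for M in mats]            # reduced density matrices of A
    total = 0.0
    for a in range(N):
        for b in range(N):
            V_aabb = np.sum(np.abs(mats[a] @ mats[b].conj().T)**2)
            V_abab = np.real(np.trace(rhoA[a] @ rhoA[b]))
            total += V_aabb + V_abab
    return total / N**2       # expected to approach 2 / sqrt(N) for an MBL system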
More commonly considered initial states, such as σ^z-product states (or computational basis states), are typically multifractal in the basis of MBL eigenstates, which leads to the sub-thermal volume-law saturation of the entanglement entropy <cit.>. §.§ Digression: area-law saturation of entanglement in Anderson localised systems It is important to note here that Eq. <ref> is not valid for a non-interacting Anderson localised system. As we discuss next, it is this issue that lies at the heart of the entanglement entropy saturating to an area-law value in an Anderson localised system compared to volume-law saturation in an MBL system. For an Anderson localised system, let us denote the single-particle eigenstates as {|a⟩} and the corresponding eigenenergies as {ε_a}. The many-body eigenvalues are then simply given by θ_α= ∑_a=1^L η_α^aε_a , where η_α^a=1,0 is the occupancy of the orbital a in the eigenstate |α⟩. In such a case, = ∑_a=1^L (η_α^a-η_β^a-η_γ^a+η_λ^a)ε_a . This allows for a multitude of additional possibilities for =0 compared to the interacting MBL case. In particular, for a quartet of eigenstates (α,β,γ,λ), all of them different, if the orbitals whose occupancies differ between |α⟩ and |β⟩ are the same orbitals whose occupancies differ (with the same sign) between |γ⟩ and |λ⟩, it leads to =0, and in principle they can all contribute to the second term in Eq. <ref>. However, the crucial point is that the single-particle orbitals {|a⟩} are all exponentially localised in space. Hence, for a quartet, if the aforementioned orbitals which differ in occupancy between |α⟩ and |β⟩, and between |γ⟩ and |λ⟩, are sufficiently far away (further than a few single-particle localisation lengths) from the boundary between A and B, it leads to ∼ 0. On the other hand, if all of these single-particle orbitals straddle the boundary between A and B, it leads to ∼ O(1). It is straightforward to see that the number of such quartets is ∝^2 with the proportionality factor being the single-particle localisation length. Using this in the second term in Eq. <ref> leads to an infinite-time purity which is proportional to the single-particle localisation length, and hence leads to an area-law saturation of the entanglement entropy. It is interesting to note that while the many-body eigenstates individually are qualitatively similar in many ways in both the interacting and non-interacting cases, it is the stark difference in the spectral correlations that leads to the volume-law saturation of entanglement entropy in the interacting case, and area-law saturation in the non-interacting case. § SUMMARY AND OUTLOOK Let us summarise the main results of the paper briefly. We developed a concrete relation between the dynamics of bipartite entanglement entropy, of states as well as that of the time-evolution operator, and the minimal eigenstate and spectral correlations that encode them in a generic, interacting quantum system. In particular, we focussed on the second Rényi entropy of entanglement or, equivalently, the subsystem purity. The eigenstate correlations involve quartets of at least four eigenvectors and hence manifestly go beyond those mandated by the ETH or its absence in strongly disordered, ergodicity-broken systems. Using this lens of eigenstate and spectral correlations, we constructed a microscopic theory of the logarithmic growth in time of entanglement entropy in the MBL regime at strong disorder. 
One key aspect of the theory was the identification that, for strongly disordered systems, the dynamics of purity receives contributions from the eigenstate and spectral correlations of only a vanishing fraction of the quartets. In particular, out of the O(^4) possible quartets, a special set of O(^2) quartets dominates the dynamics of the purity. In fact, we discover that the spectral correlations within these special dominant quartets are sufficient to capture the dynamics. We uncover the defining structure of these quartets, both in real and Hilbert space, which provides us with an organising principle for these special quartets in terms of a characteristic lengthscale. Using this, we find that there exists a hierarchy of characteristic frequency- and timescales for these quartets based on their characteristic lengthscales. The interplay of these hierarchical timescales and the number of such special quartets with a particular lengthscale eventually leads to the power-law decay in time of the purity or, equivalently, the logarithmic growth in time of the entanglement entropy. We briefly discussed the implications of our results for the phenomenological ℓ-bit picture and also for the infinite-time saturation of the purity, in particular its scaling with system size, as manifested in the eigenstate correlations. All the results summarised above are rooted in an understanding of the eigenstate and spectral correlations involving four different eigenstates. Questions to follow up in the future therefore arise where such correlations or their generalisations appear naturally. While we focused here on the entanglement entropy of a system with no conservation laws, a similar theory can be constructed for the unresolved issue of the dynamics of the number entropy in MBL systems with a conserved charge <cit.>. In order to better understand the spatiotemporal structure of statistics of eigenstates, analogous to Ref. <cit.>, one can study the correlation (^')^∗ resolved in where ^' is the same eigenstate correlation as defined in Eq. <ref> but with a different bipartition of the system. In a similar spirit, one can introduce matrix elements of local operators and recast the OTOC or temporal fluctuations of local observables <cit.> from the point of view of eigenstate correlations. In the same direction of the spatiotemporal structure of entanglement dynamics, it will be important to understand the origins of the putative stretched-exponential decay in time of the purity resolved in r (see Eq. <ref>). Usually such anomalous decays appear due to the presence of a broad distribution of timescales due to heterogeneity; whether this is connected to the dynamical heterogeneity in the entanglement dynamics, akin to classical glasses <cit.>, remains a question for the future. Relatedly, the spacetime picture also raises the question of whether there exists a surface growth model for entanglement growth <cit.> which describes the dynamics of entanglement in the MBL phase. We thank F. Alet, C. Artiaco, J. H. Bardarson, D. A. Chávez, N. Laflorencie and I. M. Khaymovich for several useful discussions. SR acknowledges support from SERB-DST, Government of India, under Grant No. SRG/2023/000858, from the Department of Atomic Energy, Government of India, under Project No. RTI4001, and from an ICTS-Simons Early Career Faculty Fellowship via a grant from the Simons Foundation (Grant No. 677895, R.G.). § DETAILS OF AVERAGING OVER INITIAL STATES In this appendix we present the details of the averaging over initial states and the derivation of Eq. <ref>. 
The initial states we considered are random product states between A and B as defined in Eq. <ref>, where |ψ_0^A⟩ and |ψ_0^B⟩ are defined as follows, |ψ_0^A⟩ =∑_i_Aϕ_i_A|i_A⟩ , |ψ_0^B⟩ =∑_i_Bϕ_i_B|i_B⟩ , where ϕ_i_A (ϕ_i_B) are independent complex Gaussian random numbers with zero mean and standard deviation N_H_A^-1/2 (N_H_B^-1/2) so as to preserve normalisation of the state. Formally, ϕ_i_A=0 ; ϕ_i_Aϕ_j_A^∗=N_H_A^-1δ_i_Aj_A . Since the random states are Gaussian, it naturally implies 𝔼[ϕ_i_Aϕ_j_A^*ϕ_k_A^*ϕ_l_A]_{ϕ_i_A} =1/N_H_A^2[δ_i_Aj_Aδ_k_Al_A+δ_i_Ak_Aδ_j_Al_A] , and similarly for B. Now, ℐ^_ψ_0 from Eq. <ref> can be written as ℐ^_ψ_0=∑_i_A,j_A,k_A,l_A, i_B,j_B,k_B,l_B α_i_Ai_B^*β_j_Aj_B_k_Ak_B_l_Al_B^*×(ϕ_i_Aϕ_j_A^*ϕ_k_A^*ϕ_l_A)×(ϕ_i_Bϕ_j_B^*ϕ_k_B^*ϕ_l_B) . Since |ψ_0^A⟩ and |ψ_0^B⟩ are independent of each other, the averages over {ϕ_i_A} and {ϕ_i_B} can be done separately, such that 𝔼[ℐ^_ψ_0]_ψ_0 =∑_i_A,j_A,k_A,l_A, i_B,j_B,k_B,l_Bα_i_Ai_B^*β_j_Aj_B_k_Ak_B_l_Al_B^*×𝔼[ϕ_i_Aϕ_j_A^*ϕ_k_A^*ϕ_l_A]_{ϕ_i_A}×𝔼[ϕ_i_Bϕ_j_B^*ϕ_k_B^*ϕ_l_B]_{ϕ_i_B} =1/^2∑_i_A,j_A,k_A,l_A, i_B,j_B,k_B,l_Bα_i_Ai_B^*β_j_Aj_B_k_Ak_B_l_Al_B^*×[δ_i_Aj_Aδ_k_Al_A+δ_i_Ak_Aδ_j_Al_A]×[δ_i_Bj_Bδ_k_Bl_B+δ_i_Bk_Bδ_j_Bl_B] , where in the second line we used Eq. <ref>. The combinations of the Kronecker-delta functions in Eq. <ref> finally lead to 𝔼[ℐ^_ψ_0]_ψ_0=1/^2[δ_δ_+δ_δ_+V_^*+V_^*] , which is the content of Eq. <ref>. § EVIDENCE FOR EXPRESSION FOR The expression for in Eq. <ref> can be split into two terms as (ω) = ^(1)(ω)+^(2)(ω) , where ^(1)(ω) ≡1/^2∑_,,,: ≠0δ(_-ω)|V_|^2 , ^(2)(ω) ≡1/^2∑_,,,: ≠0δ(_-ω)V_V_^∗ , and we had neglected ^(2)(ω). In the main text, we provided a justification for it on general grounds based on its integral over all ω being suppressed in the Hilbert space dimension. For completeness, however, we show numerical evidence in support of the fact that ^(2)(ω)≪^(1)(ω) for all ω. In Fig. <ref> we show the results for ^(1)(ω) and ^(2)(ω) as a function of ω, where it is clear that the latter is several orders of magnitude smaller than the former for all ω. § DERIVATION OF THE Ω-DEPENDENCE OF (Ω) In this appendix, we provide the details of the summation in Eq. <ref> and how it leads to Eq. <ref>. Note that in Eq. <ref> there are two summations, which we denote as Σ_1 and Σ_2. Let us discuss the two of them separately, starting with the former. Note that for any ω that is independent of L, r_∗(ω) is also independent of L, and hence r_∗(ω)≪ L/2. So for all terms in Σ_1, N_r = 2^2L-r(r-1). We therefore have Σ_1 = ∑_r=2^r_∗(ω)c (r-1) e^-r(ln2-1/ξ) , where the summation can be performed exactly to yield Σ_1=c e^2/ξ/(e^1/ξ-2)^2 -c^-ξln2e^2/ξ/(e^1/ξ-2)^2ω^-1+ξln2 +ξc^-ξln2e^1/ξ/(e^1/ξ-2)ω^-1+ξln2|ln(ω/c)| . Since ξln2<1 and we are interested in ω≪ 1, the dominant term above is the last one, such that we have Σ_1≈ξc^-ξln2/1-2e^-1/ξ ω^-1+ξln2|ln(ω/c)| . Turning to the second summation in Eq. <ref>, Σ_2=ω^-μ∑_r=r_∗(ω)+1^LN_r[ω_∗(r)]^μ-1 , it also splits into two terms Σ_2 = Σ_2^(1)+Σ_2^(2) depending on the value of N_r, Σ_2^(1) =c^μ-1ω^-μ∑_r=r_∗(ω)+1^L/2(r-1)e^-r[ln2+(μ-1)/ξ] , Σ^(2)_2 =c^μ-1ω^-μ∑_r=L/2+1^L(L-r+1)e^-r[ln2+(μ-1)/ξ] . Note however that each summand in Eq. <ref> is exponentially small in L as ξln2>0 and μ>1, while the number of terms in the sum is only linear in L. Hence Σ_2^(2) is also exponentially small in L, and can be neglected such that Σ_2L→∞→Σ_2^(1). Each of the terms in the latter sum is exponentially suppressed in system size (as μ>1). The sum Σ^(1)_2 in Eq. 
<ref> can be performed exactly leading to Σ_2^(1)= c^μ-1e^-L/2ξ(μ-1+ξln2)/[2e^(μ-1)/ξ-1]^2[L e^(μ-1)/ξ+L-1]ω^-c +c^-ξln2/[2e^(μ-1)/ξ-1]^2ω^-1+ξln2 +ξc^-ξln2/[2e^(μ-1)/ξ-1]ω^-1+ξln2|ln(ω/c)| . The first term on the right-hand side of the equation above is again exponentially small in L and hence can be neglected. Moreover, since we are interested in ω≪ 1, the third term dominates over the second and we therefore have Σ_2 = ξc^-ξln2/[2e^(μ-1)/ξ-1]ω^-1+ξln2|ln(ω/c)| . Using Eqs. <ref> and <ref>, we finally have ⟨(ω)⟩≈ C ω^-1+ξln2|ln(ω/c)| , which is the result in Eq. <ref>. As a matter of completeness, we note that the prefactor C above is given by C=c^-ξln2ξ[1/1-2e^-1/ξ+1/2e^(μ-1)/ξ-1] , which, reassuringly, is necessarily positive due to ξ<1/ln2 and μ>1.
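As a quick numerical sanity check of the small-ω scaling derived above, the short sketch below evaluates the partial sum Σ_1 directly and compares it with the predicted ω^-1+ξln2|ln(ω/c)| behaviour. It assumes r_∗(ω) = ξ ln(c/ω) (equivalently ω_∗(r) = c e^-r/ξ) and uses illustrative values of ξ and c; these choices are assumptions for the illustration and are not taken from the derivation itself.

```python
# Illustrative check that Sigma_1(omega) ~ omega^(-1 + xi*ln2) * |ln(omega/c)|.
# Assumption: r_*(omega) = xi * ln(c/omega); xi and c are placeholder values.
import numpy as np

xi, c = 1.0, 1.0                       # xi < 1/ln2, as required in the MBL regime
omegas = np.logspace(-12, -6, 40)

def sigma1(omega):
    r_star = int(xi * np.log(c / omega))
    r = np.arange(2, r_star + 1)
    return np.sum(c * (r - 1) * np.exp(-r * (np.log(2) - 1.0 / xi)))

num = np.array([sigma1(w) for w in omegas])
pred = omegas ** (-1.0 + xi * np.log(2)) * np.abs(np.log(omegas / c))

# Up to steps coming from the integer cutoff r_*, the ratio tends to a constant
# as omega -> 0, consistent with the leading behaviour quoted in the text.
print(num / pred)
```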
http://arxiv.org/abs/2406.08430v1
20240612171602
Testing Quantum and Simulated Annealers on the Drone Delivery Packing Problem
[ "Sara Tarquini", "Daniele Dragoni", "Matteo Vandelli", "Francesco Tudisco" ]
math.CO
[ "math.CO", "math.OC" ]
Testing Quantum and Simulated Annealers on the Drone Delivery Packing Problem. Sara Tarquini (Gran Sasso Science Institute, Viale Francesco Crispi 7, L'Aquila, Italy; sara.tarquini@gssi.it), Daniele Dragoni (Quantum Computing Research Laboratory and Digital Infrastructures, Leonardo S.p.A., Via R. Pieragostini 80, Genova, Italy; daniele.dragoni@leonardo.com), Matteo Vandelli (Quantum Computing Research Laboratory, Leonardo S.p.A., Via R. Pieragostini 80, Genova, Italy; matteo.vandelli.ext@leonardo.com), Francesco Tudisco (Gran Sasso Science Institute; School of Mathematics and Maxwell Institute, University of Edinburgh, Peter Guthrie Tait Road, EH9 3FD, Edinburgh, UK; f.tudisco@ed.ac.uk). § ABSTRACT Using drones to perform human-related tasks can play a key role in various fields, such as defense, disaster response, agriculture, healthcare, and many others. The drone delivery packing problem (DDPP) arises in the context of logistics in response to an increasing demand for deliveries along with the necessity of lowering human intervention. The DDPP is usually formulated as a combinatorial optimization problem, aiming to minimize drone usage under specific battery constraints while ensuring time-consistent deliveries with fixed locations and a fixed energy budget. In this work, we propose two alternative formulations of the DDPP as a quadratic unconstrained binary optimization (QUBO) problem, in order to test the performance of classical and quantum annealing (QA) approaches. We perform extensive experiments showing the advantages as well as the limitations of quantum annealers for this optimization problem, as compared to simulated annealing (SA) and classical state-of-the-art commercial tools for global optimization. § INTRODUCTION Combinatorial optimization problems are widely used to model complex decision-making processes involving a large number of binary choices. Due to the combinatorial nature of the problem, the cost of computing global optima can scale exponentially in the number of variables, quickly becoming prohibitive. While conventional approaches can be effective in specific cases, the need for more general techniques, adaptable to a wider range of applications, is also evident. In this sense, a potentially advantageous approach is represented by the use of quantum optimization techniques. Motivated by the recent progress in quantum annealing hardware for solving quadratic unconstrained binary optimization (QUBO) problems, in this work we aim to test the performance of classical and quantum annealing algorithms for the DDPP. Solving the DDPP using quantum annealers involves transforming it into a QUBO problem. When doing so, however, adhering to the hardware constraints of the quantum machine, especially qubit count and connectivity, poses a substantial challenge. In order to formulate the DDPP as a QUBO problem, we first transform the constrained quadratic optimization problem into an integer linear programming problem (ILPP), through the addition of slack variables. Subsequently, the conversion to a QUBO is done by introducing quadratic penalty terms that equal zero for feasible solutions and take a positive value for infeasible solutions. However, if implemented naively, this relatively standard procedure leads to a prohibitive number of variables and, in turn, a prohibitive qubit count, after the embedding process onto the quantum hardware. 
Thus, we propose a relaxed QUBO alternative formulation that equivalently solves DDPP but with significantly fewer slack variables (and thus fewer qubits). In order to solve the resulting QUBO formulations, we employ simulated and quantum annealing samplers, using the ultimate QPU model by D-Wave: Advantage system 4.1. Advantage QPUs can hold inputs that are almost three times larger, on average, than those that could run on previous-generation D-Wave 2000Q QPUs. The last model features 5627 qubits and 15 couplers per qubit, for a total of 35,000 couplers <cit.>. We studied the performance of the two approaches and the quality of the corresponding solutions. Overall, our main contributions are as follows: * We propose two QUBO formulations of the DDPP, one following an established all-purpose approach and another one tailored to the specific DDPP. We show that the latter QUBO formulation requires significantly fewer slack variables than the former. * Using the proposed QUBO formulation, we perform extensive experimental evaluation providing statistics on the performance, concerning problem size and chain strength, and in terms of time-to-solution, memory cost, and solution quality of D-Wave's ultimate QPU as compared to classical (simulated) annealing and global-optimization deterministic baseline approaches. Motivated by the rapidly progressing technological advances realizing bigger and more connected quantum devices, the overarching goal of this work is to identify, using DDPP as a prototype of a complex combinatorial optimization problem, the advantages and limitations of modern quantum annealing hardware in finding reliable solutions and the extent to which quantum annealers represent (or have the potential to represent) a realistic alternative to simulated annealing and deterministic approaches in the context of this type of combinatorial problems. We also emphasize that investigating the scalability of quantum applications using moderate-to-small scale test problems is particularly important in the context of quantum-classical hybrid approaches based on the quantum annealing hardware. Particularly, the quantum component can be used as a specialized sub-processor for improving sub-problems within larger classical algorithms <cit.>. § RELATED WORK A variety of work has been done in the area of drone scheduling, as well as in related optimization problems with similar formulations. Examples, concerned with the fields of transportation, logistics, and supply chain management, include applications of the renowned Travelling Salesperson Problem (TSP), for which decomposition techniques are introduced in e.g. <cit.>. Different versions have been analyzed, both for minimizing emissions <cit.>, and addressing the dynamic version with real-time customer requests <cit.>. Another widely studied problem is the Bin Packing Problem (BPP), where items of different sizes must be allocated into a finite number of bins with a fixed given capacity, so to minimize bin usage. For this scope, a hybrid classical-quantum approach is presented in <cit.>. The parallelism with the DDPP is clear: in the DDPP case, the challenge is efficiently utilizing drones to deliver a set of parcels while considering battery capacities; similarly, in the classic BPP, the aim is to efficiently pack a set of items into bins, while considering limited physical capacities. 
A special application <cit.> is the optimization of spent nuclear fuel (SNF) filling in canisters, so that the maximum heat output does not exceed a limiting value. Another scheduling-related problem is the optimal flight gate assignment for airport management, aiming at minimizing the total transit time of the passengers in an airport, subject to time constraints <cit.>. The starting points for our work are the results of Jana and Mandal in <cit.>, where they prove the NP-hardness of the DDPP and give an ILP representation for which they propose two approximation algorithms. Specifically, they give a greedy approximation algorithm and a colouring-based algorithm on a graph representation of the DDPP. Indeed, they construct, for the given set of delivery time intervals, an interval graph, where vertices represent intervals and are adjacent if the corresponding intervals conflict. In our work, we use their ILP formulation as a starting point to develop two QUBO formulations of the DDPP, amenable to global optimization via sampling for both classical and quantum annealing. The following section is dedicated to mathematically formalizing the DDPP, as a general constrained optimization problem first, then as an integer linear programming problem, and finally as a QUBO problem. The section presents two QUBO formulations: one obtained following a relatively standard variable augmentation approach <cit.> and another one obtained via a tailored relaxation. An analysis is provided showing that the latter formulation improves on the former in terms of the number of variables and qubits required, positively impacting the time-to-solution of annealing optimization samplers. § QUBO FORMULATION OF THE DRONE DELIVERY PACKING PROBLEM In this section, we introduce the drone delivery packing problem as a combinatorial optimization problem and then derive two alternative QUBO formulations. The DDPP seeks an optimal delivery assignment for a set of identical available drones, each with a given battery budget. Their task is to bring to completion a certain set of deliveries requested by customers, with constraints regarding cost and time consistency. More specifically, given the set of deliveries, their delivery time intervals, the energy cost of each delivery, and the (equal) battery budget of the drones, the goal is to find an assignment that minimizes the number of drones used. The assignment must guarantee time and energy consistency, in addition to the completion of all deliveries. Let ℳ = {1,...,m} be the collection of identical drones available to the company in a given depot. Let 𝒩 = {1, ..., N} be the set of deliveries assigned to the company to be completed. The binary variables used within the model are: x_ij = 1 if drone i ∈ℳ delivers to j ∈𝒩, and x_ij = 0 otherwise. Other quantities involved in the formalization of the constraints are: * The battery budget B > 0 for each drone, namely the service duration time of the battery; * The delivery time intervals I_j, representing the time window in which delivery j ∈𝒩 is done; * The cost c_j in terms of battery for completing delivery j ∈𝒩. Then, the DDPP can be formulated as follows <cit.>: min_x_ij∈{0,1}∑_i ∈ℳmax_j∈𝒩 x_ij , s.t. ∑_j ∈𝒩 c_j x_ij≤ B ∀ i ∈ℳ , ∑_i ∈ℳ x_ij = 1 ∀ j ∈𝒩 , x_ij + x_ik≤ 1 ∀ i ∈ℳ, j, k ∈𝒩, I_j ∩ I_k ≠∅ . Notice that the binary quantity max_j∈𝒩 x_ij indicates whether drone i is used. In fact, max_j∈𝒩 x_ij = 1 if drone i has been used, and 0 otherwise. Therefore, ∑_i ∈ℳmax_j∈𝒩 x_ij counts the number of used drones. 
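To make the formulation above concrete, the following sketch checks feasibility of a candidate assignment x ∈ {0,1}^m×N and evaluates the objective, i.e. the number of drones actually used. It is not taken from the paper; the data layout (time intervals as (start, end) pairs) and the helper names are assumptions made for illustration.

```python
# Illustrative reference implementation of the DDPP objective and constraints.
from itertools import combinations

def drones_used(x):
    # objective: number of drones that perform at least one delivery
    return sum(any(row) for row in x)

def is_feasible(x, costs, intervals, B):
    m, N = len(x), len(costs)
    # battery budget: sum_j c_j * x_ij <= B for every drone i
    if any(sum(costs[j] * x[i][j] for j in range(N)) > B for i in range(m)):
        return False
    # every delivery is carried out by exactly one drone
    if any(sum(x[i][j] for i in range(m)) != 1 for j in range(N)):
        return False
    # time consistency: no drone takes two deliveries with overlapping intervals
    for i in range(m):
        for j, k in combinations(range(N), 2):
            if x[i][j] and x[i][k] \
               and intervals[j][0] < intervals[k][1] and intervals[k][0] < intervals[j][1]:
                return False
    return True

# Example: 2 drones, 3 deliveries
x = [[1, 1, 0], [0, 0, 1]]
print(drones_used(x),
      is_feasible(x, costs=[3, 4, 5], intervals=[(8, 9), (10, 11), (9, 10)], B=10))
```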
§.§ Standard ILP form of the maximum function and first QUBO formulation Starting from the combinatorial optimization formulation (<ref>), we are interested in obtaining an equivalent QUBO formulation representing the Hamiltonian to be given to the annealer. This can be done with two relatively standard steps (see e.g. <cit.>): * Transforming all the inequality constraints into equality constraints by adding a number of slack variables, thus reformulating (<ref>) as an integer (binary, to be precise) linear programming (ILP) problem. * From the binary ILP, obtaining the QUBO by transferring all the linear constraints into penalty terms to be added to the ILP objective function (which is quadratic and binary). Note that, for each inequality constraint, the number of slack variables comes from the quantity s = b - ∑_i=1^N A_ix_i ≤ b - ∑_i : A_i ≤ 0 A_i. Therefore, at most ⌈log_2 (b - ∑_i : A_i ≤ 0 A_i ) ⌉ coefficients are needed to represent s, and each inequality constraint requires this number of slack variables for the transition to equality. In order to apply steps 1 and 2 above to the DDPP model (<ref>), we first notice that, while the constraints in <Ref> are linear inequalities, the objective function is written in terms of max_j∈𝒩 x_ij and thus it is not quadratic. In order to formulate the problem using a quadratic binary loss, we introduce m new binary variables y_i and we notice that the condition y_i := max_j∈𝒩 x_ij (for binary x_ij and y_i) corresponds to the pair of conditions x_ij≤ y_i and y_i ≤∑_j∈𝒩 x_ij for all i∈ℳ and all j∈𝒩. By inserting these constraints into (<ref>) we obtain a new constrained quadratic binary problem min_x_ij, y_i ∈{0,1}∑_i ∈ℳ y_i , s.t. ∑_j ∈𝒩 c_j x_ij≤ B ∀ i ∈ℳ , ∑_i ∈ℳ x_ij = 1 ∀ j ∈𝒩 , x_ij + x_ik≤ 1 ∀ i ∈ℳ, j, k ∈𝒩, I_j ∩ I_k ≠∅ , x_ij≤ y_i ∀ i ∈ℳ, ∀ j ∈𝒩 , y_i ≤∑_j∈𝒩 x_ij ∀ i ∈ℳ . Now, we transform the inequality constraints in (<ref>) into quadratic equality constraints. This is done by means of a number of additional slack variables, obtaining the following quadratic penalty terms <cit.>: H_C_1 = ∑_i ∈ℳ( ∑_j ∈𝒩 c_j x_ij + ∑_l=0^⌈log_2 B ⌉ - 1 2^l s_il - B )^2 H_C_2 = ∑_j ∈𝒩( ∑_i ∈ℳ x_ij - 1 )^2 H_C_3 = ∑_i ∈ℳ∑_j,k ∈𝒩 : I_j ∩ I_k ≠∅( x_ij + x_ik + t_i - 1 )^2 H_C_4 = ∑_i ∈ℳ∑_j ∈𝒩( x_ij - y_i + r_ij)^2 H_C_5 = ∑_i ∈ℳ( - ∑_j ∈𝒩 x_ij + y_i + ∑_l=0^⌈log_2 ( |𝒩|) ⌉ -1 2^l p_il)^2 Finally, by moving the binary constraints H_C_i into the objective function as penalty terms we obtain the Hamiltonian H = ∑_i ∈ℳ y_i + ∑_i=1^5 α_i H_C_i where α_i > 0, for i=1,2,3,4,5. This Hamiltonian, to be given to the annealer, is made of six terms, the objective function and the five quadratic Hamiltonian terms H_C_i, and is a function of the original variables x_ij and the additional variables y_i, s_il, t_i, r_ij, and p_il. With this construction, the overall number of variables needed to formulate the DDPP as a QUBO problem is |ℳ| + |ℳ||𝒩| + |ℳ| ⌈log_2 B ⌉ + |ℳ| κ + |ℳ||𝒩| + |ℳ| ⌈log_2 (|𝒩|) ⌉ where κ denotes the number of time-conflicting delivery pairs: κ = |{(j, k) ∈𝒩×𝒩 : j < k, I_j ∩ I_k ≠∅}|. The first two terms come from the original sets of variables {y_i}_i ∈ℳ and {x_ij}_i ∈ℳ, j ∈𝒩, respectively, while the subsequent terms come from H_C_1, H_C_3, H_C_4 and H_C_5. While the obtained QUBO formulation results from the most standard method, we notice that it leads to an overall number of variables that quickly grows very large. 
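As an illustration of how such penalty terms translate into QUBO coefficients, the sketch below assembles H_C_1 and H_C_2 into a dictionary of quadratic coefficients. The variable labels (e.g. ("x", i, j), ("s", i, l)) and the helper function are illustrative choices, not part of the paper, and the remaining penalties H_C_3 to H_C_5 would be added in exactly the same way.

```python
# Sketch: building two penalty terms of the first QUBO formulation as a QUBO dict.
import math
from collections import defaultdict
from itertools import combinations

def add_squared_penalty(Q, terms, const, weight):
    """Add weight * (sum_k a_k * v_k + const)^2 to Q, for binary variables v_k."""
    for v, a in terms:
        Q[(v, v)] += weight * (a * a + 2 * a * const)      # uses v^2 = v for binaries
    for (v1, a1), (v2, a2) in combinations(terms, 2):
        Q[(v1, v2)] += 2 * weight * a1 * a2
    # the constant weight * const^2 shifts the energy but not the minimizer

def ddpp_qubo_standard_part(costs, B, m, alpha1, alpha2):
    N = len(costs)
    n_slack = math.ceil(math.log2(B))                      # slack bits per battery constraint
    Q = defaultdict(float)
    for i in range(m):                                     # H_C1: one term per drone
        terms = [(("x", i, j), float(costs[j])) for j in range(N)]
        terms += [(("s", i, l), float(2 ** l)) for l in range(n_slack)]
        add_squared_penalty(Q, terms, -float(B), alpha1)
    for j in range(N):                                     # H_C2: one term per delivery
        add_squared_penalty(Q, [(("x", i, j), 1.0) for i in range(m)], -1.0, alpha2)
    return dict(Q)
```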
In the next subsection, we present an alternative QUBO model that better leverages the structure of the problem, while significantly reducing the number of variables. §.§ Quadratic proxy of the maximum function and second QUBO formulation In this subsection, we propose a more efficient QUBO model for the DDPP that differs from the one introduced in the previous section. The key idea is to consider a cheaper quadratic objective function that acts as a proxy for the original ∑_i max_j x_ij without requiring additional slack terms, thus directly expressing the optimization problem in terms of the initial x_ij variables. To this end, we notice that, in terms of the assignment matrix [x_ij], seeking to minimize ∑_i max_j x_ij as in the original minimization problem (<ref>) is equivalent to favouring configurations of [x_ij] with as many zero rows as possible. In other words, taking into account the “all deliveries once” condition ∑_i x_ij=1 and ignoring the other constraints, the worst configuration for [x_ij] is one with a permutation-like pattern, while the best configuration is one where all rows in [x_ij] are zero except for one, i.e. all the deliveries are done by only one drone, as illustrated in <Ref>. If σ_i = ∑_j x_ij denotes the row-sum of [x_ij], we then notice that the quadratic function ∑_i (N-σ_i)σ_i combined with the all-deliveries-once constraint favours the latter type of configuration and penalizes the former, as (N-σ_i)σ_i is minimal when either σ_i=0 or σ_i=N. See <Ref> for an illustration. With these considerations in mind, we propose here the following modified QUBO formulation for the DDPP min_x_ij∈{0,1}∑_i ∈ℳ(N- ∑_j∈𝒩x_ij)∑_j∈𝒩x_ij , s.t. ∑_j ∈𝒩 c_j x_ij≤ B ∀ i ∈ℳ , ∑_i ∈ℳ x_ij = 1 ∀ j ∈𝒩 , x_ij + x_ik≤ 1 ∀ i ∈ℳ, j, k ∈𝒩, I_j ∩ I_k ≠∅ , which can be expressed in QUBO format by means of the quadratic unconstrained Hamiltonian H = H_0+α_1 H_C_1+α_2H_C_2+α_3 H_C_3, with H_0 = ∑_i (N- ∑_jx_ij)∑_jx_ij. Note that this new model does not need to consider the variables y_i, nor the corresponding consistency constraints. Thus, it requires only |ℳ||𝒩| + |ℳ| ⌈log_2 B ⌉ + |ℳ| κ variables, as opposed to (<ref>). The advantage of the modified QUBO formulation is further highlighted in <Ref> in terms of the logical variables involved in the formulation. Two curves are shown, quantifying the improvement in the number of logical variables when using <Ref> with respect to <Ref>, on example problems with N∈{4,5,6,7,8}. This also shows how the benefit in variable usage scales with an increasing number of deliveries. In particular, we notice that the variable allocation decreases significantly and that the discrepancy between the two models grows as N increases. The following section moves on to Simulated and Quantum Annealing techniques for tackling the DDPP formulated as <Ref>, while addressing the consequent limitations of state-of-the-art quantum technology. § SIMULATED AND QUANTUM ANNEALING In this section, we start by introducing the main ideas at the basis of the annealing optimization algorithm, highlighting the differences between the classical and the quantum versions and, in particular, the potential advantages of the latter over the former. Simulated and Quantum Annealing are probabilistic algorithms for approximating minimal solutions by exploring the loss landscape and the solution space iteratively, through certain acceptance probabilities. Inspired by the physical phenomenon of metallurgical annealing, the process leverages the adiabatic theorem (<cit.>). 
Therefore, the system is initialized to an easily prepared state and slowly evolves, through a time-dependent perturbation, towards a more complex Hamiltonian, whose ground state encodes the solution to the optimization problem. More specifically, the algorithm aims at finding the global minimum of an objective function. It starts with a guess and iteratively moves to neighboring states. The switch is based on a Boltzmann acceptance probability that exponentially depends on the height of the energy hill representing the local minimum to escape. In the Quantum version, the perturbation is a time-dependent magnetic field, allowing quantum phenomena like the tunneling effect (see <cit.>). This modifies the acceptance probability that now depends also on the length of the potential barrier. The extra dependence of QA's acceptance probability on L allows the system to compensate for the difficulty of escaping from very high barriers, e.g. very isolated local minima. This shows that QA can potentially outperform SA, especially when the loss landscape consists of very high but thin barriers surrounding shallow local minima. Nevertheless, this is problem-specific, as the acceptance probability is strongly related to the shape of the objective function. Rigorous proofs of the advantage of QA over SA in terms of the ability to escape local optima exist only for specific examples <cit.>. However, there is a wealth of empirical indications that quantum annealing can make highly complex combinatorial optimization problems computationally feasible, see e.g. <cit.>. Also, transferring this theoretical advantage of QA over SA in terms of time-to-solution is a challenging task. In fact, the relationship between acceptance probability and annealing time is still an open question <cit.>. However, we will show in the following subsection the time-to-solutions found for our example problems both for SA and QA, displaying a significant gain for the considered problem sizes. Also, the next subsection starts by addressing the issue of embedding a QUBO in the chipset of a Quantum Annealer both in general and for the QUBO formulation (<ref>). §.§ Chipset embedding In this subsection, we provide a brief overview of the quantum annealer hardware, and its connection with the QUBO format. The QUBO formulation is strongly related to the quantum annealing computation. In fact, the processor of a quantum annealer can be seen as a graph whose nodes are the qubits and whose edges represent the connections between the qubits. Similarly, QUBO formulations, being quadratic forms, can be represented as a graph where linear and quadratic terms are associated with the nodes and the edge weights of the graph, respectively. Thus, QUBO problems are particularly suitable to be implemented and solved on quantum annealers, as long as an embedding of the QUBO graph into a subgraph of the fixed quantum annealer's chipset graph exists. However, QPU chipsets have specific connectivity limitations due to the physical layout of their qubits and the coupling strengths between them. Thus, within the embedding process, it is often the case that multiple physical qubits are involved to represent single model variables, forming so-called qubit chains (<cit.>). This leads to the concrete involvement of a greater number of physical qubits, even though, at the model level, fewer variables were used. This gap between model variables and the physical qubits involved can be significant. 
Nevertheless, it is intended to decrease with the technological advancement, and the construction of bigger and more connected quantum computers, allowing more direct embeddings. We remark that the D-Wave Leap's Ocean SDK contains several specifically designed utilities for this task. See for example <cit.> or D-Wave's documentation <cit.>. Also, we point out that the advantage obtained when using <Ref> over <Ref> discussed in <Ref>, results in a reduction in the number of variables to be embedded in the chosen quantum hardware. This highlights how choosing a more convenient model formulation extends to a meaningful decrease in the number of variables involved, which is even more evident at a hardware level. This is shown in <Ref>, where a "hardware counterpart" of <Ref> is depicted. In fact, while the former shows an improvement in terms of logical variables, the latter shows similar plots of variables, but referring to physical ones. Also, this reduction is observed to scale with the number N of deliveries, as in <Ref>. Note that the qubits usage depends on the chosen embedding. We used the EmbeddingComposite class provided by D-Wave's Ocean SDK for this process <cit.>. This tool automatically minor-embeds the problem into the device, calculating a new minor-embedding each time one of its sampling methods is called <cit.>. Therefore, the values shown in <Ref> represent average numbers of physical qubits, computed across 10 different embedding computations. In this sense, such values measure how many physical variables are needed, on average, to encode and embed each problem instance for increasing N ∈{4,...,8}. The following section is dedicated to introducing the experimental configuration used for conducting our tests. Particularly, we describe the parameter setting, the hardware specifications both of the local processor used for running simulated annealing and the quantum processor, and the synthetic problem instances generated for the testing. § NUMERICAL SETUP AND PARAMETER TUNING §.§ Choice of the penalty coefficients In this subsection, we focus on the choice of the penalty coefficients α_i in (<ref>). We recall that α_1 is responsible for the energy budget constraint H_C_1; α_2 is related to the requirement that all deliveries should be done once, as modeled by the constraint term H_C_2; α_3 is responsible for the time interval constraints as in H_C_3. In general, higher penalty coefficients emphasize constraint satisfaction, at the expense of sacrificing optimality, while lower coefficients may prioritize the objective function over feasibility. For this reason, the process of setting penalty weights is not trivial, and it becomes more challenging when dealing with multiple constraints. In fact, in that case, one has to choose the coefficients to balance the importance of individual terms, and that is typically problem-specific. A common approach to address the interplay between single penalty terms (see e.g. <cit.>) is to set all the weights α_i to a value greater than the largest possible absolute value of the original objective function H_0. In this way, violating constraints is considered more significant than minimizing the objective function. In the case of <Ref>, the objective function H_0 represents a discrete and positive m-dimensional downward-facing parabola touching the x-axis in 0 and N, as depicted in <Ref>. Therefore, its maximum value is upper bounded as H_0 ≤ m N^2/4. 
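The following sketch (illustrative, not taken from the paper) writes the proxy objective H_0 = ∑_i (N - σ_i)σ_i directly as QUBO coefficients and verifies the m N^2/4 bound by brute force on a tiny instance; the variable labels and helper names are assumptions for the example.

```python
# Sketch: the proxy objective H_0 in QUBO form, plus a brute-force check of the bound.
import itertools
import numpy as np

def h0_qubo(m, N):
    # H_0 = sum_i [ N * sum_j x_ij - (sum_j x_ij)^2 ], expanded over binaries
    Q = {}
    for i in range(m):
        for j in range(N):
            Q[((i, j), (i, j))] = float(N - 1)      # N*x - x^2, with x^2 = x
            for k in range(j + 1, N):
                Q[((i, j), (i, k))] = -2.0          # cross terms of -(sum_j x_ij)^2
    return Q

def h0_value(x):                                     # x: m-by-N 0/1 array
    sigma = x.sum(axis=1)
    return float(((x.shape[1] - sigma) * sigma).sum())

def qubo_value(Q, x):                                # x: dict keyed by (i, j)
    return sum(coef * x[u] * x[v] for (u, v), coef in Q.items())

m, N = 2, 4
values = [h0_value(np.array(bits).reshape(m, N))
          for bits in itertools.product([0, 1], repeat=m * N)]
print(max(values), m * N ** 2 / 4)                   # the bound H_0 <= m*N^2/4 holds

x = np.array([[1, 1, 0, 0], [0, 0, 1, 0]])
x_dict = {(i, j): int(x[i, j]) for i in range(m) for j in range(N)}
print(h0_value(x), qubo_value(h0_qubo(m, N), x_dict))  # the two evaluations agree
```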
In our case, however, setting the penalty coefficients uniformly to the same value did not work well as the resulting solutions consistently failed to satisfy the constraint imposing that all the deliveries should be performed by exactly one drone. For this reason, we performed a parameter tuning analysis for α_2 on a range of randomly generated benchmark test problems. We fixed the number of drones to m=10 and sampled problem instances with either 10 or 20 deliveries, N∈{10,12}, by randomly generating costs and time intervals. <Ref> shows the details of the test problems we considered, and more details on the generation process can be found in the next <Ref>. In order to tune the parameter α_2 we set α_2 = k · (mN^2)/4 and let k vary within a discrete grid from 5 to 150. We tested the performance of SA on each of these parameter values for the dataset from <Ref> and checked the quality of the solution in terms of objective function and constraint satisfaction. As a result, we observed overall best performance within the considered problem instances dimension range for k=120. The next subsection provides additional details on the set of test problems as well as an in-depth analysis of the performance of both quantum and simulated annealing samplers, as compared to the exact solution computed with a commercial deterministic solver (Gurobi <cit.>, used with an academic license) applied to the original constrained ILP (<ref>). In all our tests, we fixed the value of α_2 using k=120 and set α_i=mN^2/4 for the other coefficients, i ≠ 2. For what concerns the Quantum Annealer, we conducted an independent tuning of penalty weights, to empirically find their optimal combination, also for this different search method. In fact, the coefficients setting requires a different selection, due to the different exploration of the solution space and the different strategies used to escape local minima. We discovered that also QA encounters more difficulties in fulfilling the all-deliveries-once constraint and that the same strengthening coefficient (<ref>) with k=120 performs well in this case. §.§ Test problems setup In order to generate an unbiased test problem benchmark, we use a random instance generator to sample 24 synthetically generated problem settings encompassing a range of possible configurations. This is a standard benchmarking approach, see e.g. <cit.>. In particular, we produce two datasets, with the aim of analyzing the performance sensitivity on the problem dimension, especially with respect to the deliveries set size. The first dataset comprises 12 instances with either 10 or 12 deliveries, resulting in a larger number of variables, while the second group of 12 instances uses either 7 or 8 deliveries, leading to fewer variables. We report in <Ref> the sampled sets for the cases of 10 and 12 deliveries while the other sampled sets are shown in the Appendix (<Ref>). Our evaluation framework considers the following parameters: * The number of available drones is fixed to m = 10; * The number N of deliveries is randomly chosen, between 10 and 12 for generating the dataset of larger problems, and between 7 and 8 for producing the smaller ones; * The battery budget B is randomly chosen between 50, 70 and 100; * An N-dimensional list containing a random distribution of energy costs c_j for completing each delivery j ∈ N, such that c_j < B ∀ j ∈ N. 
To generate the lists, two configurations have been employed: a single Gaussian centered in B/2, and a uniform distribution between 0 and B; * An N-dimensional list of time windows I_j within which to deliver item j ∈ N. Intervals are generated to extend from 8 a.m. to 8 p.m. and to be one or two hours long. In the subsequent subsection, comprehensive explanations are provided for the hardware utilized by each solver used on this benchmark. §.§ Annealers hardware By importing the necessary modules from the D-Wave library, it is possible to instantiate a solver object, both D-Wave's Simulated Annealer and the Quantum Annealer samplers. Then, one can submit the problem to the chosen sampler through the sample function, associated with the desired parameters, and retrieve results. We provide here the technical specifications of both the solver's hardware used. Simulated Annealing operates locally on personal computers without necessitating data transmission to specialized hardware. In particular, we used the Python environment along with D-Wave's Ocean SDK on our personal computer, equipped with an Intel(R) Core(TM) i7-11390H processor. Differently, performing experiments on Quantum Annealing requires specific hardware exploiting superconducting technology. Thus, to execute a quantum machine instruction (QMI), again through dedicated libraries within D-Wave's Ocean SDK, the information is sent across a network to the SAPI server, joining a queue for the chosen solver (<cit.>). Our tests are conducted on the latest Quantum Annealer QPU hardware by D-Wave Advantage 1. The Advantage architecture counts 5627 qubits and uses the Pegasus graph topology, increasing the per-qubit connections to 15 (<cit.>). The solver's protocol and parameters characterize how the problem is run and allow control of the annealing process (e.g. chain strength, annealing time, and annealing schedule <cit.>). In our tests, quantum annealing was implemented according to the standard forward annealing protocol, with a defaulted single-sample annealing time of 20.0μ s. The chain strength parameter instead, was tuned from default to the smallest value able to increase the solver's performance for our test case, see <Ref>. Also for the simulated annealing tests, we used the defaulted annealing protocol, corresponding to a linear interpolation of the β parameter, within a defaulted range of values computed based on the problem biases <cit.>. The number of reads was always set to 1000 for both classical and quantum tests. Detailed explanations and comments are given in the following subsection for each metric used to evaluate the solver's performance. § NUMERICAL RESULTS In this section, we provide a performance analysis for both quantum and simulated annealing samplers as compared to the results obtained using the original constrained ILP formulation (<ref>), solved with the commercial software Gurobi <cit.>. Being probabilistic methods, annealing techniques are employed iteratively, i.e. multiple runs of the optimization process are required to explore different regions of the solution space. Thus, for an annealing technique to be considered a successful probabilistic optimization method, it is sufficient to find at least one optimal and feasible solution among all the runs. Namely, the key metric for evaluating the performance of an annealing technique is not whether it consistently finds the optimal solution in every run, but rather if it can achieve the optimal solution at least once within multiple attempts. 
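For reference, the workflow described in the setup above can be reproduced roughly along the following lines with D-Wave's Ocean SDK. This is a hedged sketch rather than the authors' exact scripts: the toy QUBO and the chain-strength value are placeholders, the parameter names reflect the public Ocean API, and running on the QPU requires a configured Leap API token.

```python
# Sketch of submitting a QUBO dictionary Q to the simulated and quantum annealing samplers.
from dwave.samplers import SimulatedAnnealingSampler        # classical, runs locally
from dwave.system import DWaveSampler, EmbeddingComposite    # QPU access via Leap

Q = {(("x", 0, 0), ("x", 0, 0)): -1.0,                       # toy QUBO; in practice this
     (("x", 0, 1), ("x", 0, 1)): -1.0,                       # would come from the DDPP
     (("x", 0, 0), ("x", 0, 1)): 2.0}                        # Hamiltonian construction

sa = SimulatedAnnealingSampler()
sa_set = sa.sample_qubo(Q, num_reads=1000)                   # 1000 reads, as in the tests

qa = EmbeddingComposite(DWaveSampler())                      # automatic minor-embedding
qa_set = qa.sample_qubo(Q, num_reads=1000,
                        annealing_time=20,                   # microseconds
                        chain_strength=2.0)                  # tuned value is problem-specific

print(sa_set.first.sample, sa_set.first.energy)
print(qa_set.first.sample, qa_set.first.energy)
```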
We seek the problem dimension range within which both Simulated and Quantum annealing results meet this requirement. §.§ Models comparison in time to solution The experiments detailed in the present section all pertain to the QUBO formulation in <Ref>, as it was discussed in <Ref> and <Ref> to be significantly more convenient. Also, we show in this subsection that the enhancement gained with <Ref> over <Ref> can be evaluated in terms of time-to-solution benefit. <Ref> displays the impact of the variables' improvement in terms of time-to-solution, both for simulated annealing and quantum annealing. Precisely, the considered time for SA measures the extension of the classical algorithm execution, that retrieves the best sample among 1000 reads. In this sense, the plot shows how long a single call to SA takes to retrieve 1000 samples and subsequently extract the best. The reported values are averages computed within 10 of such calls to the solver. To make a fair comparison, we plot the QPU access time for QA, neglecting the data transmission latency. We notice that for both SA and QA there is a sensible time reduction when using the modified QUBO formulation (<ref>). The figure also highlights the significant speed-up of QA with respect to SA. The effectiveness of the QUBO model (<ref>) in terms of solution quality will be thoroughly investigated in <Ref>. Note that <Ref> shows just the QPU access time as a comparison with SA execution time. Namely, we chose to omit internet latency and preparation times, and also the time required by the embedding. However, while for the considered benchmark the embedding time makes a negligible contribution, we point out that, for complex embeddings, it can appear to be significant. The observed speed-up of quantum hardware is encouraging and aligned with their promising capacities in overcoming local minima related to purely quantum effects, as exposed in <Ref>. §.§ Simulated Annealer results The results found by SA on each of the 24 synthetically generated problem instances reported in <Ref> and <Ref>, are shown in <Ref> and <Ref> respectively. There, we show both average performance and "best" performance, obtained by showing the results corresponding to the smallest value of H_0, over 10 runs with starting points randomly chosen. As discussed in <Ref>, the results for the "best" run serve as a crucial indicator of whether the solver has successfully located the global minimum or whether it has gotten stuck into a local minimum, even after multiple attempts. Overall, <Ref> and <Ref> summarize the results obtained across all the 24 problem instances described in <Ref>, providing details about the following metrics: * Avg time: the time to solution required by the SA to retrieve the best sample for each of the 12 instances. This value is an average computed within 10 calls to the solver. Precisely, the considered time for SA measures the extension of the classical algorithm execution, that retrieves the best sample among 1000 reads; * Solution, avg: the average value, within 10 calls to the SA, of the best sample retrieved by the solver. In each call, after the sampling process, the solver extracts the best value of the objective function among the various solutions found; * Solution, best: in agreement with what has been said, the key information is whether the solver finds the real minimum at least once. In order to understand this, for each of the 12 instances, we select the smallest among the 10 results found by SA. 
We highlight in darker blue the cases in which it corresponds to the real global minimum; * Solution, ILP: the correct optimal value of the objective function for each instance, computed with a commercial deterministic solver branch-and-bound based <cit.>, applied to the original constrained ILP (<ref>). * Drones, avg: the number of drones corresponding to the solution with the smallest value of the objective function, for each instance. We report here the average number of such drones over 10 calls to SA; * Drones, best: for each of the 12 instances, the number of drones corresponding to the solution with the smallest objective function value. We highlight in darker blue the cases in which it corresponds to the real minimum number of prescribed drones; * Drones, ILP: the number of drones prescribed by the exact solution given by the deterministic solver; * [battery, time-consistency, all deliveries once], avg: triplet of percentages of satisfaction, namely how many times, in 10 attempts, each of the specific constraints is satisfied; * [battery, time-consistency, all deliveries once], best: triplet of satisfaction, specific for the solution with the smallest objective function value. The triplet entry equals 1 if the corresponding constraint is satisfied, and 0 otherwise. The results highlighted with a dark blue color in <Ref> and <Ref> emphasize the capacity of the solver to find both optimal and feasible solutions at least once. We recall that by "feasible solution" we mean a solution that satisfies simultaneously all the problem constraints, namely bringing to completion the required delivery task. It is also important to stress that even though the simultaneous satisfaction of the constraints is an important feature, studying how the method addresses single constraints is also useful. In fact, it allows us to understand potential disparities among different constraints, thus hinting at the necessity of tuning the corresponding penalty weight, as explained in <Ref>. <Ref> shows that utilizing a benchmark comprising 10 or 12 deliveries, where each instance consists of approximately 200 variables, SA struggles to find solutions that are both feasible and optimal. However, the situation changes significantly when lowering the problem dimension. When focusing on 7 or 8 deliveries, encompassing approximately 150 variables, <Ref> shows that SA is able to find solutions that are both feasible and optimal. The comparison of <Ref> and <Ref> highlights the dependence of SA performance on the number of variables. It underscores that variable reduction can play a key role, enabling the method to find satisfactory solutions. More specifically, we identify a dimension-limiting case of ∼ 150 variables within which, with this setting and these parameters, the solver can be considered comparable to deterministic methods. In the following subsection, we aim to conduct a similar analysis to the one undertaken for SA, but for the QPU described in <Ref>. However, we will see that for the quantum case, due to hardware constraints, the dimension sensitivity is impacting the annealing performance even more. §.§ Quantum Annealer results We address the issues related to the problem embedding into the quantum hardware described in <Ref>, for the specific problem at hand, and how these influence the performance sensitivity concerning problem size and chain strength. 
The goal is to identify, as done with the classical annealer, a dimension range of applicability of currently available machines for the DDPP formulated as in <Ref>. Using QA for the bigger benchmark problems in <Ref> requires ∼ 800 physical qubits, yielding poorly performing results. In general, encountering many infeasible solutions represents a notable drawback for annealing techniques. This underscores the need for establishing a dimensionality range within which these methods, given current technological capabilities, appear to be reliable. <Ref> demonstrates the capacity of SA to derive meaningful solutions for the DDPP on instances featuring ∼ 150 variables. Consequently, we seek a similar estimation for QA: we aim to identify the largest problem size for which QA is able to find at least one feasible solution while approximating the optimal value of H_0. Starting from <Ref> data, we repeat the same experiments done with SA, changing the solver to QA, and significantly decreasing m and N. All tests are done according to <Ref> setup, just changing the solver and the problem size. Particularly, 10 QA runs with random starting points are done for each decreased-size instance, each with 1000 reads. For what concerns the size reduction, we consider a fixed m = 4 and tune N from 4 to the largest value that results in at least one feasible solution, determined to be N=6. The detailed findings are presented in <Ref>, revealing that on this simpler use-case, QA can be considered successful. As the number of deliveries increased, so did the likelihood of encountering unfeasible solutions. Larger problem instances amplify the complexity of exploring the solution space, particularly due to the presence of multiple constraints. Our tests show that QA is robust up to a threshold of approximately 160 physical variables. This corresponds to scheduling m=4 drones on bringing to completion N=6 deliveries, with prescribed constraints of time and battery. Beyond this point, the performance of QA declines. The next subsection explores the possibility of stretching this limiting problem dimension through parameter tuning. §.§ Chain Strength Tuning In this subsection, we observe that the dimension threshold can be raised from N=6 to N=7 tuning the chain strength of the annealer. To this end, we look for the optimal choice of this parameter such that QA yields at least one feasible solution to the smallest non-functioning case: m=4, N=7. Larger choices of the parameters turned out to be untreatable by the available quantum hardware. We concentrate on the chain-strength parameter <cit.> arising specifically when utilizing QA technology. This value tunes the coupling force applied to chain qubits inside the hardware, in order to faithfully represent single QUBO variables, as explained in <Ref>. In other words, couplers are assigned a bias that favors chain qubits taking the same value, and this is controlled by the chain strength parameter. Knowing the threshold sizes of N = 7, m = 4 for the QA to approach the DDPP successfully, we manually increase the chain strength parameter corresponding to this indicted case, in a tuning process. Indeed, given the default chain strength value of 109413.47, for this size, we iteratively raise it, and report in <Ref> how this impacts solution quality in terms of feasibility. Parameters that lead to a feasibility ratio boost are highlighted in <Ref>, announcing the possibility of raising the reliability of QA on this problem up to N=7 deliveries. 
This shows that this case, which corresponds to a large number of physical qubits (between 190 and 200), can be addressed by currently available quantum annealers by optimizing the chain strength. Finally, to provide a more complete overview of the performance of QA, in the next subsection, we tackle the issue of scalability and show how the performance changes when increasing N in the derived range. §.§ Performance scaling with respect to the number of deliveries In this section, we address the scalability of QA and SA solvers, by examining how performance varies within the approachable size of N ∈{4,5,6,7}. Specifically, a study of the sensitivity of the annealers' solutions with respect to the number of deliveries is given and evaluated through a comparison with deterministic solver findings on the same problems. We aim to tune N ∈{4,5,6,7,8} since, as we recall from <Ref>, QA performs successfully on all values of N ∈{4,5,6,7}, when the set of available drones has dimension m=4. Therefore, for conducting the N-tuning in the stated range, for QA tests, we fix m=4. We chose to show also the case with N=8 deliveries, as representative of the dimension-limiting case for QA. Similarly, SA handles problems in the stated N range for m=10, as shown in <Ref>. Therefore, for the same tests with SA, we fix m=10. For each N value and the fixed m values, we generate a test problem, as done for the instances from <Ref>. Of course, these synthetic instances, created for each N value, comprise N-dimensional lists of energy costs and time intervals, and consequently are of increasing size, as N increases. Specific sampled sets of parameters are shown in <Ref> of the Appendix. We report in <Ref> the results for SA and QA on these random instances of increasing size. In the figure, we report the average optimal objective function values over the feasible solutions obtained with 10 calls to the annealers. These values are compared with the corresponding values obtained by the commercial deterministic solver applied to the ILP formulation (<ref>). We also show the percentage of feasible solutions found in 10 attempts, for each value of N and for both SA and QA. As we average only over feasible solutions, these figures provide information on both the optimality of the computed output (as compared to the deterministic solution) and its feasibility. According to our findings, with the penalty weights specified in <Ref> and the optimal chain strength for the N=7 case of <Ref>, QA is effective on the selected size range, finding meaningful solutions. In terms of physical qubits, this shows that QA can competitively tackle DDPP instances, as long as the number of variables remains under ∼ 190. This is a noteworthy value and is obtained due to the chain strength tuning in <Ref>. § CONCLUSION AND FUTURE WORK The overarching goal of the work was to study the extent to which annealing techniques can be used to tackle the DDPP, a popular and notoriously complex optimization problem. We presented the standard QUBO formulation of the problem and proposed a novel one leading to a significant reduction in constraints and thus in the variable count. We conducted a series of experiments with the primary objective of exploring the possibility of using quantum annealing machines and understanding the correct dimension range within which QA works well. We undertook an analysis involving both the Simulated and the Quantum Annealer solvers by D-Wave. 
Our study of the potential applicability of quantum annealing hardware is conceived with a forward-looking perspective, as technology advances and yields larger and more interconnected devices. The analysis of how QA scales using small problems is particularly relevant also in view of the potential application within hybrid quantum-classical computing frameworks, where the quantum component tackles specific sub-problems of larger classical algorithms <cit.>. It is a topic of interest to develop strategies for handling larger problems with QA alone, as well as in cooperation with other methods. We observe from our study that, even though QA can efficiently approximate exact solutions, still commercial deterministic solvers show the best performance. However, it is also known that, for big-size instances, exact solvers like Gurobi become unable to obtain optimal solutions with zero optimality gap, within a reasonable amount of time. For example, in <cit.> it is shown that the branch-and-cut algorithm implemented in Gurobi was unable to solve the considered optimization problem within 24 hours, and hence yielded a sub-optimal solution. In that work, a hybrid quantum-classical partitioning approach instead was shown to find good quality near-optimal solutions within reasonable computational time. Therefore, depending on the problem setting, QA can represent a valid alternative in a hybrid-decomposition framework, as opposed to approaching the global problem with a deterministic solver. § ACKNOWLEDGMENTS We acknowledge the CINECA award under the ISCRA initiative, for the availability of quantum computing resources and support. S.T. acknowledges support from Leonardo SpA and wishes to thank the Quantum and HPC Labs in Genova for their hospitality. icml2023 § APPENDIX We report here the tables with the parameter details of the synthetic problem settings used in the experiments.
http://arxiv.org/abs/2406.08765v1
20240613025118
LLM-based Knowledge Pruning for Time Series Data Analytics on Edge-computing Devices
[ "Ruibing Jin", "Qing Xu", "Min Wu", "Yuecong Xu", "Dan Li", "Xiaoli Li", "Zhenghua Chen" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT Limited by the scale and diversity of time series data, neural networks trained on time series data often overfit and show unsatisfactory performance. In comparison, large language models (LLMs) have recently exhibited impressive generalization in diverse fields. Although many LLM-based approaches have been proposed for time series tasks, these methods require loading the whole LLM during both training and inference. These high computational demands limit practical applications in resource-constrained settings, such as edge-computing and IoT devices. To address this issue, we propose Knowledge Pruning (KP), a novel paradigm for time series learning in this paper. For a specific downstream task, we argue that the world knowledge learned by LLMs is largely redundant and that only the related knowledge, termed "pertinent knowledge", is useful. Unlike other methods, our KP aims to prune the redundant knowledge and distill only the pertinent knowledge into the target model. This reduces model size and computational costs significantly. Additionally, different from existing LLM-based approaches, our KP does not require loading the LLM during training and testing, further easing computational burdens. With our proposed KP, a lightweight network can effectively learn the pertinent knowledge, achieving satisfactory performance at a low computational cost. To verify the effectiveness of our KP, two fundamental tasks on edge-computing devices are investigated in our experiments, where eight diverse environments or benchmarks with different networks are used to verify the generalization of our KP. Through experiments, our KP demonstrates effective learning of pertinent knowledge, achieving notable performance improvements in regression (19.7% on average) and classification (up to 13.7%) tasks, showcasing state-of-the-art results. § INTRODUCTION With the advancement of deep learning, numerous methods have been proposed for time series learning across different fields such as healthcare <cit.>, transportation <cit.>, energy <cit.> and industry <cit.>. Although these approaches show significant improvements on some benchmarks, it is still challenging to generalize them to complex scenarios <cit.>. The main issue limiting the generalization of existing time series approaches is that different measurement setups are applied in the process of time series data collection. Unlike computer vision and language, it is difficult to combine time series datasets collected with different measurement setups into a large-scale dataset. Limited by the scale and diversity of a single time series dataset, the generalization of neural networks trained on time series data is not satisfactory. Recently, large language models (LLMs) with tens of billions of parameters have demonstrated remarkable generalization capabilities across different tasks <cit.>. Pre-trained on massive corpora of self-supervised data, these foundation models implicitly capture knowledge about the world, which enables them to be zero-shot transferable to downstream tasks. To alleviate the issues in time series learning, some methods <cit.> have been proposed to integrate the knowledge of LLMs into their frameworks. Nevertheless, there are two issues in these LLM-based time series methods.
* These approaches often require loading the whole LLM during training and inference, which is computationally expensive and time-consuming. * These methods are generally built on a pre-trained and fixed LLM, which largely limits their flexibility. Limited by these issues, it is challenging for existing LLM-based methods to flexibly design models of different scales according to the requirements of a task, especially in computation-constrained scenarios. To address this problem, we re-evaluate the impact of the world knowledge acquired by LLMs on downstream tasks. We argue that, for a specific downstream task, it is not necessary to transfer the entire knowledge of a pre-trained LLM into a target model. Instead, as illustrated in Fig. <ref>, we contend that this world knowledge can actually be divided into two parts: knowledge related to the specific task and knowledge redundant for it. Only the related knowledge, termed "pertinent knowledge", needs to be transferred to the target model. Motivated by this insight, we propose a novel compression paradigm called Knowledge Pruning (KP) for LLMs, which is able to identify the pertinent knowledge, prune the redundant knowledge, and effectively distill the pertinent knowledge into our target model. Knowledge is implicitly stored in a neural network, and it is generally difficult to directly obtain a specified part of that knowledge from the network. However, unlike traditional networks, LLMs can produce descriptions of related knowledge via prompts through a dialogue scheme. Following this scheme, our proposed KP first generates a knowledge prompt set (KPS) for a specific task, where these prompts are forwarded to a pre-trained LLM to produce corresponding embeddings. In our proposed KP, these embeddings are called knowledge anchor points (KAPs). Although the latent space of the pertinent knowledge is continuous, these KAPs can be used to roughly represent it. After that, metric learning is applied to learn this prior knowledge via knowledge distillation and transfer the pertinent knowledge to our target model. Additionally, a regression task requires the network to learn the continuous domain of the task and predict arbitrary values. To fulfill this requirement, an anchor voting scheme (AVS) is proposed, where the confidence distribution over different anchor points is used to predict the expected output. To verify the effectiveness of our proposed KP, extensive experiments are conducted on two fundamental tasks on edge-computing devices, where different network architectures are investigated on two task categories in time series learning: classification and regression. In classification, we evaluate the performance of our KP on four different human activity recognition benchmarks, where our approach effectively improves performance by up to 13.7%. In regression, we investigate the performance of our KP on remaining useful life prediction under four different scenarios. Through experiments, our proposed KP significantly improves the accuracy by 19.7% on average. Our proposed KP achieves state-of-the-art performance on both tasks. Overall, our contributions are summarized as below: * We discover that the knowledge in LLMs is largely redundant for a specific downstream task. Instead of the entire knowledge, only the pertinent knowledge needs to be transferred to the target model.
* A novel compression paradigm, Knowledge Pruning (KP), is proposed to effectively distill the pertinent knowledge into the target model, achieving satisfactory performance while maintaining a low computational cost. * An anchor voting scheme (AVS) based on the scores of knowledge anchor points is proposed to predict arbitrary values for the regression task. * Experiments are extensively conducted on two fundamental tasks in time series learning, classification and regression, where different networks are employed across eight different scenarios or benchmarks. For the regression task, our KP significantly improves the accuracy by 19.7% on average. For the classification task, the performance is largely improved, by up to 13.7%. With our KP, state-of-the-art performance is achieved on both tasks. § RELATED WORK Large language models (LLMs) have recently witnessed significant progress and shown impressive performance across a multitude of fields, including natural language processing (NLP) <cit.> and computer vision (CV) <cit.>. To integrate the knowledge representations of LLMs into time series analytics, many approaches have been proposed. PromptCast <cit.> first attempted to utilize LLMs for time series forecasting, where the time series data is converted into prompts. OFA <cit.> proposes to fine-tune a pre-trained LLM for downstream tasks in time series analytics. Time-LLM <cit.> and LLM4TS <cit.> aim to repurpose a pre-trained LLM for time series tasks by aligning the time series domain to that of language. TEST <cit.> combines text prompts with time series encoding to better align time series data with language. To fully utilize the generalization capability of LLMs, TEMPO <cit.> augments the raw time series data via data decomposition and fine-tunes a pre-trained LLM on the augmented time series data. Although these LLM-based approaches achieve significant performance on time series tasks, they are built on a pre-trained and fixed LLM and require loading the whole LLM during training and inference. These drawbacks limit their flexibility and their applications in scenarios with limited computational resources. To address this issue, we propose Knowledge Pruning (KP), which is able to prune the redundant knowledge and effectively transfer the pertinent knowledge to a target model without retraining the LLM, significantly reducing the computational cost while maintaining satisfactory performance. § MAIN WORK Large language models (LLMs) have high computational demands, and the knowledge stored in them is largely redundant for a specific task. To alleviate these drawbacks and facilitate the application of LLMs in computationally constrained scenarios, we propose a new compression paradigm, Knowledge Pruning (KP), in this section. Knowledge is implicitly stored in neural networks, and it is generally difficult to directly extract a specified part of it. To address this problem, we leverage the dialogue scheme of LLMs to generate a series of language embeddings based on prompts. The pipeline of our KP is shown in Fig. <ref>, where our KP is composed of two stages: a pre-processing stage and a training stage. Given a specific downstream task, a knowledge prompt set (KPS) is first generated. After that, the prompts in the KPS are forwarded into a pre-trained LLM to produce corresponding embeddings, which serve as knowledge anchor points (KAPs) and are used to represent the pertinent knowledge. We regard this pertinent knowledge as prior knowledge for the target model.
After that, at the training stage, metric learning and knowledge distillation are leveraged to transfer this prior knowledge into the target model. Additionally, the output based on metric learning is generally discrete. To extend the application of our KP to tasks with continuous outputs, an anchor voting scheme (AVS) is proposed, which enables our KP to produce arbitrary values, achieving significant improvements on both classification and regression tasks. §.§ Knowledge Pruning Our knowledge pruning consists of three steps: knowledge prompt set generation, knowledge anchor point production, and pertinent knowledge distillation. Knowledge Prompt Set Generation The knowledge prompt set (KPS) contains the prompts that are forwarded into a pre-trained LLM to obtain the knowledge anchor points (KAPs). In this paper, the prompts in the KPS describe the corresponding data. Since we apply our proposed KP to two fundamental tasks, regression and classification, two different prompt templates are proposed. In regression, remaining useful life prediction is used to evaluate the performance of our KP, and the prompt template is “The remaining useful life is {num}.”, where num indicates the corresponding ground-truth value and ranges over [y_min, y_max]. In classification, we apply our KP to human activity recognition, and the prompt template is “The subject is {action}.”, where action is the name of the corresponding activity. Knowledge Anchor Point Production After obtaining the KPS, we forward its prompts to a pre-trained LLM to obtain the language embeddings, which can be formulated as follows: z_i = F_l(𝒫_i), where 𝒫_i indicates the ith prompt, F_l represents a pre-trained LLM, and z_i is the produced language embedding, termed a knowledge anchor point (KAP). These KAPs are used to represent the space of pertinent knowledge. Pertinent Knowledge Distillation Rather than transferring the entire knowledge of an LLM, our KP only transfers the pertinent knowledge indicated by the KAPs. However, there is a domain gap between the knowledge learned by the LLM and that of the downstream task. To alleviate this issue, an alignment module consisting of two fully connected layers is used to project these KAPs into the latent space of the downstream task. This process is computed as below, k_i = ϕ(z_i), where ϕ denotes the alignment module and k_i is the transformed feature vector, which serves as prior knowledge. Given a segment of time series data x_i, we forward it through our target model F_t and obtain the predicted feature vector f_i. To utilize the prior knowledge 𝒵 = {z_i | i = 1, …, N}, metric learning is leveraged. Moreover, to optimize the target model and the alignment module simultaneously, we extend the unidirectional metric learning of Prototypical Networks <cit.> to a bi-directional metric learning scheme. For optimizing the target model, the process is computed as: p_t(i) = exp(-d(k_i, x_i))/∑_t=1^|𝒵|exp(-d(k_t, x_i)), where d denotes the distance function. To improve numerical stability and computational efficiency, we further refine this computation and compute the prediction as below: p_t(i) = log(exp( simi(k_i, f_i))/∑_t=1^|𝒵|exp( simi(k_t, f_i))), where simi is the cosine similarity. For the alignment optimization part, the process can be formulated as: p_l(i) = log(exp( simi(k_i, f_i))/∑_t=1^|ℬ|exp( simi(k_i, f_t))), where |ℬ| denotes the batch size.
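The following is a minimal PyTorch-style sketch of the pipeline just described: the prompts of the KPS are encoded once into KAPs by a frozen pre-trained text encoder, a two-layer alignment module projects them into the task latent space, and the two cosine-similarity distributions p_t (normalized over anchors) and p_l (normalized over the batch) are computed. The helper names, hidden sizes, and the use of random tensors as stand-ins for the encoder and target-model outputs are illustrative assumptions, not the authors' implementation.

```python
# Sketch of KAP production, alignment, and the bi-directional similarity
# distributions p_t and p_l described above (assumed details are noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_kps(values):
    # Knowledge prompt set for the RUL task, following the template in the text.
    return [f"The remaining useful life is {v}." for v in values]

class Alignment(nn.Module):
    # Two fully connected layers projecting KAPs into the task latent space.
    def __init__(self, d_llm, d_task):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_llm, d_task), nn.ReLU(),
                                 nn.Linear(d_task, d_task))
    def forward(self, z):
        return self.net(z)

def bidirectional_distributions(k, f):
    # k: (N, d) aligned anchors; f: (B, d) target-model features.
    k = F.normalize(k, dim=-1)
    f = F.normalize(f, dim=-1)
    sim = f @ k.t()                      # (B, N) cosine similarities
    p_t = F.log_softmax(sim, dim=1)      # normalized over anchors: optimizes the target model
    p_l = F.log_softmax(sim, dim=0)      # normalized over the batch: optimizes the alignment
    return p_t, p_l

if __name__ == "__main__":
    prompts = build_kps(range(1, 126))   # e.g. RUL values 1..125 (an assumption)
    z = torch.randn(len(prompts), 512)   # stand-in for frozen text-encoder embeddings of the prompts
    f = torch.randn(8, 128)              # stand-in for target-model features of one batch
    align = Alignment(d_llm=512, d_task=128)
    p_t, p_l = bidirectional_distributions(align(z), f)
    print(p_t.shape, p_l.shape)          # torch.Size([8, 125]) torch.Size([8, 125])
```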
Finally, to distill the pertinent knowledge into the target model, the Kullback–Leibler divergence (KL-div) is used and the final loss is: L = 0.5*D_KL(p_t, p_g) + 0.5*D_KL(p_l, p_g^T), where p_g is the ground-truth distribution, defined as follows: p_g(i) = exp(g_i * τ)/∑_t=1^|ℬ|exp(g_t * τ), where g_i equals one for the corresponding prompt description and zero otherwise. As in knowledge distillation, τ is a temperature hyper-parameter. §.§ Anchor Voting Scheme Since our KP is based on metric learning, the prediction of the target model is discrete. To extend our KP to tasks with continuous outputs, such as regression, an anchor voting scheme (AVS) is proposed. Given the prediction distribution 𝒮 = {p_t(i)|i=1,…,|𝒵|}, we first sort these scores in descending order: 𝒮̂ = sort(𝒮). After that, the sorted scores are accumulated according to Eq. <ref>: 𝒮_a = cumsum(𝒮̂). Then, the anchors whose cumulated scores exceed θ form a voting set 𝒱 = {v_i | i = 1,…, |𝒱|}. The final prediction is generated as follows: o = ∑_i=1^|𝒱|v_i*n_i/∑_i=1^|𝒱|v_i, where n_i indicates the numerical value described by the KAP v_i. With our proposed AVS, our KP is effectively extended to the regression task, achieving significant performance <cit.>.
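Below is a small sketch of the anchor voting step for the RUL task. The description above admits more than one reading; this sketch assumes the per-anchor scores are probabilities (the exponentiated p_t) and that the top-scoring anchors are kept until their cumulative score first reaches θ, after which the prediction is the score-weighted average of the numerical values attached to the selected anchors. Function and variable names are placeholders.

```python
# Sketch of the anchor voting scheme (AVS) under one plausible reading of the
# description above; rounding and the exact threshold rule are assumptions.
import numpy as np

def avs_predict(scores, anchor_values, theta=0.9):
    """scores: per-anchor probabilities; anchor_values: numeric RUL of each anchor."""
    scores = np.asarray(scores, dtype=float)
    anchor_values = np.asarray(anchor_values, dtype=float)
    order = np.argsort(scores)[::-1]        # sort anchors by score, descending
    cum = np.cumsum(scores[order])
    k = np.searchsorted(cum, theta) + 1     # smallest prefix whose cumulative mass reaches theta
    voters = order[:k]
    w = scores[voters]
    return float(np.sum(w * anchor_values[voters]) / np.sum(w))

# Example: three anchors describing RUL values 10, 20 and 30.
print(avs_predict([0.7, 0.25, 0.05], [10.0, 20.0, 30.0]))   # ~12.6
```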
§ EXPERIMENTS To verify the effectiveness of our Knowledge Pruning (KP), extensive experiments are conducted in this section. §.§ Datasets and Experimental Setup Datasets To comprehensively investigate the performance of our KP, two fundamental tasks on edge-computing devices, classification and regression, are evaluated in this paper. In classification, the human activity recognition (HAR) task is studied and four different benchmarks, UCI_HAR <cit.>, Opportunity <cit.>, PAMAP2 <cit.>, and WISDM <cit.>, are used. These benchmarks contain different numbers of activity categories, ranging from 6 to 17, with scales between 3k and 29k samples. In regression, the remaining useful life (RUL) prediction task is used for evaluation, where the C-MAPSS <cit.> dataset is employed. C-MAPSS contains four different subsets, FD001, FD002, FD003 and FD004, covering different scenarios. Experimental Setup In classification, for consistency and meaningful comparison, the training and inference processes on UCI_HAR, Opportunity and PAMAP2 are conducted according to the protocol of iSPLInception <cit.>. Since the experiments in iSPLInception <cit.> do not include the WISDM benchmark, the experiments on WISDM follow the setting in Multi CNN-BiLSTM <cit.>. For fair comparison, the other compared methods are re-implemented under the same setting. Following previous approaches <cit.>, F1-Score is used for evaluation in HAR tasks. In regression, some methods are also re-implemented under the same conditions. The training and inference processes are conducted according to classic RUL methods <cit.>. RMSE and the scoring function are used as evaluation metrics. The two hyper-parameters τ and θ are set to 10 and 0.9, respectively, for all experiments. The pre-trained text encoder of CLIP <cit.> is used as the pre-trained LLM in the experiments. Experiments are conducted on a workstation with a GeForce RTX 4080 GPU and 128 GB of memory and take 1 to 4 hours. §.§ Comparison with other methods To evaluate the performance of our KP, we compare our approach with other state-of-the-art (SOTA) methods. The experimental results on regression and classification are listed in Table <ref> and Table <ref>, respectively. In the RUL task, several SOTA approaches, Li et al. <cit.>, BLCNN <cit.>, PE-Net <cit.>, DGRU <cit.>, AdaNet <cit.>, Jang et al. <cit.> and KDnet <cit.>, are compared with our method. Among these methods, Li et al. propose a CNN-based network to predict the RUL. PE-Net integrates a position encoding scheme with an optimized CNN architecture for the RUL task. AdaNet introduces deformable convolution into the RUL task. BLCNN devises a hybrid network that combines RNN and CNN to improve the prediction accuracy. DGRU applies adversarial learning to the RUL task. A self-supervised learning approach is proposed by Jang et al. KDnet utilizes knowledge distillation to transfer the knowledge of an RNN to a CNN model. Benefiting from the pertinent knowledge learned from a pre-trained LLM, our KP performs much better than these methods and achieves the best performance. In the HAR task, we compare our KP with seven SOTA methods: LSTM-CNN <cit.>, CNN <cit.>, Multi CNN-GRU <cit.>, Multi CNN-BiLSTM <cit.>, GRU_INC <cit.>, DTL <cit.> and iSPLInception <cit.>. These compared approaches employ different network architectures such as RNNs, CNNs and even hybrid networks. As a novel model compression paradigm, our proposed KP is fundamentally orthogonal to existing HAR methods and can be applied to any of them. We apply our KP to two different methods, DTL and iSPLInception, and list the best performances we achieved in Table <ref>. The experiments demonstrate that our KP effectively transfers the pertinent knowledge of a pre-trained LLM to the target model, achieving the best performance among the compared SOTA methods. §.§ Ablation Study To verify its effectiveness, ablation experiments are presented. Our KP is orthogonal to existing approaches for time series analytics and can be directly applied to them. To show the generalization of our KP, we apply it to several different networks and report the performance improvements. Experimental results on the regression task (RUL) and the classification task (HAR) are listed in Table <ref> and Table <ref>, respectively. In the RUL task, three different approaches, Bi-LSTM, Two-Stream BiLSTM <cit.> and PE-Net <cit.>, are used as our baselines. Bi-LSTM is a shallow network consisting of two bi-directional LSTMs. Two-Stream BiLSTM integrates the handcrafted feature flow <cit.> with the raw time series data via a Bi-LSTM based network. PE-Net designs a CNN with a position encoding scheme to predict the RUL. In Table <ref>, Bi-LSTM is used as baseline1, PE-Net as baseline2 and Two-Stream Bi-LSTM as baseline3. With our proposed KP, all three methods are remarkably improved across the four scenarios. Compared with RMSE, Score is generally regarded as the more important evaluation metric, since it penalizes late predictions more heavily, which matches the practical setting. Among the three baselines, baseline3 achieves the best performance on average. After applying our KP, its performance is further improved by 19.7% in Score. In the HAR task, two different methods, DTL <cit.> and iSPLInception <cit.>, are used as our baselines. DTL is a hybrid network that combines CNN and RNN to capture temporal features. In comparison, iSPLInception utilizes an Inception-based CNN to classify human activities. In Table <ref>, baseline1 indicates DTL and baseline2 represents iSPLInception.
According to the experimental results, our KP is able to effectively improve the performance of these two baselines across all four benchmarks. The improvements on these methods range from 0.8% to 13.7%. These experiments show that our proposed KP effectively identifies the pertinent knowledge and transfers it to the target model. With our KP, all five baselines are improved by a large margin. Effectiveness of AVS AVS is proposed to enable the metric-learning-based network to predict continuous values for regression tasks such as RUL. Experiments designed to show the effectiveness of our proposed AVS are listed in Table <ref>. Bi-LSTM is used as the baseline. The experimental results indicate that, without our proposed AVS, the performance of KP on the regression task is not satisfactory. After applying our proposed AVS, our KP can effectively improve the accuracy of RUL prediction by a large margin. Based on the experiments above, it can be seen that our proposed KP consistently improves performance across different tasks and benchmarks. Since the improvement brought by KP ranges from 0.8% to 13.7%, the effectiveness of our KP may be affected by the specific data distribution and the neural network architecture. §.§ Computation Efficiency Our KP is proposed to alleviate the issue of the computational cost of LLMs. Experiments on computational efficiency are carried out and listed in Table <ref>, where FLOPs and Params are reported to compare computational efficiency. As listed in Table <ref>, we apply our KP to two networks, Two-Stream BiLSTM and DTL, for the RUL and HAR tasks, respectively. Since DTL is a hybrid network composed of CNN and RNN and is more complex than Two-Stream BiLSTM, the computational complexity of DTL is higher than that of Two-Stream BiLSTM. Nevertheless, the computational demands of these two approaches are much lower than those of the LLM in CLIP. According to the experimental results, our proposed KP is able to effectively prune the redundant knowledge of the LLM. The computational issue of LLMs is well alleviated, and the performance of the target model is improved. §.§ Sensitivity Analysis Our proposed KP involves two hyper-parameters: τ and θ. To investigate the impact of different values of these two parameters, several experiments are conducted and discussed. For the parameter τ, we gradually increase its value and carry out experiments on the HAR and RUL tasks, respectively. The experimental results on RUL and HAR are illustrated in Fig. <ref> and Fig. <ref>, respectively. In Fig. <ref>, our KP is applied to Two-Stream BiLSTM with different τ values. Experiments are carried out on the FD004 subset, which contains the most complex scenarios. Although different performances are obtained on the RUL task for different τ values, they are still better than the baseline. With our KP, the performance of Two-Stream BiLSTM is consistently improved under different values of τ. In Fig. <ref>, we apply our KP to DTL on two different benchmarks, UCI_HAR and WISDM. It shows that our KP improves the performance of DTL under different τ values. AVS is proposed for regression tasks, enabling our network to predict continuous values for the RUL task. The hyper-parameter θ in AVS is used as a threshold value to select the anchors for voting. To investigate the stability of our AVS, we apply our KP to Two-Stream BiLSTM and design experiments on FD004 with different θ, which are presented in Table <ref>.
As listed in Table <ref>, our AVS with different θ values is able to consistently improve the performance of Two-Stream BiLSTM. This demonstrates that our proposed AVS is robust to the variation of θ. § CONCLUSIONS In this paper, we have proposed a new model compression paradigm, Knowledge Pruning (KP). Our KP consists of three steps: knowledge prompt set generation, knowledge anchor point production and pertinent knowledge distillation. Furthermore, since our KP is based on metric learning, its performance on regression tasks may be limited. To extend our KP to the regression task, an anchor voting scheme has been proposed. Through experiments, our KP has effectively pruned the redundant knowledge of LLMs for a specific downstream task and accurately transferred the pertinent knowledge to the target model. With our KP, the computational cost introduced by LLMs is largely reduced, and satisfactory performance is achieved. Our KP shows significant improvements on both the classification task (HAR) and the regression task (RUL), achieving state-of-the-art performance.
http://arxiv.org/abs/2406.07806v1
20240612015058
Probing the Shock Breakout Signal of SN 2024ggi from the Transformation of Early Flash Spectroscopy
[ "Jujia Zhang", "Luc Dessart", "Xiaofeng Wang", "Qian Zhai", "Yi Yang", "Liping Li", "Han Lin", "Giorgio Valerin", "Yongzhi Cai", "Zhen Guo", "Lingzhi Wang", "Zeyi Zhao", "Zhenyu Wang", "Shengyu Yan" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR" ]
0000-0002-8296-2590]Jujia ZhangE-mail:jujia@ynao.ac.cn Yunnan Observatories (YNAO), Chinese Academy of Sciences (CAS), Kunming, 650216, China International Centre of Supernovae, Yunnan Key Laboratory, Kunming 650216, China Key Laboratory for the Structure and Evolution of Celestial Objects, CAS, Kunming, 650216, China 0000-0003-0599-8407]Luc Dessart Institut d'Astrophysique de Paris, CNRS-Sorbonne Université, 98 bis boulevard Arago, F-75014 Paris, France 0000-0002-7334-2357]Xiaofeng Wang Physics Department, Tsinghua University, Beijing, 100084, China Yunnan Observatories (YNAO), Chinese Academy of Sciences (CAS), Kunming, 650216, China Key Laboratory for the Structure and Evolution of Celestial Objects, CAS, Kunming, 650216, China 0000-0002-6535-8500]Yi Yang Physics Department, Tsinghua University, Beijing, 100084, China Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA Yunnan Observatories (YNAO), Chinese Academy of Sciences (CAS), Kunming, 650216, China International Centre of Supernovae, Yunnan Key Laboratory, Kunming 650216, China Key Laboratory for the Structure and Evolution of Celestial Objects, CAS, Kunming, 650216, China Yunnan Observatories (YNAO), Chinese Academy of Sciences (CAS), Kunming, 650216, China International Centre of Supernovae, Yunnan Key Laboratory, Kunming 650216, China Key Laboratory for the Structure and Evolution of Celestial Objects, CAS, Kunming, 650216, China INAF-Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy Yunnan Observatories (YNAO), Chinese Academy of Sciences (CAS), Kunming, 650216, China International Centre of Supernovae, Yunnan Key Laboratory, Kunming 650216, China Key Laboratory for the Structure and Evolution of Celestial Objects, CAS, Kunming, 650216, China 0000-0003-0292-4832]Zhen Guo Instituto de Física y Astronomía, Universidad de Valparaíso, ave. Gran Bretaña, 1111, Casilla 5030, Valparaíso, Chile Centre for Astrophysics Research, University of Hertfordshire, Hatfield AL10 9AB, UK Millennium Institute of Astrophysics, Nuncio Monseñor Sotero Sanz 100, Of. 104, Providencia, Santiago, Chile 0000-0002-1094-3817]Lingzhi Wang Chinese Academy of Sciences South America Center for Astronomy (CASSACA), National Astronomical Observatories, CAS, Beijing, China Yunnan Observatories (YNAO), Chinese Academy of Sciences (CAS), Kunming, 650216, China International Centre of Supernovae, Yunnan Key Laboratory, Kunming 650216, China Key Laboratory for the Structure and Evolution of Celestial Objects, CAS, Kunming, 650216, China Yunnan Observatories (YNAO), Chinese Academy of Sciences (CAS), Kunming, 650216, China International Centre of Supernovae, Yunnan Key Laboratory, Kunming 650216, China Key Laboratory for the Structure and Evolution of Celestial Objects, CAS, Kunming, 650216, China Physics Department, Tsinghua University, Beijing, 100084, China § ABSTRACT We present early-time, hour-to-day cadence spectroscopy of the nearby type II supernova (SN II) 2024ggi, which was discovered at a phase when the SN shock just emerged from the red-supergiant (RSG) progenitor star. Over the first few days after the first light, SN 2024ggi exhibited prominent narrow emission lines formed through intense and persistent photoionization of the nearby circumstellar material (CSM). In the first 63 hours, spectral lines of He, C, N, and O revealed a rapid rise in ionization, as a result of the progressive sweeping-up of the CSM by the shock. 
The duration of the IIn-like spectra indicates a dense and relatively confined CSM distribution extending up to ∼ 4 × 10^14 cm. Spectral modeling reveals a CSM mass loss rate at this region exceeding 5 × 10^-3 yr^-1 is required to reproduce low-ionization emissions, which dramatically exceeds that of an RSG. Analyzing Hα emission shift implies the velocity of the unshocked outer CSM to be between 20 and 40 , matching the typical wind velocity of an RSG. The differences between the inner and outer layers of the CSM and an RSG progenitor highlight a complex mass loss history before the explosion of SN 2024ggi. § INTRODUCTION Type II supernovae (SNe), characterized by the presence of prominent hydrogen features in their spectra <cit.> are the death throes of most massive stars (e.g. red supergiant, RSG, ). A fraction of SNe II exhibits interaction signatures with circumstellar material (CSM), characterized by narrow optical emission lines with broad electron-scattering wings classified as Type IIn SNe <cit.>. Some of the diversity of SNe IIn lies in the duration of the interaction signatures, which can range from a few days (e.g., SN 2013fs, ) to several weeks (e.g., SN 1998S, ), months (e.g., SN 2010jl, ), and even a few years (e.g., SN 2015da, ). This indicates a range in spatial scales and mass for the CSM around their progenitor stars. These cataclysmic events provide a unique window into the final stages of stellar evolution, especially the mass-loss history and the environment surrounding the progenitor stars. The mass-loss rates deduced from SN observations often differ from those derived from studies of RSG in their quiescent phases, underscoring the gaps in our understanding of late-stage stellar evolution. For example, the variety of SNe II with short-lived IIn-like spectral features, as witnessed in supernovae like SN 2013fs, SN 2018zd <cit.>, and the recent SN 2023ixf (e.g., ), reveals the diversity of CSM structures and the mass-loss histories of that subset of SNe and their associated massive star progenitors. Moreover, the early IIn-like spectra resulting from ionizations of CSM by shock photons provide a way to investigate the initial shock breakout (SBO) signal of SN explosion. The earliest emission of electromagnetic radiation from an SN explosion is associated with SBO, a brief yet brilliant event signifying the transition from an opaque to a transparent state as the expanding shockwave reaches the stellar surface <cit.>. Once the shock approaches the progenitor (within an optical depth of τ∼ 10-30), radiation begins to leak from the shock, initiating the ionization of the cool atmosphere and environment of the RSG. The ionization process, occurring on timescales from seconds to tens of minutes, is marked by a UV flash <cit.>. It is followed by UV and optical emissions from the cooling envelope. However, if there is an optically thick CSM or dust shell, the SBO might occur within this shell and the shock emission will be reddened and prolonged, as seen in SN 2023ixf <cit.>. In this letter, we present the spectroscopic observations of a nearby SN II in NGC 3621, which provides another chance to detect the SBO signals. SN 2024ggi was discovered by ATLAS (Asteroid Terrestrial-impact Last Alert System, ) on April 11.14, 2024 (UTC dates are used throughout this paper) in the nearby galaxy NGC 3621, located at a distance of D = 7.0 ± 0.2 Mpc (based on the averaged Cepheid distance derived by different period-luminosity relations presented in ). 
The archival images taken with the Hubble Space Telescope about 20 years before the explosion reveal an RSG progenitor with a temperature T_⋆=3290_-27^+19 K and radius R_⋆=887_-51^+60  <cit.>. With the first light date at MJD = 60411.03 ± 0.05 determined through high-cadence photometric observations <cit.>, SN 2024ggi stands out as an SN II with extremely early photometric and spectroscopic observations <cit.>. § SPECTRAL OBSERVATION §.§ Classification Utilizing the Li-Jiang 2.4-m telescope (LJT; ) equipped with YFOSC (Yunnan Faint Object Spectrograph and Camera; ), we obtained a classification spectrum for SN 2024ggi at ∼ 13.9 hours after its explosion <cit.>. This classification spectrum was predominantly characterized by narrow emission lines of H, He, and CNO elements, as seen in Fig. <ref>. These features, also known as flash features <cit.>, are commonly observed in SNe IIn (e.g., SN 1998S, ). They are generated by recombination in the surrounding dense CSM that is photoionized by radiation from the embedded shock. Interestingly, in the initial-phase spectrum, SN 2024ggi exhibits a significant difference from other SNe II-P/CSM events like SN 2013fs, SN 2018zd, and SN 2023ixf. As presented in Fig. <ref> (a), the classification spectrum of SN 2024ggi lacks strong emission features of high ionization states. It showed lines of , , and rather than those of , and or the highly-ionized λλ 5576,5598 and λλ 4604,4620 lines that appeared in SN 2013fs. In particular, SN 2024ggi and SN 2013fs exhibit two opposite spectral line morphologies in the 4600-4700 Å region. The spectra of SN 2013fs at t≤ 10 hr (where t denotes time after the first light) show a narrow λ 4686 emission line, while the nearby λλ 4630,4641 doublets appear flatter without prominent narrow-line components. Instead, there is a small bump consisting of λλ 4604,4620. However, the classification spectrum of SN 2024ggi reveals narrow emission lines of λλ 4630,4641, while the position corresponding to λ 4686 appears flat. These two SNe are among the earliest discovered in the sample of SNe II-P with CSM. Spectroscopic data of other samples, usually obtained one day after the SBO, did not show similar phenomena, suggesting a high dependency of early-time spectra on the exact post-breakout epoch and the CSM properties. §.§ High-cadence sampling within the initial 72 hours Given the observed IIn-like features in the classification spectrum and the relatively low ionization state revealed by these features, we initiated an hour-cadence observation campaign with the LJT. A total of four spectra were acquired within 2.4 hours after the identification, which allowed us to monitor possible variations in the ionization state of this young SN II. From the second spectrum onward, SN 2024ggi exhibited narrow λ 4686 emission that was absent before. It is possible that, in the first spectrum, the CSM had not been shocked enough to generate sufficient heat for emitting a significant amount of , except for the deep CSM near the shock, where the material could be rushing. The subsequent two spectra did not show significant morphological evolution. Further analysis of the equivalent widths (EWs) of the main spectral lines, as shown in Fig. <ref>, reveals that within the 1.5 hr from t∼14.7 hr to t∼ 16.2 hr, the strength of the λ 4686 and λ 6678 lines remained almost unchanged. In contrast, the λ 5696 and λ 7065 lines gradually weakened.
On the second day after the explosion, an observation relay across different time zones was conducted with the LJT, the Telescopio Nazionale Galileo (TNG), and the Very Large Telescope Unit 1 (VLT UT-1), which allowed continuous monitoring of the rapid evolution of SN 2024ggi. In particular, to examine the structure of the narrow spectral lines, we utilized the cross-dispersion capability of YFOSC with a resolution of 3500. This resolution had proven its effectiveness in our previous study of SN 2023ixf, enabling us to discern the intricate structure of Hα and provide a more precise constraint on its broadening <cit.>. Nonetheless, in the case of SN 2024ggi, the Hα line had already undergone significant broadening in the spectra taken at t∼ 38.7 hr, with a full width at half maximum (FWHM) of ∼ 700 , exceeding the instrumental FWHM (i.e., ∼ 85 ). The rapid broadening of the Hα narrow component testifies to the rapid spectral evolution of SN 2024ggi. By t∼ 29 hr, many highly ionized spectral lines emerged in the spectrum, including , , and , and even . Due to limitations in the signal-to-noise ratio (S/N) of the spectrum, the λ 5411 line can only be marginally detected at t < 1 d, but it becomes visible at t∼ 29 hr. The most notable change is that, one day after the explosion, the emission lines of λ 7065 and λ 5696 disappeared from the spectra and were replaced by the λλ 5801, 5811, λ 7110 and λ 7122 lines. As shown in Fig. <ref>, λ 4686 continues to weaken throughout the day, while the λλ 5801, 5811 lines progressively strengthen. We notice that the narrow λ 6678 line disappeared at 29.2 hr but reappeared five hours later. The λ 5876 line is not detected, perhaps because it coincides with the D absorption of the Milky Way. At t∼ 29 hr, the spectral line flux and profile morphology in this region are unreliable due to saturation in the wavelength range from ∼4610 Å to ∼4700 Å. Therefore, we cannot confirm whether the appearance of at this time is genuine, nor can we ascertain the reliability of the intensity contrast between and . At t∼ 34 hr, the narrow lines of λ 4686 and λλ 4630,4641 have comparable intensities and thus form a double-peak profile. Such a double-peak structure has also been seen in some SNe II. For instance, it appeared in SN 2023ixf at t∼ 1.2 d, SN 1998S at t∼ 2.3 d, and SN 2018zd at t∼ 4 d. SN 2013fs may also exhibit a similar structure at t∼ 1.4 d, except that the left peak corresponds to λλ 4604,4620 instead of in its spectrum <cit.>. The doublet is also visible in the spectra of SN 2024ggi taken at t= 29.2h, 38.3h, and 38.7h. This indicates that SN 2013fs and SN 2024ggi can reach similar ionization levels, though they may have different shock emissions and CSM environments. At t> 38 hr, the doublet becomes weaker than λ 4686 in SN 2024ggi. However, the doublet remains unchanged over the following days while shows a faster decline in intensity. Overall, all narrow emission lines become less distinct after the third day, as they broaden significantly with a velocity exceeding 1000 . The disappearance of the narrow emission lines indicates that the IIn-like phase is ending and SN 2024ggi is gradually entering a new stage of evolution. The rapid spectral evolution relates in part to a temperature change. We can roughly estimate the temperature of the SN photosphere through fits to the spectral energy distribution (SED) assuming blackbody emission. In Fig.
<ref>, the temperature of SN 2024ggi is inferred to be about 13500 K during the phase from 13.9 hr to 16.2 h, and it increases sharply from ∼ 13500 K to ∼ 26000 K about half a day later (also reported by ). The rapid temperature rise is concomitant with the rapid increase in ionization, and both are attributed to the photoionizing radiation from the shock. §.§ Flash-to-Photospheric Phase Transition Since SN 2024ggi exhibited a rapid evolution, we maintained frequent monitoring until the third day, gradually tapering off the pace after that. As presented in Fig. <ref>, we can only see a few weak `narrow' emission lines, such as Hα and λλ 4634,4641 double at ∼ 3.6 d. Meanwhile, P-cygni profiles of Hβ, Hγ, and λ 5876 begin to emerge. At 3 < t < 15 d, the spectral lines of SN 2024ggi undergo a significant transition. Initially, these lines exhibited broadening due to electron scattering, formed within the unshocked CSM. However, they gradually shift to Doppler broadening as they initially formed within the fast-moving dense shell and subsequently arises also from the ejecta as the dense shell gradually becomes optically thinner. This evolution is accompanied by a noticeable blueshift in their emission peaks. Additionally, these lines begin to appear as P-Cygni profiles, with both absorption and emission components increasing in strength over time. Roughly two weeks after the explosion, SN 2024ggi developed spectral features typical of SNe IIP in both optical and NIR spectra, i.e., the appearance of broad and lines. For example, as seen in Fig.<ref>, the NIR spectrum of SN 2024ggi, obtained with the 6.5 m Magellan telescope equipped with FIRE (Folded-port InfraRed Echellette) at t∼ 14 d, reveals similar spectral features as seen in SN2017eaw <cit.>. As observed in SN 2018zd <cit.> and SN 2023ixf <cit.>, the SN-CSM interaction creates a slow evolution in spectral features after the narrow, electron-scattering broadened emission lines disappear (Fig. <ref>). During shock wave propagation, the CSM is shocked, and compressed into a dense shell, converting a fraction of kinetic energy into thermal energy and heating the CSM as well as boosting the SN luminosity. In the case of SN 2024ggi, photoionization predominantly facilitates this heating process within the first three days. Consequently, emission from the post-shock gas, such as the cold dense shell (CDS), originates from the release of shock-deposited energy (i.e., deposited at earlier times). The relatively faint spectral lines observed approximately five days later are likely attributed to the spectrum primarily forming within the CDS during this period. This weak emission is indicative of a steep density gradient of the CDS <cit.>. We note that the evolution speed of the photospheric features in SN 2024ggi is slower than that of the typical SN 1999em but still faster than in SN 2018zd and SN 2023ixf. As illustrated in Figure <ref> (b), SN 2024ggi evolves into spectra with prominent P-Cygni profiles at around t∼ 23 d, and its spectrum at t∼ 32 d shows a close resemblance to that of a regular SN IIP. Nevertheless, SN 2024ggi evolves faster in the first month, regardless of early IIn or later photospheric spectra, compared to SN 2018zd and SN 2023ixf, suggesting a more compact CSM. 
§ INSIGHTS INTO THE EARLY FLASH SPECTROSCOPY §.§ Fluctuation of Ionization States Within the initial 72 hr after the first light, we obtained a total of twelve spectra revealing rapid evolution of line strengths, in particular for lines associated with different ionization levels (e.g., vs ). During this period, the lower ionization species (such as , , and ) gradually weakened and disappeared, while the strength of emission lines from higher-ionization species (like , , and ) increased (see also Section <ref>). The evolution of these line fluxes or EWs is however more complex. The spectral lines such as λ 6678, λ 7065, and λ 5696 disappeared at t∼ 29.2 hr, while higher ionization lines, including λλ 5801,5811 and λλ 5576,5598, emerged. However, five hours later, λ 6678 reappeared, while became more elusive. In the 38.3 hr and 38.4 hr spectra, roughly four hours afterward, λ 6678 vanished once again, reappeared, while significantly diminished, and became detectable. The weakening of and the emergence of were already evident in the 29.2 hr spectrum. Considering the evolution of and , we believe the relative intensities of and observed at 29.2 hr are reliable despite the detector saturation in this wavelength region. Thus, we observed a decreasing and increasing ionization across the 29.2, 34.4, and 38.3 hr spectra. This trend is mirrored in Fig. <ref>, where a similar fluctuation in λλ 5801,5811 is apparent in t∼ 30 h spectrum. These variations suggest that although the ionization evolution initially arose and fell within the first 72 hr, there were fluctuations when the ionization reached its maximum at around 30 hr after the SBO. These fluctuations are probably related to the complicated propagation through a clumpy, and perhaps asymmetric CSM. The alteration in the ionization state indicates the complex interaction between the shockwave, the surrounding material and the high-energy radiation environment in the early stages of SN explosion. §.§ Comparison with Spectra Model To further understand the evolution of the ionization state of SN 2024ggi, we compare the observed spectra with the spectral models presented in <cit.> (referred to as D17 below). This model set successfully reproduces the observational manifestations of SNe II, specifically those featuring short-lived IIn-like spectra while taking into account the varying physical conditions of the progenitor before the explosion. The methodology adopted in D17 involves creating a comprehensive grid of models for different CSM characteristics (including radius, mass, density, density profile, velocity, and composition) based on the same progenitor and explosion parameters, thereby encompassing a wide range of the parameter space. Although none of the models in D17 can fully reproduce the entire spectral features, models like r1w5r and r1w6 provide a good reference for us to track and study the ionization state and evolution of the CSM. In comparison, we found that the r1w5r model fits the observations well, especially at t∼ 1.5 d, when it can reproduce all the observed characteristics of SN 2024ggi. Moreover, at t∼ 1.17 d, high-ionization lines such as and appeared in the model spectrum, which is comparable to the spectrum of SN 2024ggi at t∼ 1.35 d, indicating that the ionization states of SN 2024ggi is consistent with the model during this period. However, the CSM in the r1w5r model has a higher ionization state at t < 1 d than that observed in SN 2024ggi. 
For example, the r1w5r model does not show any line in the early stage, and in fact no line appears in any of the D17 models[The reason is partly that no model was computed during the earliest epochs when the first radiation from the shock was crossing the CSM (i.e., the radiative precursor). Modeling this phase is quite challenging, and there are inconsistencies in the physics addressed in D17. One such inconsistency relates to the light-travel-time effect, considered in the radiation-hydrodynamics calculations but omitted during the post-processing phase of the radiative transfer calculations for spectra. ]. In contrast, the line in the r1w5r model is much stronger than the line at the beginning of the explosion. Interestingly, the blue edge of Hα is quite similar in the t∼ 16 spectrum of SN 2024ggi and the t = 15 d r1w5r model spectrum. The only difference lies in the emission strength and the extent of the profile to the red. One reason might be that, in the model, the optical depth attenuation is much more significant, which suggests that the CDS breaks up and becomes significantly clumpy. This could be resolved if the CDS were allowed to break up, as would occur in multi-D radiation-hydrodynamics with the development of Rayleigh-Taylor instabilities. <cit.> favored the r1w6 model over the r1w5r model in their comparisons. The mass loss rates of these two models are quite similar, and their early spectra are also very similar. Both models exhibit a higher ionization in the early stages than observed in SN 2024ggi. The early line in the r1w6 model is stronger than that in the r1w5r model, leading us to believe that the r1w5r model performs better in this aspect. Additionally, since the spectra of <cit.> do not extend beyond ten days after the explosion, they did not observe the distinct P-Cygni features that appeared later in SN 2024ggi. It is worth noting that the early-stage spectra of the r1w6 model (up to 14 days) also do not show this characteristic, making it inferior to the r1w5r model. The r1w5r model is suitable for SN 2024ggi, as it can roughly reproduce the main observed IIn-like spectral features. However, the issue lies in the fact that the early ionization state of the model is too high. The CSM properties of SN 2024ggi might be intermediate between r1w5r and r1w6. There may also be additional features ignored in D17, such as asymmetry, the break-up of the CDS, and clumping in the CSM; the radiative transfer modeling could also be improved. Therefore, the initial spectral evolution of SN 2024ggi provides a new opportunity for future developments of spectroscopic models. §.§ Evolution of Hydrogen emission In the early phase, when the SN photosphere is within the unshocked ionized CSM, the spectra are characterized by symmetric emission lines with narrow cores and extended, electron-scattering broadened wings. Fig. <ref> shows the double-component fitting of the early Hα emission of SN 2024ggi. Note that the broader wings are referred to as mid-width components to distinguish them from the even wider emission lines in the later P-Cygni profiles, which are Doppler-broadened in the fast-expanding ejecta. Based on the two-component fitting of Fig. <ref>, we obtain the parameters and evolution of each component. To visually compare this evolution in Hα, we selected spectra taken by YFOSC+G14 in the first three days, as presented in the top panel of Figure <ref>, to minimize the influence of instrumental effects.
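For illustration, the snippet below sketches a double-component fit of the kind described above, assuming a narrow Gaussian core plus a broader Gaussian for the electron-scattering wings that share the same centroid; the actual functional forms, continuum treatment, and fitting code used for SN 2024ggi may differ, and the spectrum here is synthetic.

```python
# Sketch of a two-component (narrow core + mid-width wing) emission-line fit,
# using a synthetic continuum-subtracted spectrum around Halpha as a placeholder.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_component(x, a1, mu, s1, a2, s2):
    # Narrow core and mid-width wing component sharing the same centroid (a simplification).
    return gaussian(x, a1, mu, s1) + gaussian(x, a2, mu, s2)

def fwhm_kms(sigma_aa, mu_aa):
    # Convert a Gaussian sigma in Angstrom into a FWHM in km/s.
    return 2.3548 * abs(sigma_aa) / mu_aa * 2.998e5

# Synthetic example around Halpha (6563 Angstrom) just to exercise the fit.
wave = np.linspace(6530.0, 6600.0, 400)
flux = two_component(wave, 1.0, 6563.0, 1.0, 0.3, 8.0)
flux += np.random.default_rng(0).normal(0.0, 0.01, wave.size)

p0 = [1.0, 6563.0, 1.5, 0.2, 10.0]
popt, _ = curve_fit(two_component, wave, flux, p0=p0)
print("narrow FWHM    [km/s]:", fwhm_kms(popt[2], popt[1]))
print("mid-width FWHM [km/s]:", fwhm_kms(popt[4], popt[1]))
```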
All of these spectra are corrected for the redshift and rotation of the host galaxy; see more details in Section <ref>. Given the low spectral resolution, an instrumental correction is necessary to accurately determine the FWHM of the Hα line via FWHM_ cor = (FWHM^2_ obs - FWHM^2_ inst)^1/2, where the instrumental FWHM is estimated from the λ 6300.3 sky line. This method has proven effective in the study of SN 2023ixf, where the value we obtained from mid-resolution spectra <cit.> matches well with that from high-resolution spectra <cit.>. As shown in the middle panel of Fig. <ref>, the corrected FWHM of the narrow component broadens from about 50 to approximately 200 within hours on the first day. Initially, FWHM_ obs is close to the instrumental FWHM_ inst, making the correction potentially inaccurate. We adopt 50 as an upper limit of the FWHM, and the actual FWHM is likely narrower at t < 14 hr. Despite a measurement uncertainty of around 50% in the third spectrum, there is a discernible broadening of the spectral lines, indicating the onset of electron scattering effects as well as the potential radiative acceleration of the unshocked CSM (D17). At this point, using the FWHM of the narrow lines to constrain the original CSM velocity becomes unfeasible. <cit.> measured an FWHM ≈ 55 for Hα at t∼ 23.6 hr (the epoch is calculated relative to the first light adopted in this letter) with high-resolution spectroscopy. They found that this line broadened to 61 seven hours later. Their measurements indicate that, although the low-resolution results are not precise enough, they can provide certain constraints at the earliest hours. At t ∼ 38.7 hr, a mid-resolution spectrum revealed that the FWHM of Hα had increased significantly, rendering instrumental broadening negligible. Remarkably, the narrow spectral lines of SN 2024ggi expanded to over 700 within 48 hr. In contrast, the Hα FWHM of SN 2023ixf over a similar timescale after the first light was only 55 (t∼ 43 hr, ), suggesting that the CSM of SN 2024ggi is less extended than that of SN 2023ixf. Although we cannot use the FWHM to limit the CSM velocity of SN 2024ggi, a noticeable blueshift was observed in the Hα emission. This shift is evident in the top panel of Figure <ref>, with precise measurements detailed in the middle panel. After some velocity adjustments and a refined wavelength calibration using the λ 6300.3 sky line, an initial blueshift velocity of -12 ± 20 was measured at t∼ 13.9 hr. The resolution of the second spectrum at t∼ 14.7 hr is slightly higher, and the measured velocity is -20 ± 10 . Considering the measurement errors in galactic rotation, the lower limit of the stellar wind velocity observed at the outer layer of the CSM around SN 2024ggi is 20 ± 15 . We averaged the measurements daily to reduce uncertainties, yielding mean velocities of 42 ± 29 , 99 ± 23 , and 133 ± 17 in the first three days, respectively. These measurements reveal the acceleration of the unshocked CSM by the radiation from the shock. The acceleration implies that the earlier observations more faithfully reflect the motion of the CSM before the explosion, representing the progenitor's stellar wind speed in its final phase. Thus, the inferred CSM velocity of SN 2024ggi does not exceed 40 , consistent with the high-dispersion observation results (e.g., 37 , ). Given the lower limit estimated before, the stellar wind velocity derived from our initial observations suggests that the progenitor wind velocity of SN 2024ggi is between 20 and 40 .
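A minimal helper for the instrumental-broadening correction quoted above is sketched below; the intrinsic FWHM is recovered by subtracting the instrumental FWHM in quadrature, and the example numbers are placeholders rather than measurements.

```python
# Quadrature correction of an observed line width for instrumental broadening.
import math

def corrected_fwhm(fwhm_obs, fwhm_inst):
    if fwhm_obs <= fwhm_inst:
        return 0.0  # observed width is consistent with being unresolved
    return math.sqrt(fwhm_obs**2 - fwhm_inst**2)

print(corrected_fwhm(700.0, 85.0))  # ~694.8, i.e. the instrumental term is negligible here
```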
§ DISCUSSION The initial spectral transformation and ionization processes observed in this SN provide a unique and valuable perspective into the final moments of a massive star. The rapid transition from IIn-like spectra to the photospheric phase in SN 2024ggi suggests that the fast-expanding ejecta quickly engulfed the CSM, and the CSM density decreased sharply at large distances. Based on the preceding analysis, we have sketched the CSM of SN 2024ggi. The blue-shifted Hα emission indicates a wind velocity of the outer CSM at 20 < v < 40 . The comparison with the D17 model suggests a mass loss rate at the inner CSM on the order of 5 × 10^-3 to 10^-2 yr^-1 to produce the observed IIn-like spectral features. The duration and strength of the narrow emission lines depend on the radius and density of the CSM, providing valuable clues for the mass-loss rate of the progenitor. The narrow emission lines with electron-scattering broadened wings of SN 2024ggi vanished approximately six days after the explosion. Based on subsequent measurements of the hydrogen P-Cygni absorption component, the maximum ejecta velocity of SN 2024ggi is 8000 . Adopting this value, we can confidently infer that the distribution range of CSM does not exceed 4 × 10^14 cm. This aligns with the photosphere radius inferred from the bolometric luminosity on the sixth day, as seen in Fig. <ref>. Considering the upper limit of the stellar wind speed of 40 , it would take at least three years for the stellar wind to reach that distance, but much longer if we adopt a slowly accelerating wind. Comparison of SN 2024ggi with other SNe II exhibiting short-lived IIn-like spectra highlights the diversity in CSM properties. SN 2024ggi stands out for the fast transition from the IIn phase to the phase when spectral lines appear broad and dominated by Doppler broadening, which is consistent with a compact surrounding CSM. For example, SN 2024ggi has a more compact CSM than that of SN 2023ixf (e.g., with a CSM distribution region of 7 × 10^14, ) and SN 2018zd (e.g., with a CSM distribution region of 10^15, ). The analysis of the Hα emission line reveals significant broadening within 63 hours, the FWHM increases from ∼ 50 to ∼ 1500 which is due to the radiative acceleration of the CSM. This is why the narrow component quickly disappears. The early spectral (and photometric, ) evolution of SN2024ggi indicates that we caught the SN during the shock breakout. In particular, the four spectra taken between 13.9 hr and 16.2 hr showed spectra with lines of a low ionization level. Taking into account the influence of spectral resolution, it can be assumed that the spectral lines and blackbody temperature of these four spectra remained almost unchanged. Combined with the significant increase in ionization level observed in the 29.2 hr spectrum, it can be inferred that the SBO occurred during this period. The photosphere radius at this time was approximately 10^14 cm. Subsequently, fluctuations in high ionization lines were observed at 29.2 hr, 34.4 hr, and 38.2 hr, which may be related to the end of SBO. The corresponding photosphere radius at this time was 1.5 × 10^14 cm (Fig. <ref>). In other words, the SBO of SN 2024ggi, as observed from the IIn-like spectrum, occurred within the region between 1 × 10^14 to 1.5 × 10^14 cm. 
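As a back-of-the-envelope check of the numbers quoted above, the sketch below recomputes the CSM extent reached by ejecta moving at ∼8000 km/s over the ∼6 days during which the narrow lines persisted, and the time a 40 km/s wind would need to travel that far; the constants, rounding, and variable names are mine.

```python
# Quick arithmetic check of the CSM extent and the wind travel time quoted above.
DAY_S = 86400.0
YEAR_S = 3.15576e7   # Julian year in seconds
KM_TO_CM = 1.0e5

v_ej = 8000.0 * KM_TO_CM       # maximum ejecta velocity, cm/s
r_csm = v_ej * 6.0 * DAY_S     # ~4.1e14 cm, close to the ~4e14 cm quoted above
v_wind = 40.0 * KM_TO_CM       # upper limit on the wind speed, cm/s
t_wind = r_csm / v_wind / YEAR_S

print(f"CSM extent ~ {r_csm:.2e} cm, wind travel time ~ {t_wind:.1f} yr")  # ~3.3 yr
```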
In summary, the study of SN 2024ggi will contribute to our understanding of the late stages of stellar evolution, the pivotal role of CSM in shaping the SN observations, the intriguing diversity of SNe II, and the process of shock wave propagation in CSM. This work is supported by the National Key R&D Program of China with No. 2021YFA1600404, the National Natural Science Foundation of China (12173082), the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A12, the Yunnan Province Foundation (202201AT070069), the Top-notch Young Talents Program of Yunnan Province, the Light of West China Program provided by the Chinese Academy of Sciences, the International Centre of Supernovae, Yunnan Key Laboratory (No. 202302AN360001). X.Wang is supported by the National Natural Science Foundation of China (NSFC grants 12288102 and 1203300), and the Tencent Xplorer Prize. Y.-Z. Cai is supported by the National Natural Science Foundation of China (NSFC, Grant No. 12303054), and the Yunnan Fundamental Research Projects (Grant No. 202401AU070063). ZG is supported by the ANID FONDECYT Postdoctoral program No. 3220029. This work was funded by ANID, Millennium Science Initiative, AIM23-0001. This work has made use of the University of Hertfordshire's high-performance computing facility (<http://uhhpc.herts.ac.uk>). LW is sponsored (in part) by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. We acknowledge the support of the staff of the LJT, VLT, and TNG. Funding for the LJT has been provided by the CAS and the People's Government of Yunnan Province. The LJT is jointly operated and administrated by YNAO and the Center for Astronomical Mega-Science, CAS. LJT+YFOSC, TNG + LRB,VLT UT1 + FORS2, Magallen + FIRE Pyraf, Numpy, matplotlib, astropy natexlab#1#1 [Chen et al.(2024)Chen, Kumar, Er, Guo, Yang, Lin, Fang, Du, Liu, Zhao, Zhang, Bao, Zou, Pan, Wang, Zhu, Chatterjee, Liu, Liu, Lagioia, Rangwal, Zhong, Zhang, Lian, Cai, Zhang, & Liu]2024arXiv240507964C Chen, X., Kumar, B., Er, X., et al. 2024, arXiv e-prints, arXiv:2405.07964, 10.48550/arXiv.2405.07964 [Chugai(2001)]2001MNRAS.326.1448C Chugai, N. N. 2001, , 326, 1448, 10.1111/j.1365-2966.2001.04717.x [Dessart & Hillier(2006)]2006A A...447..691D Dessart, L., & Hillier, D. J. 2006, , 447, 691, 10.1051/0004-6361:20054044 [Dessart et al.(2016)Dessart, Hillier, Audit, Livne, & Waldman]2016MNRAS.458.2094D Dessart, L., Hillier, D. J., Audit, E., Livne, E., & Waldman, R. 2016, , 458, 2094, 10.1093/mnras/stw336 [Dessart et al.(2017)Dessart, Hillier, Yoon, Waldman, & Livne]2017A A...603A..51D Dessart, L., Hillier, D. J., Yoon, S.-C., Waldman, R., & Livne, E. 2017, , 603, A51, 10.1051/0004-6361/201730873 [Fan et al.(2015)Fan, Bai, Zhang, Wang, Chang, Xin, & Zhang]2015RAA....15..918F Fan, Y.-F., Bai, J.-M., Zhang, J.-J., et al. 2015, Research in Astronomy and Astrophysics, 15, 918, 10.1088/1674-4527/15/6/014 [Fassia et al.(2001)Fassia, Meikle, Chugai, Geballe, Lundqvist, Walton, Pollacco, Veilleux, Wright, Pettini, Kerr, Puchnarewicz, Puxley, Irwin, Packham, Smartt, & Harmer]2001MNRAS.325..907F Fassia, A., Meikle, W. P. S., Chugai, N., et al. 2001, , 325, 907, 10.1046/j.1365-8711.2001.04282.x [Filippenko(1997)]1997ARA A..35..309F Filippenko, A. V. 
1997, , 35, 309, 10.1146/annurev.astro.35.1.309 [Fransson et al.(2014)Fransson, Ergon, Challis, Chevalier, France, Kirshner, Marion, Milisavljevic, Smith, Bufano, Friedman, Kangas, Larsson, Mattila, Benetti, Chornock, Czekala, Soderberg, & Sollerman]2014ApJ...797..118F Fransson, C., Ergon, M., Challis, P. J., et al. 2014, , 797, 118, 10.1088/0004-637X/797/2/118 [Gagné et al.(2015)Gagné, Simcoe, Faherty, & Lambrides]Gagn2015FireHose Gagné, J., Simcoe, R. A., Faherty, J. K., & Lambrides, E. 2015. <https://api.semanticscholar.org/CorpusID:132524281> [Gal-Yam(2017)]2017hsn..book..195G Gal-Yam, A. 2017, in Handbook of Supernovae, ed. A. W. Alsabti & P. Murdin, 195, 10.1007/978-3-319-21846-5_35 [Gal-Yam et al.(2014)Gal-Yam, Arcavi, Ofek, Ben-Ami, Cenko, Kasliwal, Cao, Yaron, Tal, Silverman, Horesh, De Cia, Taddia, Sollerman, Perley, Vreeswijk, Kulkarni, Nugent, Filippenko, & Wheeler]2014Natur.509..471G Gal-Yam, A., Arcavi, I., Ofek, E. O., et al. 2014, , 509, 471, 10.1038/nature13304 [Hamuy et al.(2001)Hamuy, Pinto, Maza, Suntzeff, Phillips, Eastman, Smith, Corbally, Burstein, Li, Ivanov, Moro-Martin, Strolger, de Souza, dos Anjos, Green, Pickering, González, Antezana, Wischnjewsky, Galaz, Roth, Persson, & Schommer]2001ApJ...558..615H Hamuy, M., Pinto, P. A., Maza, J., et al. 2001, , 558, 615, 10.1086/322450 [Heger et al.(2003)Heger, Fryer, Woosley, Langer, & Hartmann]2003ApJ...591..288H Heger, A., Fryer, C. L., Woosley, S. E., Langer, N., & Hartmann, D. H. 2003, , 591, 288, 10.1086/375341 [Hiramatsu et al.(2021)Hiramatsu, Howell, Van Dyk, Goldberg, Maeda, Moriya, Tominaga, Nomoto, Hosseinzadeh, Arcavi, McCully, Burke, Bostroem, Valenti, Dong, Brown, Andrews, Bilinski, Williams, Smith, Smith, Sand, Anand, Xu, Filippenko, Bersten, Folatelli, Kelly, Noguchi, & Itagaki]2021NatAs...5..903H Hiramatsu, D., Howell, D. A., Van Dyk, S. D., et al. 2021, Nature Astronomy, 5, 903, 10.1038/s41550-021-01384-2 [Hiramatsu et al.(2023)Hiramatsu, Tsuna, Berger, Itagaki, Goldberg, Gomez, Kishalay, Hosseinzadeh, Bostroem, Brown, Arcavi, Bieryla, Blanchard, Esquerdo, Farah, Howell, Matsumoto, McCully, Newsome, Gonzalez, Pellegrino, Rhee, Terreran, Vinkó, & Wheeler]Hiramatsu2023ApJ...955L...8H Hiramatsu, D., Tsuna, D., Berger, E., et al. 2023, , 955, L8, 10.3847/2041-8213/acf299 [Jacobson-Galán et al.(2023)Jacobson-Galán, Dessart, Margutti, Chornock, Foley, Kilpatrick, Jones, Taggart, Angus, Bhattacharjee, Braff, Brethauer, Burgasser, Cao, Carlile, Chambers, Coulter, Dominguez-Ruiz, Dickinson, de Boer, Gagliano, Gall, Gao, Gates, Gomez, Guolo, Halford, Hjorth, Huber, Johnson, Karpoor, Laskar, LeBaron, Li, Lin, Loch, Lynam, Magnier, Maloney, Matthews, McDonald, Miao, Milisavljevic, Pan, Pradyumna, Ransome, Rees, Rest, Rojas-Bravo, Sandford, Ascencio, Sanjaripour, Savino, Sears, Sharei, Smartt, Softich, Theissen, Tinyanont, Tohfa, Villar, Wang, Wainscoat, Westerling, Wiston, Wozniak, Yadavalli, & Zenati]2023ApJ...954L..42J Jacobson-Galán, W. V., Dessart, L., Margutti, R., et al. 2023, , 954, L42, 10.3847/2041-8213/acf2ec [Jacobson-Galán et al.(2024)Jacobson-Galán, Davis, Kilpatrick, Dessart, Margutti, Chornock, Foley, Arunachalam, Auchettl, Bom, Cartier, Coulter, Dimitriadis, Dickinson, Drout, Gagliano, Gall, Garretson, Izzo, Jones, LeBaron, Miao, Milisavljevic, Pan, Rest, Rojas-Bravo, Santos, Sears, Subrayan, Taggart, & Tinyanont]2024arXiv240419006J Jacobson-Galán, W. V., Davis, K. W., Kilpatrick, C. D., et al. 
2024, arXiv e-prints, arXiv:2404.19006, 10.48550/arXiv.2404.19006 [Kanbur et al.(2003)Kanbur, Ngeow, Nikolaev, Tanvir, & Hendry]kanbur2003extra Kanbur, S., Ngeow, C., Nikolaev, S., Tanvir, N., & Hendry, M. 2003, A & A, 411, 361, 10.1051/0004-6361:20031373 [Leonard et al.(2000)Leonard, Filippenko, Barth, & Matheson]2000ApJ...536..239L Leonard, D. C., Filippenko, A. V., Barth, A. J., & Matheson, T. 2000, , 536, 239, 10.1086/308910 [Leonard et al.(2002)Leonard, Filippenko, Gates, Li, Eastman, Barth, Bus, Chornock, Coil, Frink, Grady, Harris, Malkan, Matheson, Quirrenbach, & Treffers]2002PASP..114...35L Leonard, D. C., Filippenko, A. V., Gates, E. L., et al. 2002, , 114, 35, 10.1086/324785 [Li et al.(2024)Li, Hu, Li, Yang, Wang, Yan, Hu, Zhang, Mao, Riise, Gao, Sun, Liu, Xiong, Wang, Mo, Iskandar, Xi, Xiang, Wang, Sun, Zhang, Chen, Lin, Guo, Liu, Cai, Zhou, Zhao, Chen, Zheng, Li, Zhang, Xu, Lyu, Castro-Tirado, Chufarin, Potapov, Ionov, Korotkiy, Nazarov, Sokolovsky, Hamann, & Herman]2024Natur.627..754L Li, G., Hu, M., Li, W., et al. 2024, , 627, 754, 10.1038/s41586-023-06843-6 [Niemela et al.(1985)Niemela, Ruiz, & Phillips]1985ApJ...289...52N Niemela, V. S., Ruiz, M. T., & Phillips, M. M. 1985, , 289, 52, 10.1086/162863 [Pessi et al.(2024)Pessi, Cartier, Hueichapan, de Brito Silva, Prieto, Muñoz, Medina, & Diaz]2024arXiv240502274P Pessi, T., Cartier, R., Hueichapan, E., et al. 2024, arXiv e-prints, arXiv:2405.02274, 10.48550/arXiv.2405.02274 [Poznanski et al.(2012)Poznanski, Prochaska, & Bloom]2012MNRAS.426.1465P Poznanski, D., Prochaska, J. X., & Bloom, J. S. 2012, , 426, 1465, 10.1111/j.1365-2966.2012.21796.x [Schlafly & Finkbeiner(2011)]2011ApJ...737..103S Schlafly, E. F., & Finkbeiner, D. P. 2011, , 737, 103, 10.1088/0004-637X/737/2/103 [Schlegel(1990)]1990MNRAS.244..269S Schlegel, E. M. 1990, , 244, 269 [Shivvers et al.(2015)Shivvers, Groh, Mauerhan, Fox, Leonard, & Filippenko]2015ApJ...806..213S Shivvers, I., Groh, J. H., Mauerhan, J. C., et al. 2015, , 806, 213, 10.1088/0004-637X/806/2/213 [Shrestha et al.(2024)Shrestha, Bostroem, Sand, Hosseinzadeh, Andrews, Dong, Hoang, Janzen, Pearson, Jencson, Lundquist, Mehta, Ravi, Meza Retamal, Valenti, Brown, Jha, Macrie, Hsu, Farah, Howell, McCully, Newsome, Padilla Gonzalez, Pellegrino, Terreran, Kwok, Smith, Schwab, Martas, Munoz, Medina, Li, Diaz, Hiramatsu, Tucker, Wheeler, Wang, Zhai, Zhang, Gangopadhyay, Yang, & Gutierez]2024arXiv240518490S Shrestha, M., Bostroem, K. A., Sand, D. J., et al. 2024, arXiv e-prints, arXiv:2405.18490. 2405.18490 [Smith et al.(2023)Smith, Pearson, Sand, Ilyin, Bostroem, Hosseinzadeh, & Shrestha]2023ApJ...956...46S Smith, N., Pearson, J., Sand, D. J., et al. 2023, , 956, 46, 10.3847/1538-4357/acf366 [Szalai et al.(2019)Szalai, Vinkó, Könyves-Tóth, Nagy, Bostroem, Sárneczky, Brown, Pejcha, Bódi, Cseh, Csörnyei, Dencs, Hanyecz, Ignácz, Kalup, Kriskovics, Ordasi, Pál, Seli, Sódor, Szakáts, Vida, Zsidi, Konkoly Team, Arcavi, Ashall, Burke, Galbany, Hiramatsu, Hosseinzadeh, Hsiao, Howell, McCully, Moran, Rho, Sand, Shahbandeh, Valenti, Wang, Wheeler, & Supernova Project]2019ApJ...876...19S Szalai, T., Vinkó, J., Könyves-Tóth, R., et al. 
2019, , 876, 19, 10.3847/1538-4357/ab12d0 [Tartaglia et al.(2020)Tartaglia, Pastorello, Sollerman, Fransson, Mattila, Fraser, Taddia, Tomasella, Turatto, Morales-Garoffolo, Elias-Rosa, Lundqvist, Harmanen, Reynolds, Cappellaro, Barbarino, Nyholm, Kool, Ofek, Gao, Jin, Tan, Sand, Ciabattari, Wang, Zhang, Huang, Li, Mo, Rui, Xiang, Zhang, Hosseinzadeh, Howell, McCully, Valenti, Benetti, Callis, Carracedo, Fremling, Kangas, Rubin, Somero, & Terreran]2020A A...635A..39T Tartaglia, L., Pastorello, A., Sollerman, J., et al. 2020, , 635, A39, 10.1051/0004-6361/201936553 [Tonry et al.(2024)Tonry, Denneau, Weiland, Lawrence, Siverd, Erasmus, Koorts, Jordan, Suc, Smartt, Smith, Young, Nicholl, Fulton, McCollum, Moore, Weston, Sheng, Ramsden, Angus, Aamer, Shingles, Srivastav, Gillanders, Rhodes, Andersson, Stevance, Rest, Chen, Stubbs, & Sommer]2024TNSTR1020....1T Tonry, J., Denneau, L., Weiland, H., et al. 2024, Transient Name Server Discovery Report, 2024-1020, 1 [Wang et al.(2019)Wang, Bai, Fan, Mao, Chang, Xin, Zhang, Lun, Wang, Zhang, Ying, Lu, Wang, Ji, Xiong, Yu, Ding, Ye, Xing, Yi, Xu, Zheng, Feng, He, Wang, Liu, Chen, Xu, Qin, Zhang, Tan, Li, Lou, Li, & Liu]2019RAA....19..149W Wang, C.-J., Bai, J.-M., Fan, Y.-F., et al. 2019, Research in Astronomy and Astrophysics, 19, 149, 10.1088/1674-4527/19/10/149 [Waxman & Katz(2017)]2017hsn..book..967W Waxman, E., & Katz, B. 2017, in Handbook of Supernovae, ed. A. W. Alsabti & P. Murdin, 967, 10.1007/978-3-319-21846-5_33 [Xiang et al.(2024)Xiang, Mo, Wang, Wang, Zhang, Lin, Chen, Song, Liu, Wang, & Li]2024arXiv240507699X Xiang, D. F., Mo, J., Wang, X. F., et al. 2024, arXiv e-prints, arXiv:2405.07699, 10.48550/arXiv.2405.07699 [Yan et al.(2024)Yan, Wang, Song, Wu, Li, & Li]24ggiLCEP Yan, S., Wang, X., Song, C., et al. 2024, In Prep., xx [Yaron et al.(2017)Yaron, Perley, Gal-Yam, Groh, Horesh, Ofek, Kulkarni, Sollerman, Fransson, Rubin, Szabo, Sapir, Taddia, Cenko, Valenti, Arcavi, Howell, Kasliwal, Vreeswijk, Khazov, Fox, Cao, Gnat, Kelly, Nugent, Filippenko, Laher, Wozniak, Lee, Rebbapragada, Maguire, Sullivan, & Soumagnac]2017NatPh..13..510Y Yaron, O., Perley, D. A., Gal-Yam, A., et al. 2017, Nature Physics, 13, 510, 10.1038/nphys4025 [Zhai et al.(2024)Zhai, Li, Wang, Zhang, & Wang]2024TNSAN.104....1Z Zhai, Q., Li, L., Wang, Z., Zhang, J., & Wang, X. 2024, Transient Name Server AstroNote, 104, 1 [Zhang et al.(2020)Zhang, Wang, József, Zhai, Zhang, Filippenko, Brink, Zheng, Wyrzykowski, Mikołajczyk, Huang, Rui, Mo, Sai, Zhang, Wang, DerKacy, Baron, Sárneczky, Bódi, Csörnyei, Hanyecz, Ignácz, Kalup, Kriskovics, Könyves-Tóth, Ordasi, Pál, Sódor, Szakáts, Vida, & Zsidi]2020MNRAS.498...84Z Zhang, J., Wang, X., József, V., et al. 2020, , 498, 84, 10.1093/mnras/staa2273 [Zhang et al.(2023)Zhang, Lin, Wang, Zhao, Li, Liu, Yan, Xiang, Wang, & Bai]2023SciBu..68.2548Z Zhang, J., Lin, H., Wang, X., et al. 2023, Science Bulletin, 68, 2548, 10.1016/j.scib.2023.09.015 [Zhang et al.(2012)Zhang, Wang, Wu, Chen, Chen, Liu, Huang, Liang, Zhao, Lin, Wang, Dennefeld, Zhang, Zhai, Wu, Fan, Zou, Zhou, & Ma]2012AJ....144..131Z Zhang, T., Wang, X., Wu, C., et al. 
2012, , 144, 131, 10.1088/0004-6256/144/5/131 [Zimmerman et al.(2024)Zimmerman, Irani, Chen, Gal-Yam, Schulze, Perley, Sollerman, Filippenko, Shenar, Yaron, Shahaf, Bruch, Ofek, De Cia, Brink, Yang, Vasylyev, Ben Ami, Aubert, Badash, Bloom, Brown, De, Dimitriadis, Fransson, Fremling, Hinds, Horesh, Johansson, Kasliwal, Kulkarni, Kushnir, Martin, Matuzewski, McGurk, Miller, Morag, Neil, Nugent, Post, Prusinski, Qin, Raichoor, Riddle, Rowe, Rusholme, Sfaradi, Sjoberg, Soumagnac, Stein, Strotjohann, Terwel, Wasserman, Wise, Wold, Yan, & Zhang]2024Natur.627..759Z Zimmerman, E. A., Irani, I., Chen, P., et al. 2024, , 627, 759, 10.1038/s41586-024-07116-6 § APPENDIX §.§ Data reduction Fig.<ref> displays the spectra of SN 2024ggi obtained by LJT, TNG and VLT UT-1, with further specifics outlined in Table <ref>. All of these spectra are produced in the standard way in IRAF, including precise wavelength and flux calibration, and have been corrected for telluric absorption and redshift. The wavelength was double-checked by the skylines (e.g., λ 5577.3, λ 6300.3, and λ 6363.8). The flux of spectra was double-checked by the SED of ugriz- band photometry, as seen in Fig. <ref>. During the spectroscopic monitoring of LJT, we got high cadence photometry quasi-simultaneously. The flux of VLT and TNG are checked with the SED interpreted by the photometry of LJT and BOOTES (Burst Observer and Optical Transient Exploring System) <cit.>. We got one near-infrared spectrum from the Magellan telescope with FIRE (Folded-port InfraRed Echellette) on Apr. 24, 2024 (t∼ 14 d), which was reduced with FIREHOST V2.0 pipeline <cit.>. §.§ Na ID absorption and Extinctions We observed three sets of absorption lines in the mid-resolution spectrum of SN 2024ggi. Based on their wavelength relationships, we can infer that these three sets of lines are Na iD absorption lines from different redshifts, indicating dust extinction in the line of sight direction of SN 2024ggi, as seen in Fig.<ref>. The first set at z∼0.000036 is the absorption from the Milky Way. Based on the redshift, z∼0.00039, the second set of absorption lines may originate from a molecular cloud within the Milky Way at a recession velocity of approximately 120 . Given the redshift of NGC 3621 (z = 0.002435 ± 0.000007, from NASA/IPAC Extragalactic Database), the third set of D line at z∼0.002235 should be from the host galaxy with a rotation velocity at -60 ± 10 , where the velocity error is derived by the estimation of the skyline λ5577.3. The equivalent width (EW) of interstellar Na i D1 and D2 lines can be used to estimate the dust extinctions, e.g., the empirical relations in <cit.>. We derived the E(B-V) of SN 2024ggi depending on the three sets of Na iD absorption, as listed in Table <ref>. The estimations from the same set of D1 and D2 are averaged. Based on this, we derived the Galactic extinction is E(B-V)_ MW = 0.048± 0.009 mag, which is close to the result of <cit.> (i.e., E(B-V)_ MW = 0.071 mag). The extinction of the host galaxy and the intermediate cloud are E(B-V)_ Host = 0.063± 0.005 mag and E(B-V)_ IMC = 0.050± 0.009 mag, respectively. Adopted the Galactic extinction of <cit.> and our host and intermediate cloud estimates, the total extinction of SN 2024ggi is E(B-V) = 0.18± 0.01 mag, which is adopted in the related calculation of this paper. 
This measurement is consistent with the extinction results obtained by <cit.>, i.e., E(B-V) = 0.16± 0.02 mag, and <cit.>, i.e., E(B-V) = 0.15± 0.02 mag, through high-dispersion spectroscopic observations of the Na i D lines.
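As a minimal arithmetic check (not part of the original analysis), the adopted total reddening follows from summing the Galactic value taken from the cited dust map with the host and intermediate-cloud components estimated above, the quoted uncertainties of the latter two being combined in quadrature:

```python
import math

# E(B-V) components in mag, as quoted in the text.
ebv_mw   = 0.071                 # Galactic value adopted from the cited dust map
ebv_imc  = (0.050, 0.009)        # intermediate molecular cloud (value, error)
ebv_host = (0.063, 0.005)        # host galaxy (value, error)

total = ebv_mw + ebv_imc[0] + ebv_host[0]
err = math.sqrt(ebv_imc[1]**2 + ebv_host[1]**2)
print(f"E(B-V)_total = {total:.2f} +/- {err:.2f} mag")   # 0.18 +/- 0.01 mag
```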
http://arxiv.org/abs/2406.09375v1
20240613175347
Learning conditional distributions on continuous spaces
[ "Cyril Bénézet", "Ziteng Cheng", "Sebastian Jaimungal" ]
stat.ML
[ "stat.ML", "cs.LG", "math.ST", "stat.TH" ]
Oblivious subspace embeddings for compressed Tucker decompositions Linglong Kong June 17, 2024 ================================================================== § ABSTRACT We investigate sample-based learning of conditional distributions on multi-dimensional unit boxes, allowing for different dimensions of the feature and target spaces. Our approach involves clustering data near varying query points in the feature space to create empirical measures in the target space. We employ two distinct clustering schemes: one based on a fixed-radius ball and the other on nearest neighbors. We establish upper bounds for the convergence rates of both methods and, from these bounds, deduce optimal configurations for the radius and the number of neighbors. We propose to incorporate the nearest neighbors method into neural network training, as our empirical analysis indicates it has better performance in practice. For efficiency, our training process utilizes approximate nearest neighbors search with random binary space partitioning. Additionally, we employ the Sinkhorn algorithm and a sparsity-enforced transport plan. Our empirical findings demonstrate that, with a suitably designed structure, the neural network has the ability to adapt to a suitable level of Lipschitz continuity locally. For reproducibility, our code is available at <https://github.com/zcheng-a/LCD_kNN>. § INTRODUCTION Learning the conditional distribution is a crucial aspect of many decision-making scenarios. While this learning task is generally challenging, it presents unique complexities when explored in a continuous space setting. Below, we present a classic example (cf. <cit.>) that highlights this core challenge. For simplicity, we suppose the following model Y = 12 X + 12 U, where the feature variable X and the noise U are independent ([0,1]), and Y is the target variable. Upon collecting a finite number of independent samples =(X_m,Y_m)_m=1^M, we aim to estimate the conditional distribution of Y given X. Throughout, we treat this conditional distribution as a measure-valued function of x, denoted by P_x. A naive approach is to first form an empirical joint measure ψ̂:= 1/M∑_m=1^M δ_(X_m, Y_m), where δ stands for the Dirac meaasure, and then use the conditional distribution induced from ψ̂ as an estimator. As the marginal distribution of X is continuous, with probability 1 (as (X_m = X_m')=0 for all m ≠ m'), we have that[In accordance to the model, we set the conditional distribution to ([0,1]) at points where it is not well-defined.] P_x = δ_Y_m, x=X_m for some m, ([0,1]), otherwise. Regardless of the sample size M, P_x fails to approximate the true conditional distribution, P_x = ([x,x+12]), x∈[0,1]. Despite the well-known convergence of the (joint) empirical measure to the true distribution <cit.>, the resulting conditional distribution often fails to provide an accurate approximation of the true distribution. This discrepancy could be due to the fact that calculating conditional distribution is an inherently unbounded operation. As a remedy, clustering is a widely employed technique. Specifically, given a query point x in the feature space, we identify samples where X_m is close to x and use the corresponding Y_m's to estimate P_x. Two prominent methods within the clustering approach are the kernel method and the nearest neighbors method[These should not be confused with similarly named methods used in density function estimation.]. 
Roughly speaking, the kernel method relies primarily on proximity to the query point for selecting X_m's, while the nearest neighbors method focuses on the rank of proximity. Notably, discretizing the feature space (also known as quantization), a straightforward yet often effective strategy, can be seen as a variant of the kernel method with static query points and flat kernels. The problem of estimating conditional distributions can be addressed within the non-parametric regression framework, by employing clustering or resorting to non-parametric least squares, among others. Alternatively, it is feasible to estimate the conditional density function directly: a widely-used method involves estimating the joint and marginal density functions using kernel smoothing and then calculating their ratio. This method shares similarities with the clustering heuristics mentioned earlier. For a more detailed review of these approaches, we refer to Section <ref>. This work draws inspiration from recent advancements in estimating discrete-time stochastic processes using conditional density function estimation <cit.> and quantization methods <cit.>. A notable feature of these works is their use of the Wasserstein distance to calculate local errors: the difference between the true and estimated conditional distributions at a query point x. One could average these local errors across different values of x's to gauge the global error. Employing Wasserstein distances naturally frames the study within the context of weak convergence, thereby enabling discussions in a relatively general setting, although this approach may yield somewhat weaker results in terms of the mode of convergence. Moreover, utilizing a specific distance rather than the general notion of weak convergence enables a more tangible analysis of the convergence rates and fluctuations. We would like to point out that the advancements made in <cit.>, as well as our analysis in this paper, relies on recent developments concerning the Wasserstein convergence rate of empirical measures under i.i.d. sampling from a static distribution (cf. <cit.>). §.§ Main contributions First, we introduce some notations to better illustrate the estimators that we study. Let and be multi-dimensional unit cubes, with potentially different dimensions, for feature and target spaces. For any integer M ≥ 1, any = {(x_m,y_m)}_m=1^M ∈ (×)^M, and any Borel set A ⊂, we define a probability measure on by μ̂^_A := (∑_m=1^M _A(x_m) )^-1∑_m=1^M_A(x_m) δ_y_m, ∑_m=1^M_A(x_m) > 0, λ_, where λ_ is the Lebesgue measure on and, for y ∈, δ_y is a Dirac measure with atom at y. In general, one could consider weighting δ_y_m's (cf. <cit.>, <cit.>), which may offer additional benefits in specific applications. As such adjustments are unlikely to affect the convergence rate, however, we use uniform weighting for simplicity. With (random) data =(X_m,Y_m)_m=1^M, we aim to estimate the conditional distribution of Y given X. We view this conditional distribution as a measure-valued function P:→() and use a subscript for the input argument and write P_x. Consider a clustering scheme[In general, the clustering scheme may require information on (x_1,…,x_M). For example, clustering the k-nearest-neighbor near a query point x requires to know all x_m's. ] given by the map ^:→ 2^. We investigate estimators of the form x↦μ̂^_^(x). We use to denote said estimator and suppress from the notation for convenience. 
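To fix ideas, the following minimal NumPy sketch (function names are ours and purely illustrative) computes the atoms of the generic clustering estimator x ↦ μ̂^D_C(x) just defined: target samples whose features fall in the cluster receive equal weight, and an empty cluster falls back to (a sample from) the Lebesgue measure on the target cube, mirroring the definition above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mu_hat(Y, in_cluster, d_Y=1, n_fallback=256):
    """Atoms of the empirical measure built from a cluster of samples.

    Y          : array of shape (M, d_Y) with the target samples y_m.
    in_cluster : boolean mask of length M, True where x_m lies in C(x).
    If the cluster is empty, return a uniform sample on [0,1]^{d_Y} as a
    stand-in for the Lebesgue-measure fallback in the definition.
    """
    if in_cluster.any():
        return Y[in_cluster]
    return rng.uniform(size=(n_fallback, d_Y))

# Toy data from the motivating example Y = X/2 + U/2 with X, U ~ Uniform[0,1].
M = 2000
X = rng.uniform(size=(M, 1))
Y = 0.5 * X + 0.5 * rng.uniform(size=(M, 1))

x_query = 0.3
atoms = mu_hat(Y, in_cluster=np.abs(X[:, 0] - x_query) <= 0.05)
print(atoms.mean(), "vs the true conditional mean", 0.5 * x_query + 0.25)
```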
In later sections, we consider two kinds of maps ^: (i) a ball with a fixed radius centered at x, called an r-box, and (ii) the k nearest neighbors of x, called the k-nearest-neighbor estimator. See Definitions <ref> and <ref> for more details. One of our main contributions pertains to analyzing the error ∫_(P_x, _x) ν( x), where is the 1-Wasserstein distance (cf. <cit.>) and ν∈() is arbitrary and provides versatility to the evaluation criterion. A canonical choice for ν is the Lebesgue measure on , denoted by λ_. This is particularly relevant in control settings where represents the state-action space and accurate approximations across various state and action scenarios are crucial for making informed decisions. The form of error above is also foundational in stochastic process estimation under the adapted Wasserstein distance (cf. <cit.>), making the techniques we develop potentially relevant in other contexts. Under the assumption that P is Lipschitz continuous (Assumption <ref>) and standard assumptions on the data collection process (Assumption <ref>), we analyze the convergence rate and fluctuation by bounding the following two quantities [ ∫_(P_x, _x )ν( x) ] and [ ∫_(P_x, _x)ν( x) ]. Moreover, by analyzing the above quantities, we gain insights into the optimal choice of the clustering mapping . To illustrate another aspect of our contribution, we note that, by design, x↦_x is piece-wise constant. This characteristic introduces limitations. Notably, it renders the analysis of performance at the worst-case x elusive. By contrast, by building a Lipschitz-continuous parametric estimator from the raw estimator , in Proposition <ref> we demonstrate that an upper bound on the aforementioned expectation allows us to derive a worst-case performance guarantee. Guided by Proposition <ref>, we explore a novel approach to training a neural network for estimation, by using as training data and incorporating suitably imposed Lipschitz continuity. To be comprehensive, we include in Section <ref> a review of studies on Lipschitz continuity in neural networks. In Section <ref>, we define as a neural network that approximates P, where θ represents the network parameters. We train with the objective: _θ∑_n=1^N (_X̃_n, _X̃_n), where (X̃_n)_n=1^N is a set of randomly selected query points. For implementation purposes, we use the k-nearest-neighbor estimator in the place of (see Definition <ref>). To mitigate the computational costs stemming from the nearest neighbors search, we employ the technique of Approximate Nearest Neighbors Search with Random Binary Space Partitioning (ANNS-RBSP), as discussed in Section <ref>. In Section <ref>, we compute using the Sinkhorn algorithm, incorporating normalization and enforcing sparsity for improved accuracy. To impose a suitable level of local Lipschitz continuity on , in Section <ref>, we employ a neural network with a specific architecture and train the networks using a tailored procedure. The key component of this architecture is the convex potential layer introduced in <cit.>. In contrast to most extant literature that imposes Lipschitz continuity on neural networks, our approach does not utilize specific constraints or regularization of the objective function, but relies on a certain self-adjusting mechanism embedded in the training. A schematic sketch of this training loop is given below.
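The sketch below uses exact nearest-neighbour search and placeholder components — the network `model` and the differentiable 1-Wasserstein surrogate `w1_loss` are hypothetical stand-ins rather than the implementation described later — and is meant only to make the objective concrete.

```python
import numpy as np
import torch

def train_neural_estimator(model, X, Y, k, w1_loss,
                           n_queries=64, n_episodes=200, lr=1e-3):
    """Sketch of min_theta (1/N) sum_n W1(P^theta at x_n, knn-estimate at x_n).

    model   : maps a (N, d_X) batch of query points to atoms of shape (N, n_atoms, d_Y).
    w1_loss : differentiable surrogate of W1 between model atoms and target atoms,
              returning one value per query point.  Both are hypothetical placeholders.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_episodes):
        xq = np.random.uniform(size=(n_queries, X.shape[1]))          # random query points
        # Exact k-nearest-neighbour search in the sup-norm, for clarity only;
        # the approximate search described later replaces this step.
        d = np.max(np.abs(X[None, :, :] - xq[:, None, :]), axis=2)    # (N, M)
        idx = np.argpartition(d, k - 1, axis=1)[:, :k]                # (N, k)
        target = torch.as_tensor(Y[idx], dtype=torch.float32)         # (N, k, d_Y)
        atoms = model(torch.as_tensor(xq, dtype=torch.float32))       # (N, n_atoms, d_Y)
        loss = w1_loss(atoms, target).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```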
In Section <ref>, we evaluate the performance of the trained , denoted by , using three sets of synthetic data in 1D and 3D spaces. Our findings indicate that generally outperforms , even though it is initially trained to match . This superior performance persists even when comparing to different using various k values, without retraining . Furthermore, despite using the same training parameters, consistently demonstrates the ability to adapt to a satisfactory level of local Lipschitz continuity across all cases. Moreover, in one of the test cases, we consider a kernel that exhibits a jump discontinuity, and we find that handles this jump case well despite Lipschitz continuity does not hold. Lastly, we provide further motivation of our approach by highlighting some potential applications for . The first application is in model-based policy gradient method in reinforcement learning. We anticipate that the enforced Lipschitz continuity allows us to directly apply the policy gradient update via compositions of and cost function for more effective optimality searching. The second application of is in addressing optimisation in risk-averse Markov decision processes, where dynamic programming requires knowledge beyond the conditional expectation of the risk-to-go (cf. <cit.>). The study of these applications is left for further research. §.§ Related works In this section, we will first review the clustering approach in estimating conditional distributions, and then proceed to review recent studies on Lipschitz continuity in neural networks. §.§.§ Estimating conditional distributions via clustering The problem of estimating conditional distributions is frequently framed as non-parametric regression problems for real-valued functions. For instance, when d_ = 1, estimate the conditional α-quantile of Y given X. Therefore, we begin by reviewing some of the works in non-parametric regression. The kernel method in non-parametric regression traces its origins back to the Nadaraya-Watson estimator <cit.>, if not earlier. Subsequent improvements have been introduced, such as integral smoothing <cit.> (also known as the Gasser-Müller estimator), local fitting with polynomials instead of constants <cit.>, and adaptive kernels <cit.>. Another significant area of discussion is the choice of kernel bandwidth, as detailed in works like <cit.>. Regarding convergence rates, analyses under various settings can be found in <cit.>, with <cit.> being particularly relevant to our study for comparative purposes. According to <cit.>, if the target function is Lipschitz continuous, with i.i.d. sampling and that the sampling distribution in the feature space has a uniformly positive density, then the optimal rate of the ·_1-distance between the regression function and the estimator is of the order M^-1/d_+2. For a more comprehensive review of non-parametric regression using kernel methods, we refer to the books <cit.> and references therein. Non-parametric regression using nearest neighbors methods originated from classification problems <cit.>. Early developments in this field can be found in <cit.>. For a comprehensive introduction to nearest neighbors methods, we refer to <cit.>. More recent reference <cit.> offers further detailed exploration of the topic. The nearest neighbor method can be viewed as a variant of the kernel method that adjusts the bandwidth based on the number of local data points—a property that has gained significant traction. 
Recently, the application of the nearest neighbor method has expanded into various less standard settings, including handling missing data <cit.>, reinforcement learning <cit.>, and time series forecasting <cit.>. For recent advancements in convergence analysis beyond the classical setting, see <cit.>. Although the review above mostly focuses on clustering approach, other effective approaches exist, such as non-parametric least square, or more broadly, conditional elicitability (e.g., <cit.>). Non-parametric least square directly fits the data using a restricted class of functions. At first glance, this approach appears distinct from clustering. However, they share some similarities in their heuristics: the rigidity of the fitting function, due to imposed restrictions, allows data points near the query point to affect the estimation, thereby implicitly incorporating elements of clustering. Apart from non-parametric regression, conditional density function estimation is another significant method for estimating conditional distributions. One approach is based on estimating joint and marginal density functions, and then using the ratio of these two to produce an estimator for the conditional density function. A key technique used in this approach is kernel smoothing. Employing a static kernel for smoothing results in a conditional density estimator that shares similar clustering heuristics to those found in the kernel method of non-parametric regression. For a comprehensive overview of conditional density estimation, we refer to reference books <cit.>. For completeness, we also refer to <cit.> for a perspective on static density function estimation from the standpoint of reproducing kernel Hilbert space. Further discussions on estimation using adaptive kernels can be found in, for example, <cit.>. Despite extensive research in non-parametric regression and conditional density function estimation, investigations from the perspective of weak convergence have been relatively limited, only gaining more traction in the past decade. Below, we highlight a few recent studies conducted in the context of estimating discrete-time stochastic processes under adapted Wasserstein distance, as the essence of these studies are relevant to our evaluation criterion (<ref>). <cit.> explores the problem asymptotically, employing tools from conditional density function estimation with kernel smoothing. Subsequently, <cit.> investigates a similar problem with a hypercube as state space, employing the quantization method. Their approach removes the need to work with density functions. They calculate the convergence rate, by leveraging recent developments in the Wasserstein convergence rate of empirical measures <cit.>. Moreover, a sub-Gaussian concentration with parameter M^-1 is established. The aforementioned results are later extended to ^d in <cit.>, where a non-uniform grid is used to mitigate assumptions on moment conditions. Most recently, <cit.> examines smoothed variations of the estimators proposed in <cit.>. Other developments on estimators constructed from smoothed quantization can be found in <cit.>. Lastly, regarding the machine learning techniques used in estimating conditional distributions, conditional generative models are particularly relevant. For reference, see <cit.>. These models have achieved numerous successes in image generation and natural language processing. 
We suspect that, due to the relatively discrete (albeit massive) feature spaces in these applications, clustering is implicitly integrated into the training procedure. In continuous spaces, under suitable setting, clustering may also become an embedded part of the training procedure. For example, implementations in <cit.> do not explicitly involve clustering and use training objectives that do not specifically address the issues highlighted in the motivating example at the beginning of the introduction. Their effectiveness could possibly be attributed to certain regularization embedded within the neural network and training procedures. Nevertheless, research done in continuous spaces that explicitly uses clustering approaches when training conditional generative models holds merit. Such works are relatively scarce. For an example of this limited body of research, we refer to <cit.>, where the conditional density function estimator from <cit.> is used to train an adversarial generative network for stochastic process generation. §.§.§ Lipschitz continuity in neural networks Recently, there has been increasing interest in understanding and enforcing Lipschitz continuity in neural networks. The primary motivation is to provide a certifiable guarantee for classification tasks performed by neural networks: it is crucial that minor perturbations in the input object have a limited impact on the classification outcome. One strategy involves bounding the Lipschitz constant of a neural network, which can then be incorporated into the training process. For refined upper bounds on the (global) Lipschitz constant, see, for example, <cit.>. For local bounds, we refer to <cit.> and the references therein. We also refer to <cit.> for a study of the Lipschitz property from the viewpoint of boolean functions. Alternatively, designing neural network architectures that inherently ensure desirable Lipschitz constants is another viable strategy. Works in this direction include <cit.>. Notably, the layer introduced in <cit.> belongs to the category of residual connection <cit.>. Below, we review several approaches that enforce Lipschitz constants during neural network training. <cit.> explore training with a regularized objective function that includes upper bounds on the network's Lipschitz constant. <cit.> frame the training problem into constrained optimization and train with projected gradients descent. Given the specific structure of the refined bound established in <cit.>, <cit.> combines training with semi-definite programming. They develop a version with a regularized objective function and another that enforces the Lipschitz constant exactly. <cit.> also investigates training with a regularized objective but considers Lipschitz constants along certain directions. <cit.> devises a training procedure that removes components from the weight matrices to achieve smaller local Lipschitz constants. <cit.> initially imposes orthogonality on the weight matrices, and subsequently enforces a desirable Lipschitz constant based on that orthogonality. Ensuring desirable Lipschitz constants with tailored architectures, <cit.> train the networks directly. Although the architecture proposed in <cit.> theoretically ensures the Lipschitz constant, it requires knowledge of the spectral norm of the weight matrices, which does not admit explicit expression in general. Their training approach combines power iteration for spectral norm approximation with the regularization methods used in <cit.>. 
Finally, we note that due to their specific application scenarios, these implementations concern relatively stringent robustness requirements and thus necessitate more specific regularization or constraints. In our setting, it is generally desirable for the neural network to automatically adapt to a suitable level of Lipschitz continuity based on the data, while also avoiding excessive oscillations from over-fitting. The literature directly addressing this perspective is limited (especially in the setting of conditional distribution estimation). We refer to <cit.> for discussions that could be relevant. §.§ Organization of the paper Our main theoretical results are presented in Section <ref>. Section <ref> is dedicated to the training of . We will outline the key components of our training algorithm and demonstrate its performance on three sets of synthetic data. We will prove the theoretical results in Section <ref>. Further implementation details and ablation analysis are provided in Section <ref>. In Section <ref>, we discuss the weaknesses and potential improvements of our implementation. Appendix <ref> and <ref> respectively contain additional plots and a table that summarizes the configuration of our implementation. Additionally, Appendix <ref> includes a rougher version of the fluctuation results. § NOTATIONS AND TERMINOLOGIES Throughout, we adopt the following set of notations and terminologies. * On any normed space (E, ·), for all x ∈ E and γ > 0, B(x,γ) denotes the closed ball of radius γ around x, namely B(x,γ)={x' ∈ E | x-x'≤γ}. * For any measurable space (E,), (E) denotes the set of probability distributions on (E,). For all x ∈ E, δ_x ∈(E) denotes the Dirac mass at x. * We endow normed spaces (E,·) with their Borel sigma-algebra (E), and denotes the 1-Wasserstein distance on (E). * On = [0,1]^d, we denote by λ_ the Lebesgue measure. We say a measure ν∈() is dominated by Lebesgue measure with a constant C>0 if ν(A)≤Cλ_(A) for all A∈([0,1]^d). * The symbol ∼ denotes equivalence in the sense of big O notation, indicating that each side dominates the other up to a multiplication of some positive absolute constant. More precisely, a_n∼ b_n means there are finite constants c,C>0 such that c a_n ≤ b_n ≤ C a_n, n∈. Similarly, ≲ implies that one side is of a lesser or equal, in the sense of big O notation, compared to the other. § THEORETICAL RESULTS In Section <ref>, we first formally set up the problem and introduce some technical assumption. We then study in Section <ref> and <ref> the convergence and fluctuation of two versions of , namely, the r-box estimator and the k-nearest-neighbor estimator. Related comments are organized in Section <ref>. Moreover, in Section <ref>, we provide a theoretical motivation for the use of , the Lipschitz-continuous parametric estimator trained from . §.§ Setup For , ≥ 1 two integers, we consider := [0,1]^ and := [0,1]^, endowed with their respective sup-norm ·_∞. The sup-norm is chosen for simplicity of the theoretical analysis only: as all norms on ^n are equivalent (for any generic n ≥ 1), our results are valid, up to different multiplicative constants, for any other choice of norm. We aim to estimate an unknown probabilistic kernel P : →() x ↦ P_x( y). To this end, given an integer-valued sampled size M ≥ 1, we consider a set of (random) data points := {(X_m, Y_m)}_m=1^M associated to P. We also define the set of projections of the data points onto the feature space as _ := {X_m}_m=1^M. 
Throughout this section, we work under the following technical assumptions. [Lipschitz continuity of kernel] There exists L ≥ 0 such that, for all (x,x') ∈^2, (P_x, P_x') ≤ Lx-x'_∞. The following is true: (i) is i.i.d. with probability distribution ψ := ξ⊗ P, where ξ∈() and where ξ⊗ P ∈(×) is (uniquely, by Caratheodory extension theorem) defined by (ξ⊗ P)(A × B) := ∫__A(x) P_x(B) ξ( x), A ∈(), B ∈(). (ii) There exists c∈ (0,1] such that, for all A ∈(), ξ(A) ≥c λ_(A). These assumptions allow us to analyze convergence and gain insights into the optimal clustering hyper-parameters without delving into excessive technical details. Assumption <ref> is mainly used for determining the convergence rate. If the convergence rate is not of concern, it is possible to establish asymptotic results with less assumptions. We refer to <cit.> for relevant results. The conditions placed on ξ in Assumption <ref> are fairly standard, though less stringent alternatives are available. For instance, Assumption <ref> (i) can be weakened by considering suitable dependence <cit.> or ergodicity in the context of stochastic processes <cit.>. Assumption <ref> (ii), implies there is mass almost everywhere and is aligned with the motivation from control settings discussed in the introduction. Assumptions <ref> and <ref> are not exceedingly stringent and provides a number of insights into the estimation problem. More general settings are left for further research. The estimators discussed in subsequent sections are of the form , as introduced right after (<ref>), for two specific choices of clustering schemes constructed with the data . In the following study, we assert all the measurability needed for to be well-defined. These measurability can be verified using standard measure-theoretic tools listed in, for example, <cit.>. §.§ Results on r-box estimator The first estimator, which we term the r-box estimator, is defined as follows. Choose r, a real number, s.t. 0<r<1/2. The r-box estimator for P is defined by : →() x ↦_x := μ̂^_^r(x), where, for all x ∈, ^r(x) := B(β^r(x), r) and β^r(x) := r ∨ x ∧ (1-r), where r∨· and ·∧ (1-r) are applied entry-wise. The set ^r(x) is defined such that it is a ball of radius around x whenever x is at least r away from the boundary ∂ (in all of its components), otherwise, we move the point x in whichever components are within r from ∂ to be a distance r away from ∂. Consequently, for all 0<r<1/2 and for all x ∈, ^r(x) has a bona fide radius of r, as the center β^r(x) is smaller or equal to r away from ∂. For the r-box estimator, we have the following convergence results. The theorem below discusses the convergence rate of the average Wasserstein distance between the unknown kernel evaluated at any point and its estimator, when the radius r is chosen optimally with respect to the data sample M. Section <ref> is dedicated to its proof. Under Assumptions <ref> and <ref>, choose r as follows r ∼ M^-1/ + 2, =1, 2 M^-1/ + , ≥ 3. Then, there is a constant C>0 (which depends only on ,,L,c), such that, for all probability distribution ν∈(), we have ∫_(P_x, _x) ν( x) ≤sup_x∈(P_x, ) ≤ C× M^-1/d_ + 2, d_=1, M^-1/d_ + 2ln(M), d_=2, M^-1/d_ + d_, d_≥ 3. Next, we bound the associated variance whose proof is postponed to Section <ref>. Under Assumptions <ref>, consider r∈(0,1/2]. Let ν∈() be dominated by λ_ with a constant C>0. Then, [ ∫_(P_x, _x) ν(x) ] ≤4^+1C^2/c^2 (M+1). 
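A minimal sketch of the r-box estimator defined above (NumPy, sup-norm ball, with the boundary re-centring β^r(x) and the Lebesgue-measure fallback) might look as follows; the radius follows the rate r ∼ M^{-1/(d_X+2)} suggested above for d_X = 1, and the data are a toy example rather than anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def r_box_atoms(X, Y, x, r, n_fallback=256):
    """Atoms of the r-box estimator at a query point x.

    The box centre beta_r(x) = clip(x, r, 1 - r) keeps a genuine radius-r
    sup-norm ball inside the unit cube; an empty box falls back to a uniform
    sample on the target cube, standing in for lambda_Y.
    """
    beta = np.clip(x, r, 1.0 - r)
    mask = np.max(np.abs(X - beta), axis=1) <= r
    if mask.any():
        return Y[mask]
    return rng.uniform(size=(n_fallback, Y.shape[1]))

M = 5000
X = rng.uniform(size=(M, 1))
Y = 0.5 * X + 0.5 * rng.uniform(size=(M, 1))     # toy kernel, Lipschitz in x
r = M ** (-1.0 / 3.0)                            # r ~ M^{-1/(d_X+2)} with d_X = 1
print(r_box_atoms(X, Y, np.array([0.9]), r).mean())   # ~0.5*0.9 + 0.25
```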
§.§ Results on k-nearest-neighbor estimator Here, we focus in the second estimator – the k-nearest-neighbor estimator, defined as follows. Let k ≥ 1 an integer. The k-nearest-neighbor estimator for P is defined by : →() x ↦_x := μ̂^_^k,_(x), where, for any integer M ≥ 1 and any _∈^M, ^k,_(x) contains (exactly) k points of _ which are closest to x, namely ^k,_(x) := { x' ∈_ | x-x'_∞ k (x-x'_∞)_x' ∈_}, Here, in case of a tie when choosing the k-th smallest, we break the tie randomly with uniform probability. We have the following analogs of the convergence results (Theorems <ref> and <ref>) for the k-nearest-neighbor estimator. The proofs are postponed to Section <ref> and Section <ref>, respectively. Under Assumptions <ref> and <ref>, and choosing k as k ∼ M^2/d_+2, d_ = 1,2, M^d_/d_+d_, d_≥ 3, there is a constant C>0 (which depends only on d_, d_, L, c), such that, for all probability distribution ν∈(), we have ∫_(P_x, _x) ν( x) ≤sup_x∈(P_x, _x) ≤ C × M^-1/d_ + 2, d_=1, M^-1/d_ + 2ln M, d_=2, M^-1/d_ + d_, d_≥ 3. Under Assumptions <ref>, for any ν∈(), we have [∫_(P_x, _x) ν( x)] ≤1/k. Moreover, if ν is dominated by λ_ with a constant C>0, then [∫_(P_x, _x) ν( x)] ≤2^2d_+1C^2 M/c^2 k^2( ( 8√(2d_ln(M)/M-1) + k/M-1)^2 + √(2π)/√(M-1)(8 √(2d_ln(M)/M-1) + k/M-1) + 4/M-1). With k chosen as in Theorem <ref>, this reduces to [∫_(P_x, _x) ν( x)] ≲ M^-2(2∨ d_)/d_+d_ln(M), 2∨ d_≤ d_, M^-1, 2∨ d_ > d_. §.§ Comments on the convergence rate This sections gathers several comments on the convergence results we have developed in Section <ref> and <ref>. §.§.§ On the convergence rate We first comment on the expectations in Theorem <ref> and <ref>. Sharpness of the bounds. Currently, we cannot establish the sharpness of the convergence rates in Theorems <ref> and <ref>. We can, however, compare our results to established results in similar settings. For d_=1, we may compare it to the optimal rate of non-parametric regression of a Lipschitz continuous function. It is shown in <cit.> that the optimal rate is M^-1/d_+2, the same as in Theorems <ref> and <ref> when d_=1. For d_≥ 3, as noted in <cit.>, we may compare to the Wasserstein convergence rate of empirical measure in the estimation of a static distribution on ℝ^d_+d_. We refer to <cit.> for the optimal rate, which coincides with those in Theorems <ref> and <ref>. Error components. We discuss the composition of our upper bound on the expected average error by dissecting the proof of Theorem <ref> and <ref>. In the proofs, we decompose the expected average errors into two components: approximation error and estimation error. The approximation error occurs when treating P_x' as equal to P_x when x' is close to the query point x, leading to an error of size Lx-x'_∞. The estimation error is associated with the Wasserstein error of empirical measure under i.i.d. sampling (see (<ref>)). From Definitions <ref> and <ref>, the r-box estimator effectively manages the approximation error but struggles with controlling the estimation error, whereas the k-nearest-neighbor estimator exhibits the opposite behavior. Explicit bounds. We primarily focus on analyzing the convergence rates of the r-box and k-nearest-neighbor estimators as M →∞. Therefore, within the proofs of these results, we track only the rates (and ignore various constant coefficients). If more explicit bounds are preferred, intermediate results such as (<ref>), or (<ref>) could be good starting points for computing them. 
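Analogously, a minimal sketch of the k-nearest-neighbor estimator (brute-force sup-norm search; ties at the k-th distance are broken by the partition order rather than uniformly at random, which is immaterial for continuous sampling distributions), with k chosen at the rate k ∼ M^{2/(d_X+2)} prescribed above for d_X ≤ 2:

```python
import numpy as np

def knn_atoms(X, Y, x, k):
    """Atoms of the k-nearest-neighbour estimator at a query point x."""
    d = np.max(np.abs(X - x), axis=1)          # sup-norm distances to all x_m
    idx = np.argpartition(d, k - 1)[:k]        # indices of the k closest samples
    return Y[idx]

rng = np.random.default_rng(2)
M = 5000
X = rng.uniform(size=(M, 1))
Y = 0.5 * X + 0.5 * rng.uniform(size=(M, 1))
k = int(M ** (2.0 / 3.0))                      # k ~ M^{2/(d_X+2)} with d_X = 1
print(knn_atoms(X, Y, np.array([0.5]), k).mean())   # ~0.5, the true conditional mean
```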
§.§.§ On the fluctuation We next discuss the variances studied in Theorems <ref> and <ref>. In Appendix <ref>, we also include results derived from the Azuma-Hoeffding inequality (e.g., <cit.>), though they provide rougher rates. Condition that ν is dominated by λ_. In Theorems <ref> and <ref>, we assume that the ν is dominated by λ_. This assumption is somewhat necessary. To illustrate, let us examine the non-parametric regression problem under a comparable scenario. We consider a fixed query point. In this context, the central limit theorem for k-nearest-neighbor estimator is well-established, and the normalizing rate is k^-1/2 (cf. <cit.>). This suggests that the rate in (<ref>) is sharp. For the r-box estimator, we believe that a supporting example can be constructed where ν is highly concentrated. On the other hand, we conjecture that if ξ∼ν, the variance could potentially attain the order of M^-1. For a pertinent result, we direct the reader to <cit.>. Sharpness of the bounds. Regarding the variance in Theorem <ref>, it is upper bounded by the commonly observed order of M^-1. We believe that this rate is sharp, though we do not have a proof at this time. As for Theorem <ref>, the variance is subject to a rougher rate when 2∨ d_≤ d_. We, however, conjecture that this variance attains the order of M^-1 as long as ν is dominated by λ_. §.§ Towards implementation with neural networks In light of recent practices in machine learning, during the learning of P, we may combine the r-box method or k-nearest-neighbor method into the training of certain parameterized model. To this end we let : × →() (θ,x) ↦_x be a parameterized model (e.g., a neural network), where is the parameter space and θ∈ is the parameter to be optimized over. Given an integer N ≥ 1, we may train on a set of query points =(X̃_n)_n=1^N satisfying the assumption below. The query points ={(X̃_n)}_n=1^N are i.i.d. with uniform distribution over , and are independent of the data points ={(X_m,Y_m)}_m=1^M. We propose the training objectives below _θ∈1/N∑_n=1^N (_X̃_n, _X̃_n) or _θ∈1/N∑_n=1^N (_X̃_n, _X̃_n), that is, minimize the mean of 1-Wasserstein errors between the parametrized model and the empirical r-box (or k-nearest-neighbour) approximation of the conditional distribution at the location of the random query points. The following proposition together with Theorem <ref> or Theorem <ref> justifies using the objectives in (<ref>). It is valid for any estimator for P that satisfies the bounds in (<ref>) or (<ref>). Moreover, due to Lipschitz continuity conditions in the proposition, the proposition provides insights into the worst-case performance guarantee. We also refer to <cit.> for a worst-case performance guarantee for conditional generative models, which is contingent upon Lipschitz continuity. In contrast, similar guarantees for the r-box and k-nearest-neighbor estimators are more elusive due to their inherently piece-wise constant nature. We refer to Section <ref> for the proof. Suppose Assumptions <ref>, <ref>, and <ref> hold. Let of P be an estimator constructed using the data points only. Consider a training procedure that produces a (random) Θ=Θ(,) satisfying sup_x,x'∈(_x,_x')/x-x'_∞≤ L^Θ for some (random) L^Θ>0. Then, ∫_(P_x,_x) x≤ (L+L^Θ)(λ_,1/N∑_n=1^Nδ_X̃_n) + ∫_(P_x, _x) x + 1/N∑_n=1^N (_X̃_ n, _X̃_n) . Moreover, with probability 1, sup_x∈(P_x,P̃^Θ_x) ≤ (d_+1)^1/d_+1 (L+L^Θ)^d_/d_+1(∫_(P_x, _x) x)^1/d_+1. 
Assuming L^Θ≤L for some (deterministic) L>0, by (<ref>) and Jensen's inequality, we have sup_x∈(P_x,P̃^Θ_x) ≤ (d_+1)^1/d_+1 (L+L)^d_/d_+1∫_(P_x, _x) x ^1/d_+1. This together with (<ref>) provides a worst-case performance guarantee for . Proposition <ref> along with Remark <ref> provides insights into the worst-case performance guarantees, but more analysis is needed. Specifically, understanding the magnitude of L^Θ and 1/N∑_n=1^N (_X̃_ n, _X̃_n) requires deeper knowledge of the training processes for , which are currently not well understood in the extant literature. Alternatively, in the hypothetical case where = P, L^Θ would match L as specified in Assumption <ref>, and 1/N∑_n=1^N (_X̃_ n, _X̃_n) would obey Theorem <ref> or <ref>. However, practical applications must also consider the universal approximation capability of . Further discussion on this topic can be found in <cit.>, although, to the best of our knowledge, recent universal approximation theorems in this subject do not yet concern continuity constraints. § IMPLEMENTATION WITH NEURAL NETWORKS Let and be equipped with ·_1. Following the discussion in Section <ref>, we let :→() be parameterized by a neural network and develop an algorithm that trains based on the k-nearest-neighbor estimator. The k-nearest-neighbor estimator is preferred as _x consistently outputs k atoms. This regularity greatly facilitates implementation. For instance, it enables the use of 3D tensors during Sinkhorn iterations to enhance execution speed (see Section <ref> later). We refer also to the sparsity part of Section <ref> for another component that necessitates the aforementioned regularity of . These components would not be feasible with the r-box estimator , as _x produces an undetermined number of atoms. Furthermore, there is a concern that in some realizations, _x at certain x may contain too few data points, potentially leading _x to exhibit unrealistic concentration. We next provide some motivation for this implementation. For clarity, we refer to the r-box estimator and the k-nearest-neighbor estimator as raw estimators. Additionally, we refer to , once trained, as the neural estimator. While raw estimators are adequate for estimating P on their own, they are piece-wise constant in x by design. On the other hand, a neural estimator is continuous in x. This continuity provides a performance guarantee in the sup distance, as outlined in Proposition <ref> and the following remark. Moreover, the neural estimator inherently possesses gradient information. As discussed in the introduction, this feature renders the neural estimator useful in downstream contexts where gradient information is important, e.g., when performing model-based reinforcement learning. We construct such that it maps x∈ to atoms in with equal probabilities. For the related universal approximation theorems, we refer to <cit.>. We represent these atoms with a vector with N_atom entries denoted by y^θ(x)=(y^θ_1(x),…,y^θ_N_atom(x)) ∈^N_atom, where N_atom∈ is chosen by the user. In our implementation, we set N_atom=k. To be precise, we construct such that _x = 1/N_atom∑_j=1^N_atomδ_y^θ_j(x), x∈. This is known as the Lagrangian discretization (see <cit.>). In Algorithm <ref>, we present a high-level description of our implementation of training based on the raw k-nearest-neighbor estimator. §.§ Overview of key components In this section, we outline the three key components of our implementation.
Each of these components addresses a specific issue: * Managing the computational cost arising from the nearest neighbors search. * Implementing gradient descent after computing . * Selecting an appropriate Lipschitz constant for the neural estimator, preferably at a local level. Further details and ablation analysis on these three components can be found in Section <ref>. §.§.§ Approximate Nearest Neighbors Search with Random Binary Space Partitioning (ANNS-RBSP) Given a query point, performing an exact search for its k-nearest-neighbor requires O(M) operations. While a single search is not overly demanding, executing multiple searches as outlined in Algorithm <ref> can result in significant computational time, even when leveraging GPU-accelerated parallel computing. To address this, we use ANNS-RBSP as a more cost-effective alternative. Prior to searching, we sort (X_m)_m=1^M along each axis and record the order of indices. During the search, the data is divided into smaller subsets by repeatedly applying bisection on these sorted indices, with a random bisecting ratio, on a randomly chosen axis. Furthermore, we apply a restriction that mandates bisection along the longest edge of a rectangle when the edge ratio exceeds certain value (a hyper-parameter of the model). We record the bounding rectangle for each subset created through this partitioning process. Once partitioning is complete, we generate a small batch of query points within each rectangle and identify the k nearest neighbors for each query point within that same rectangle. For a visual representation of ANNS-BSP, we refer to Figure <ref>. Leveraging the sorted indices, we can reapply this partitioning method during every training episode without much computational cost. We refer to Section <ref> for additional details. There are similar ideas in the extant literature (cf. <cit.>). Given the substantial differences in our setting, however, we conduct further empirical analysis in Section <ref> to showcase the advantage of our approach against exact search. §.§.§ Computing for gradient descent The following discussion pertains to the computation of (<ref>), with the subsequent gradient descent in consideration. For simplicity, let us focus on the summand and reduce the problem to the following minimization. Let (ỹ_1,…,ỹ_k)∈^k be fixed, we aim to find _y∈^n( 1/k∑_i=1^k δ_ỹ_i, 1/n∑_j=1^n δ_y_j). The criterion in (<ref>) is convex as is convex in both arguments (cf. <cit.>). To solve (<ref>), as is standard, we cast it into a discrete optimal transport problem. To do so, first introduce the (k× n)-cost matrix _y, where _y, i j:=ỹ_i - y_j_1. As the criterion in (<ref>) has uniform weights on the atoms, we next aim to solve the problem _∈[0,1]^k× n{φ_y() := ∑_(i,j)∈1,…,k×1,…,n_ij_y,ij} subject to ∑_j=1^n _ij = 1/k, i=1,…,k and ∑_i=1^k _ij = 1/n, j=1,…,n. Let ^*_y be an optimal transport plan that solves (<ref>) for y fixed. Taking derivative of y↦φ_y(·) yields .∂_y_jφ_y()|_=^*_y = ∑_i∈1,…,k^*_y,ij ∂_y_jỹ_i - y_j_1, j=1,…,n. This gradient is in general not the gradient corresponding to (<ref>), as ^*_y depends on y, while (<ref>) excludes such dependence. Nevertheless, it is still viable to update y using the gradient descent that employs the partial gradient specified in (<ref>). To justify this update rule, first consider y'∈ satisfying φ_y'(^*_y)≤φ_y(^*_y), then observe that ( 1/k∑_i=1^k δ_ỹ_i, 1/n∑_j=1^n δ_y'_j) ≤φ_y'(^*_y) ≤φ_y(^*_y) = ( 1/k∑_i=1^k δ_ỹ_i, 1/n∑_j=1^n δ_y_j). This inequality is strict if φ_y'(^*_y)<φ_y(^*_y). 
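The update rule just described can be sketched in a few lines of PyTorch: the transport plan is computed (here with the entropy-regularised iteration discussed next, on a normalised cost matrix) and then frozen, so that backpropagating through the plan-weighted cost reproduces the partial gradient in the display above. This is an illustrative toy example rather than the paper's Algorithm; in particular, the ε-scheduling and sparsity enforcement described below are omitted.

```python
import torch

def sinkhorn_plan(cost, eps=0.1, n_iter=200):
    """Entropy-regularised plan between two uniform discrete measures.

    cost : (k, n) matrix of |y~_i - y_j|_1 costs (already detached).
    Returns a (k, n) plan with row sums ~1/k and column sums ~1/n.
    """
    k, n = cost.shape
    K = torch.exp(-cost / eps)
    u = torch.full((k,), 1.0 / k)
    v = torch.full((n,), 1.0 / n)
    for _ in range(n_iter):
        u = (1.0 / k) / (K @ v)
        v = (1.0 / n) / (K.t() @ u)
    return u[:, None] * K * v[None, :]

torch.manual_seed(0)
y_data = torch.rand(32, 2)                        # k-nearest-neighbour atoms y~_i
y_model = torch.rand(16, 2, requires_grad=True)   # model atoms y_j to be updated
for _ in range(100):
    cost = torch.cdist(y_data, y_model, p=1)      # (k, n) ell_1 costs
    c = cost.detach()
    plan = sinkhorn_plan(c / c.max())             # plan computed on the normalised, frozen cost
    loss = (plan * cost).sum()                    # gradient flows through the cost only
    loss.backward()
    with torch.no_grad():
        y_model -= 0.5 * y_model.grad             # descent along the partial gradient
        y_model.grad.zero_()
print(float(loss))
```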
We refer to <cit.> and the references therein for related discussions. The Sinkhorn algorithm, which adds an entropy regularization, is a widely-used algorithm for approximating the solution to (<ref>). Specifically, here, it is an iterative scheme that approximately solves the following regularized problem, subject to the constraints in (<ref>), _^ϵ∈[0,1]^k× n{∑_i,j∈1,…,k×1,…,n^ϵ_ij_ij + ϵ∑_i,j∈1,…,k×1,…,n^ϵ_ij (log_ij - 1) }, where ϵ>0 is a hyper-parameter, and should not be confused with the ε used elsewhere. We refer to Section <ref> for further details. We also refer to <cit.> and the references therein for convergence analysis of the Sinkhorn algorithm. It is well known that the regularization term in (<ref>) is related to the entropy of a discrete random variable. Larger values of ϵ encourage the regularized optimal transport plan to be more diffusive. That is, for larger values of ϵ, the mass from each y_j is distributed more evenly across all ỹ_i's. Performing gradient descent along the direction in (<ref>) tends to pull y_j's towards the median of the ỹ_i's, as we are equipping with the norm ·_1. Conversely, small values of ϵ often lead to numerical instability in the resulting loss/gradient. To help with these issues, we implement the Sinkhorn algorithm after normalizing the cost matrix. Additionally, we use a large ϵ (e.g., 1) in the first few training episodes, then switch to a smaller ϵ (e.g., 0.1) in later episodes. Furthermore, we impose sparsity on the transport plan by manually setting the smaller entries of the transport plan to 0. The specific detailed configurations and related ablation analysis are provided in Section <ref> and Appendix <ref>. §.§.§ Network structure that induces locally adaptive Lipschitz continuity As previously discussed, it is desirable for the neural estimator to exhibit certain Lipschitz continuity. In practice, however, determining an appropriate Lipschitz constant for training the neural estimator is challenging, largely because the true Lipschitz continuity of P (if it exists) is itself hard to ascertain. Additionally, the estimate provided in Proposition <ref> is probabilistic. Fortunately, a specific network structure allows the neural estimator, when properly trained, to exhibit locally adaptive Lipschitz continuity. Subsequently, we provide a high-level overview of this network structure. Further detailed configurations and ablation analysis are presented in Section <ref> and Appendix <ref>. Consider a fully connected feed-forward neural network with equal-width hidden layers and layer-wise residual connection <cit.>. Let N_neuron denote the width of the hidden layers. For activation, we use the Exponential Linear Unit (ELU) function <cit.>, denoted by σ. For hidden layers, we employ the convex potential layer introduced in <cit.>, 𝗑_out = 𝗑_in - _2^-1^σ( 𝗑_in + 𝖻). By <cit.>, the convex potential layer is 1-Lipschitz continuous in the ·_2 sense. For the input layer, with a slight abuse of notation, we use 𝗑_out = N_neuron^-1diag(||^-1_1∧ 1) σ( 𝗑_in + 𝖻), where ||_1 computes the absolute sum of each row of the weight matrix to form a vector of size N_neuron, the reciprocal and ·∧ 1 are applied entry-wise, and produces a diagonal square matrix based on the input vector. In short, the normalization in (<ref>) is only applied to the rows of with ℓ_1-norm exceeding 1. Consequently, the input layer is 1-Lipschitz continuous in the ·_1 sense. A similar treatment is used for the output layer but without activation, 𝗑_out = L d_^-1diag( ||^-1_1∧ 1) ( 𝗑_in + 𝖻).
where L>0 is a hyper-parameter. The output represents atoms on with uniform weight, therefore, no N^-1_atom is required here. The spectral norm _2 in (<ref>), however, does not, in general, have an explicit expression. Following the implementation in <cit.>, we approximate each _2 with power iteration. Power iterations are applied to all hidden layers simultaneously during training. To control the pace of iterations, we combine them with momentum-based updating. We refer to Algorithm <ref> for the detailed implementation. Our implementation differs from that in <cit.>, as the authors of <cit.> control the frequency of updates but not the momentum. In a similar manner, for input and output layers, instead of calculating the row-wise ℓ_1-norm explicitly, we update them with the same momentum used in the hidden layers. Our numerical experiments consistently show that a small momentum value of τ=10^-3 effectively maintains adaptive continuity while maintaining a satisfactory accuracy. The impact of L in (<ref>) and τ in Algorithm <ref> is discussed in Section <ref>. During training, due to the nature of our updating schemes, the normalizing constants do not achieve the values required for the layers to be 1-Lipschitz continuous. We hypothesize that this phenomenon leads to a balance that ultimately contributes to adaptive continuity: on one hand, the weights stretch to fit (or overfit) the data, while on the other, normalization through iterative methods prevents the network from excessive oscillation. As shown in Section <ref> and <ref>, the L value in (<ref>) and the momentum τ in Algorithm <ref> affect the performance significantly. For completeness, we also experiment with replacing (<ref>) by fully connected feedforward layers similar to (<ref>), with or without batch normalization <cit.> after affine transformation. This alternative, however, failed to produce satisfactory results. §.§ Experiments with synthetic data We consider data simulated from three different models. The first two have d_=d_=1, while the third has d_=d_=3. Here we no longer restrict to be the unit box, however, we still consider to be a d_-dimensional unit box (not necessarily centered at the origin). In Model 1 and 2, X∼([0,1]). Model 1 is a mixture of two independent Gaussian random variables with mean and variance depending on x, Y = ξ( 0.1 (1+cos(2π X)) + 0.12 |1-cos(2π X)| Z + 0.5 ), where Z∼(0,1) and ξ is a Rademacher random variable independent of Z. For Model 2, we have Y = 0.5 _[0,1)(X) + 0.5 U, where U∼([0,1]). The conditional distribution in Model 2 is intentionally designed to be discontinuous in the feature space. This choice was made to evaluate performance in the absence of the Lipschitz continuity stipulated in Assumption <ref>. Model 3 is also a mixture of two independent Gaussian random variables, constructed by considering X∼([-1/2,1/2]^3) and treating X as a column vector (i.e., X take values in ^3×1), Y = ζ(cos( X) + 0.1 cos(Σ_X) W) + (1-ζ)(cos(' X) + 0.1 cos(Σ'_X) W'). Above, the cos functions act on vector/matrix entrywise, ∈^3× 3, and Σ_x also takes value in ^3× 3. Each element of Σ_x is defined as 𝗏_ij x for some 𝗏_ij∈^1× 3. The entries of and 𝗏_ij are drawn from standard normal in advance and remain fixed throughout the experiment. The matrices ' and Σ'_x are similarly constructed. Furthermore, W and W' are independent three-dimensional standard normal r.v.s, while ζ represents the toss of a fair coin, independent of X, W, and W'. 
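As an illustration of the data-generating process, the snippet below draws samples from Model 1; it is a NumPy sketch in which the function name and the seed are ours, and Models 2 and 3 can be simulated analogously.

import numpy as np

def sample_model1(M, seed=0):
    # Model 1: X ~ Uniform[0,1]; Y is a two-component mixture whose mean and
    # spread depend on x through cos(2*pi*x), with a Rademacher sign flip.
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=M)
    z = rng.standard_normal(M)
    xi = rng.choice([-1.0, 1.0], size=M)  # Rademacher, independent of Z
    y = xi * (0.1 * (1.0 + np.cos(2.0 * np.pi * x))
              + 0.12 * np.abs(1.0 - np.cos(2.0 * np.pi * x)) * z
              + 0.5)
    return x, y

x_train, y_train = sample_model1(10_000)  # sample size used for Models 1 and 2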
For the purpose of comparison, two different network structures are examined. The first, termed LipNet, is illustrated in Section <ref>. The second, termed StdNet, is a fully connected feedforward network with layer-wise residual connections <cit.>, ReLU activation, and batch normalization immediately following each affine transformation, without specifically targeting Lipschitz continuity. With a hyper-parameter k for the k-nearest-neighbor estimator, which we specify later, each network contains 5 hidden layers with 2k neurons. These networks are trained using the Adam optimizer <cit.> with a learning rate of 10^-3. For StdNet in Models 1 and 2, the learning rate is set to 0.01, as it leads to better performance. Other than the learning rates, StdNet and LipNet are trained with identical hyper-parameters across all models. We refer to Appendix <ref> for a summary of the hyper-parameters involved. We generate 10^4 samples for Models 1 and 2. Given the convergence rate specified in Theorem <ref>, we note that these sample sizes are relatively small. For these two models, we chose k=100 and utilized neural networks that output atoms of size N_atom=k. The choice of k is determined by a rule of thumb. In particular, our considerations include the magnitude of k suggested by Theorem <ref> and the computational costs associated with the Sinkhorn iterations discussed in Section <ref>. The results under Models 1 and 2 are plotted in Figures <ref>, <ref> and <ref>. Figure <ref> provides a perspective on joint distributions, while Figures <ref> and <ref> focus on conditional CDFs across different x values. Figure <ref> suggests that both StdNet and LipNet adequately recover the joint distribution. The LipNet's accuracy is, however, notably superior and produces smooth movements of atoms (as seen in the third row of Figure <ref>). Although further fine-tuning may provide slight improvements in StdNet's performance, StdNet will still not achieve the level of accuracy and smoothness observed in LipNet. The average absolute value of the derivative of each atom (fourth row of Figure <ref>) makes it evident that LipNet demonstrates a capacity for automatically adapting to a suitable level of Lipschitz continuity locally. In particular, in Model 2, the atoms of LipNet respond promptly to jumps while remaining relatively stationary around values of x where the kernel is constant. We emphasize that LipNet is trained using the same hyper-parameters across Models 1, 2, and 3. Figure <ref> shows the estimated conditional distribution at different values of x. Figure <ref> indicates that the raw k-nearest-neighbor estimator deviates frequently from the actual CDFs. This deviation of the raw k-nearest-neighbor estimator is expected, as it attempts to estimate an unknown CDF with only k=100 samples given an x. Conversely, the neural estimators, especially LipNet, appear to offer extra corrections even though they are trained based on the raw k-nearest-neighbor estimator. This could be attributed to neural estimators implicitly leveraging information beyond the immediate neighborhood. Figure <ref> compares the -distance between each estimator and the true conditional distribution at various values of x, using the following formula (see <cit.>), (F, G) = ∫_|F(r)-G(r)| r, where F and G are CDFs. This quantity can be accurately approximated with the trapezoidal rule.
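As an illustration of this evaluation, the sketch below (NumPy; the function name and the grid are ours) computes the distance above between the empirical CDF of an estimator's atoms and a given true conditional CDF, with the integral approximated by the trapezoidal rule on a grid over the unit interval.

import numpy as np

def w1_via_cdfs(atoms, true_cdf, grid=None):
    # W1 distance between a discrete estimate (uniform weights on `atoms`) and a
    # distribution on the real line given by its CDF `true_cdf`.
    if grid is None:
        grid = np.linspace(0.0, 1.0, 2001)  # Y is a subset of the unit interval here
    emp_cdf = np.searchsorted(np.sort(atoms), grid, side="right") / len(atoms)
    return np.trapz(np.abs(emp_cdf - true_cdf(grid)), grid)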
In Model 1, the neural estimator generally outperforms the raw estimator with k=100 across most values of x, even though the raw estimator is used for training the neural estimators. Furthermore, LipNet continues to outperform raw estimators with larger values of k – even though LipNet is trained with a raw estimator with k=100. In Model 2, LipNet continues to demonstrate a superior performance, except when compared to the raw estimator with k=1,000 at x distant from 0.5, the reason is that, here, the conditional distribution is piece-wise constant in x, which enhances the performance of the raw estimator at larger k values. The aforementioned findings indicate superior performance by LipNet. We, however, recognize that improvements are not always guaranteed, as demonstrated in Figures <ref> and <ref>. For Model 3, we generate 10^6 samples and select k=300. We train both neural estimators using Adam optimizer with a learning rate of 10^-3. Hyperparameters such as L in (<ref>) and τ in Algorithm <ref> are consistent with those used for Models 1 and 2. We refer to Appendix <ref> for the detailed configuration. In Figure <ref>, we visualize the outcomes in Model 3: the conditional CDFs at an arbitrarily chosen x are projected onto various vectors. We observe that the neural estimators considerably outperform the raw k-nearest-neighbor estimator, likely owing due to their implicit use of global information outside of the immediate neighbors during training. For further comparisons, we present additional figures in Appendix <ref>: Figures <ref>, <ref> and <ref> feature the exact same neural estimators as shown in Figure <ref>, but with the raw k-nearest-neighbor estimators employing different k values, k=1,000, 3,000, 10,000. Raw k-nearest-neighbor estimators with k=1,000, 3,000 are superior to that with k=300, while at k=10,000, the accuracy begins to decline. Upon comparison, the neural estimator trained with k=300 consistently outperforms the raw k-nearest-neighbor estimators for all values of k. For a more comprehensive comparison, we randomly select 10,000 query points. For each query point, we randomly generate a vector in ^3, normalized under ·_1, and project the atoms produced by the estimators onto said vector. With the same vector, we also compute the corresponding true CDFs of the projected Y given the query point. We then approximately compute the -distance between the projected distributions via (<ref>). The resulting histograms are shown in Figure <ref>, which suggests that LipNet performs best. The rationale for employing this projection approach, rather than directly computing the -distance between discrete and continuous distributions over ^3, is due to the higher cost and lower accuracy of the latter approach (see also the discussion in Section <ref>). While this projection approach provides a cost-effective alternative for performance evaluation, it may not fully capture the differences between the estimations and ground truth. Lastly, to demonstrate how atoms, in the neural estimator, move as x varies, Figure <ref> shows the projected trajectories along a randomly selected straight line through the origin. The movement of atoms in LipNet is smooth, consistent with previous observations. Interestingly, the movement of atoms in StdNet isn't excessively oscillatory either, although its continuity is slightly rougher compared to LipNet. 
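The projection-based comparison described above for Model 3 can be sketched as follows. This is illustrative only: the function name is ours, scipy's one-dimensional Wasserstein routine replaces the trapezoidal computation, and a large reference sample from the true conditional distribution stands in for the closed-form projected CDF used in the experiments.

import numpy as np
from scipy.stats import wasserstein_distance

def projected_w1(atoms, reference, seed=0):
    # Project two point clouds in R^3 onto a random direction with unit l1 norm
    # and compare the projections via the one-dimensional W1 distance.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(3)
    v /= np.abs(v).sum()  # normalize under the l1 norm
    return wasserstein_distance(atoms @ v, reference @ v)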
The reader may execute the notebook on our github repository <https://github.com/zcheng-a/LCD_kNN> to explore the projected conditional CDFs and atoms' trajectories for different x values. § PROOFS §.§ Auxiliary notations and lemmas In this section, we will introduce a few technical results that will be used in the subsequent proofs. We first define R(m) := sup_x∈∫_^m(P_x, 1/m∑_ℓ=1^mδ_y_ℓ) ⊗_ℓ=1^m P_x( y_ℓ), m∈. We stipulate that R(0)=1. By <cit.>, we have R(m) ≤ ⌢C× m^-1/2, d_ = 1, m^-1/2ln(m), d_ = 2, m^-1/d_, d_≥ 3, for some constant ⌢C >0 depending only on d_. For completeness, we also point to <cit.> for results that are potentially useful in analyzing explicit constants, though this is beyond the scope of this paper. The lemma below pertains to the so-called approximation error, which arises when treating data points Y_j with X_j around a query point as though they are generated from the conditional distribution at the query point. Under Assumption <ref>, for any integer J≥1 and x,x_1,…,x_J∈^J+1, we have | ∫_^J(1/J∑_j=1^J δ_y_j, P_x) ⊗_j=1^J P_x_j( y_j) - ∫_^J(1/J∑_j=1^J δ_y_j, P_x) ⊗_j=1^J P_x( y_j) | ≤L/J∑_j=1^J x_j- x_∞. For x,x_1,…,x_J∈^J+1, note that | ∫_^J(1/J∑_j=1^J δ_y_j, P_x) ⊗_j=1^J P_x_j( y_j) - ∫_^J(1/J∑_j=1^J δ_y_j, P_x) ⊗_j=1^J P_x( y_j) | ≤∑_ℓ=1^J| ∫_^J(1/J∑_j=1^J δ_y_j, P_x) ⊗_j=1^ℓ-1 P_x( y_j) ⊗⊗_j=ℓ^J P_x_j( y_j) - ∫_^J(1/J∑_j=1^J δ_y_j, P_x) ⊗_j=1^ℓ P_x( y_j)⊗⊗_j=ℓ+1^J P_x_j( y_j) |, where for the sake of neatness, at ℓ=1,J, we set ⊗_j=1^0 P_x( y_j)⊗⊗_j=1^J P_x_j( y_j) = ⊗_j=1^J P_x_j( y_j) and ⊗_j=1^J P_x( y_j)⊗⊗_j=J+1^J P_x_j( y_j) = ⊗_j=1^J P_x( y_j). Regarding the ℓ-th summand, invoking the Fubini-Tonelli theorem to integrate y_ℓ first and then combining the integrals on outer layers using linearity, we obtain | ∫_^J(1/J∑_j=1^J δ_y_j, P_x) ⊗_j=1^ℓ-1 P_x( y_j)⊗⊗_j=ℓ^J P_x_j( y_j) - ∫_^J(1/J∑_j=1^J δ_y_j, P_x) ⊗_j=1^ℓP_j( y_j)⊗⊗_j=ℓ+1^J P_x_j( y_j) | = | ∫_^J-1∫_(1/J∑_j=1^J δ_y_j, P_x) (P_x - P_x_ℓ)( y_ℓ) ⊗_j=1^ℓ-1 P_x(y_j)⊗⊗_j=ℓ+1^J P_x_j( y_j) | ≤sup_(y_j)_j≠ℓ∈^J-1| ∫_(1/J∑_j=1^J δ_y_j, P_x) (P_x_ℓ - P_x)( y_ℓ) | ≤1/J(P_x_ℓ,P_x) ≤L/Jx_ℓ-x_∞, where in the second last inequality we have invoked Kantorovich-Rubinstein duality (cf. <cit.>) and the fact that, for all (y_j)_j≠ℓ∈^J-1, the map y_ℓ↦(1/J∑_j=1^J δ_y_j, P_x) is 1/J-Lipschitz, and where in the last inequality, we have used Assumption <ref>. We will be using the lemma below, which regards the stochastic dominance between two binomial random variables. Let n∈ and 0 ≤ p < p'≤ 1. Then, (n,p') stochastically dominates (n,p). Let U_1,…,U_ni.i.d.∼[0,1] and define H := ∑_i=1^n_[0,p](U_i), H' := ∑_i=1^n_[0,p'](U_i). Clearly, H∼(n,p) and H'∼(n,p'). Moreover, we have H≤ H', and thus (H > r) ≤(H' > r), which completes the proof. §.§ Proof of Theorem <ref> The proof of Theorem <ref> relies on the technical lemma below that we state and prove now. Let p ∈ [0,1] be a real number, and let M ≥ 1 and d ≥ 1 be two integers. We then have ∑_m=1^M Mm p^m (1-p)^M-m m^-1/d≤((M+1)p)^-1/d + ((M+1)p)^-1. We compute ∑_m=1^M Mm p^m (1-p)^M-m m^-1/d = 1/(M+1)p∑_m=1^M M+1m+1 p^m+1(1-p)^M-m (m+1) m^-1/d = 1/(M+1)p∑_m=2^M+1M+1m p^m (1-p)^M+1-m m (m-1)^-1/d = 1/(M+1)p∑_m=2^M+1M+1m p^m(1-p)^M+1-m (m-1)^1-1/d + 1/(M+1)p∑_m=2^M M+1m p^m(1-p)^M+1-m(m-1)^-1/d, where we used that m=m-1+1 in the last equality.
Then, using that (m-1)^1-1/d≤ m^1-1/d and (m-1)^-1/d≤ 1 for all m ≥ 2, we continue to obtain ∑_m=1^M Mm p^m (1-p)^M-m m^-1/d≤ 1/(M+1)p∑_m=2^M+1M+1m p^m(1-p)^M+1-m m^1-1/d + 1/(M+1)p∑_m=2^M M+1m p^m(1-p)^M+1-m ≤ 1/(M+1)p∑_m=0^M+1M+1m p^m(1-p)^M+1-m m^1-1/d + 1/(M+1)p, where the second term in the last equality are derived from the binomial formula. Finally, introducing a random variable V with binomial distribution (M+1,p), and using Jensen inequality for the concave function ^+ ∋ x ↦ x^1-1/d∈^+, we obtain ∑_m=1^M Mm p^m (1-p)^M-m m^-1/d≤ 1/(M+1)pV^1-1/d + 1/(M+1)p ≤ ((M+1)p)^1-1/d/(M+1)p + 1/(M+1)p = ((M+1)p)^-1/d + ((M+1)p)^-1, which conclude the proof. We are now ready to prove Theorem <ref>. For ν∈(), we obviously have ∫_(P_x, _x) ν( x)≤sup_x ∈(P_x, _x), we then focus on proving the right hand side inequality in Theorem <ref>. To this end, we fix x ∈ and, to alleviate the notations, we let B := ^r(x) as introduced in Definition <ref>. Let N_B:=∑_m=1^M _B(X_m). By Definition <ref> and Assumption <ref> (i), we have (P_x,_x) = (P_x,μ̂^_B) = ∑_m=0^M _ N_B = m (P_x, μ̂^_B) = ∑_m=1^M Mm_X_1, …, X_m ∈ B_X_m+1, …, X_M ∉B(1/m∑_l=1^m δ_Y_l, P_x) + X_1, …, X_M ∉B(λ_, P_x) ≤∑_m=1^M Mm X_m+1, …, X_M ∉B_X_1, …, X_m ∈ B(1/m∑_l=1^m δ_Y_l, P_x) + ξ(B^c)^M R(0) = ∑_m=1^M Mmμ(B^c)^M-m∫_(B×)^m(1/m∑_l=1^m δ_y_l, P_x) ⊗_ℓ=1^mψ( x_ℓ y_ℓ) + ξ(B^c)^M R(0). To compute the integral terms, observe that, for fixed m ≥ 1, by definition of R(m) in (<ref>), Lemma <ref> and Remark <ref>, ∫_(B×)^m(1/m∑_l=1^m δ_y_l, P_x) ⊗_ℓ=1^mψ( x_ℓ y_ℓ) = ∫_B^m∫_^m(1/m∑_l=1^m δ_y_l, P_x) ⊗_l=1^m P_x_l( y_l) ⊗_ℓ=1^mξ( x_ℓ) ≤∫_B^m( ∫_^m(1/m∑_l=1^m δ_y_l, P_x) ⊗_ℓ=1^m P_x( y_ℓ) + L/m∑_ℓ=1^m x_ℓ-x_∞) ⊗_ℓ=1^mξ( x_ℓ) ≤∫_B^m(R(m) + 2Lr) ⊗_ℓ=1^mξ( x_ℓ) = (R(m)+2Lr)ξ(B)^m. This together with (<ref>) implies that, for any x∈, (P_x,_x) ≤∑_m=1^M Mmξ(B^c)^M-mξ(B)^m (R(m) + 2Lr) + ξ(B^c)^M R(0) ≤ 2Lr + ∑_m=1^M Mmξ(B^c)^M-mξ(B)^m R(m) + ξ(B^c) R(0) The remainder of the proof is split into three cases. In order to proceed, we will put together (<ref>), Lemma <ref>, and (<ref>). Below we only keep track of the rate. * For d_=1, we have (P_x,_x) ≤ 2Lr + (ξ(B)(M+1))^-1/2 + (ξ(B)(M+1))^-1 + (1-ξ(B))^M ≤ 2Lr + (c (2r)^(M+1))^-1/2 + (c (2r)^(M+1))^-1 + e^-c M r^. Controlling the dominating term(s) by setting r ∼ r^-/2M^-1/2, we yield r∼ M^-1/d_+2 and (P_x,_x) ≲ M^-1/d_+2. * For d_=2, we have (P_x,_x) ≤ 2Lr + ln(M)(ξ(B)(M+1))^-1/2 + (ξ(B)(M+1))^-1 + (1-ξ(B))^M ≤ 2Lr + ln(M)(c (2r)^(M+1))^-1/2 + (c (2r)^ (M+1))^-1 + e^-c M r^. Since r∼ln (M) r^-/2M^-1/2 may not have a closed-form solution, we simply follow the case of d_=1 to yield r∼ M^-1/d_+2 and (P_x,_x) ≲ M^-1/d_+2ln M. * For d_≥3, we have (P_x,_x) ≤ 2Lr + (ξ(B)(M+1))^-1/ + (ξ(B)(M+1))^-1+(1-ξ(B))^M ≤ 2Lr + (c (2r)^(M+1))^-1/ + (c (2r)^(M+1))^-1+ e^-c M r^. By setting r∼ r^-/M^-1/, we yield r∼ M^-1/d_+d_ and (P_x,_x) ≲ M^-1/d_+d_. The proof is complete. §.§ Proof of Theorem <ref> We will proceed by using Efron-Stein inequality. Let (X_1',Y_1') be an independent copy of (X_1,Y_1), and define ':=(X_1',Y_1'), (X_2,Y_2), …, (X_M,Y_M). In view of Assumption <ref> (i), by the triangle inequality of , it is sufficient to investigate 1/2M [ ( ∫_(μ̂^_^r(x), μ̂^'_^r(x)) ν(x))^2 ]. Notice that, by definitions (<ref>), {μ̂^_^r(x)≠μ̂^'_^r(x)}⊆{X_1 ∈^r(x)}∪{X_1' ∈^r(x)}. Additionally, by definitions (<ref>) again, on the event that {μ̂^_^r(x)≠μ̂^'_^r(x)}, we have (μ̂^_^r(x), μ̂^'_^r(x)) ≤(1+∑_ℓ=2^M_^r(x)(X_ℓ) )^-1. 
The above together with the condition that ν is dominated by λ_ implies that [ ( ∫_(μ̂^_^r(x), μ̂^'_^r(x)) ν( x))^2 ] ≤C^2 [ ( ∫_B(X_1,2r)∪ B(X_1',2r)(μ̂^_^r(x), μ̂^'_^r(x)) λ_( x))^2 ] ≤C^2 [ ( ∫_B(X_1,2r)∪ B(X_1',2r)(1+∑_ℓ=2^M_^r(x)(X_ℓ) )^-1λ_( x))^2 ] ≤ 4 C^2 [ ( ∫_B(X_1,2r)(1+∑_ℓ=2^M_^r(x)(X_ℓ) )^-1λ_( x))^2 ] = 4 C^2 [ λ_(B(X_1,2r))^2 ( ∫_B(X_1,2r)(1+∑_ℓ=2^M_^r(x)(X_ℓ) )^-1λ_( x)/λ_(B(X_1,2r)))^2 ] ≤ 4 C^2 (4r)^2[ [ ∫_B(X_1,2r)(1+∑_ℓ=2^M_^r(x)(X_ℓ) )^-2λ_( x)/λ_(B(X_1,2r))| X_1 ] ], where we have used Jensen's inequality and tower property in the last line. In view of Assumption <ref> (i), expanding the inner conditional expectation into an integral with respect to regular conditional distribution (cf. <cit.>) then invoking Fubini-Tonelli theorem, we yield [ ∫_B(X_1,2r)(1+∑_ℓ=2^M_^r(x)(X_ℓ) )^-2λ_( x)/λ_(B(X_1,2r))| X_1 ] = ∫_B(X_1,2r)∫_^M-1(1+∑_ℓ=2^M_^r(x)(x_ℓ) )^-2⊗_ℓ=2^M ξ( x_ℓ) λ_( x)/λ_(B(X_1,2r)). For the inner integral in (<ref>), by Assumption <ref> (ii), we have ∫_^M-1(1+∑_ℓ=m+1^M _^r(x)(x_ℓ))^-2⊗_ℓ=2^Mξ( x_ℓ) = ∑_ℓ=0^M-1M-1ℓξ(^r(x))^ℓ(1-ξ^r(^r(x)))^M-1-ℓ(1+ℓ)^-2 = 1/M(M+1)ξ(^r(x))^2∑_ℓ=0^M-1M+1ℓ+2ξ(^r(x))^ℓ+2(1-ξ(^r(x)))^M-1-ℓℓ+2/ℓ+1 ≤2/M(M+1)ξ(^r(x))^2∑_ℓ=2^M+1M+1ℓξ(^r(x))^ℓ(1-ξ(^r(x)))^M+1-ℓ ≤2/M(M+1)ξ(^r(x))^2. This together with (<ref>), (<ref>) and Assumption <ref> (ii) implies [ ( ∫_(μ̂^_^r(x), μ̂^'_^r(x)) ν(x))^2 ] ≤ 82^2C^2/c^2 M(M+1) . Invoking Efron-Stein inequality, we conclude the proof. §.§ Proof of Theorem <ref> In order to prove Theorem <ref>, we first establish a few technical lemmas. The following lemma is a first step toward finding the average rate of k-nearest neigbhor method. Suppose Assumption <ref> and <ref>. Let R be defined in Section <ref>. Then, for any x∈, we have (P_x, _x) ≤ R(k) + L/k∑_m=1^k Z_x^(m), where Z^x_(m), m=1,…,M are the order statistics of (X_m-x_∞)_m=1^M in ascending order. We fix x∈ for the rest of the proof. By Assumption <ref>, we have ( P_x, _x ) = M! _X_1-x_∞≤X_2-x_∞≤…≤X_M-x_∞( P_x, 1/k∑_ℓ=1^k δ_Y_ℓ) = M! ∫_(×)^M_x_1-x_∞≤x_2-x_∞≤…≤x_M-x_∞( P_x, 1/k∑_ℓ=1^k δ_y_ℓ) ⊗_ℓ=1^M ψ( x_ℓ y_ℓ) = M! ∫_^M_x_1-x_∞≤x_2-x_∞≤…≤x_M-x_∞∫_^k( P_x, 1/k∑_ℓ=1^k δ_y_ℓ) ⊗_ℓ=1^k P_x_ℓ( y_ℓ) ⊗_j=1^Mξ( x_ℓ). In view of Lemma <ref>, replacing P_x_ℓ above with P_x, we have ( P_x, _x ) ≤ M! ∫_^M_x_1-x_∞≤x_2-x_∞≤…≤x_M-x_∞∫_^k( P_x, 1/k∑_ℓ=1^k δ_y_ℓ) ⊗_ℓ=1^k P_x( y_ℓ) ⊗_j=1^Mξ( x_ℓ) + L/k∑_ℓ=1^k M! ∫_^M_x_1-x_∞≤x_2-x_∞≤…≤x_M-x_∞ d_(x_ℓ, x) ⊗_j=1^Mξ( x_ℓ) = ∫_^k(1/k∑_l=1^k δ_y_l, P_x) ⊗_ℓ=1^k P_x( y_ℓ) + L/k∑_ℓ=1^k M! ∫_^M_x_1-x_∞≤x_2-x_∞≤…≤x_M-x_∞ d_(x_ℓ, x) ⊗_j=1^Mξ( x_ℓ). In view of R defined above (<ref>) and Z^x_(m) defined in the statement of this lemma, we conclude the proof. The next lemma provides an upper bound to ∑_m=1^k Z_x^(m) listed in Lemma <ref>. Let Z^x_(m) be defined as in Lemma <ref>. Under Assumption <ref>, for any x∈, we have ∑_m=1^k Z^x_(m)≤2/c^1/d_d_M!/Γ(M+1/d_+1)∑_m=1^k ∑_j=0^m-1Γ(j+1/d_)/j!. For any x∈, we compute, since Z^x_(m)∈ [0,1], Z^x_(m) = ∫_0^1 ℙ[Z^x_(m)≥ r] r = ∫_0^1 (1-ℙ[Z^x_(m) < r]) r, and we observe that {Z^x_(m) < r}={N(x,r) ≥ m} with N(x,r) := ♯{ 1 ≤ m ≤ M | X_m-x < r }. We hence have Z^x_(m) = ∫_0^1 (1-ℙ[N(x,r) ≥ m]) r. Since N(x,r) ∼(M, ξ(B(x,r))) and ξ(B(x,r)) ≥cλ_(B(x,r)) ≥cr^/2^ by Assumption <ref> (ii), we obtain that ℙ[N(x,r) ≥ m] ≥ℙ[N'(x,r) ≥ m] with N'(x,r) ∼(M,cr^/2^) due to Lemma <ref>. 
This implies Z^x_(m) ≤∫_0^1 (1-ℙ[N'(x,r) ≥ m]) r = ∫_0^1 ℙ[N'(x,r) < m] r = ∑_j=0^m-1Mj∫_0^1 (cr^/2^)^j (1-cr^/2^)^M-j r = 2/c^1/∑_j=0^m-1Mj∫_0^c/2^ r^1/+j-1(1-r)^M-j r ≤2/c^1/∑_j=0^m-1Γ(M+1)/Γ(j+1)Γ(M-j+1)Γ(1/+j)Γ(M-j+1)/Γ(1/+M+1) = 2 M!/c^1/Γ(1/+M+1)∑_j=0^m-1Γ(1/+j)/j!, and the proof is over. We are now in position to prove Theorem <ref>. By combining Lemma <ref> and Lemma <ref>, noting that the upper bound is constant in x, we have sup_x∈(P_x, μ̂_^k(x)) ≤ R(k) + L/k2M!/c^1/d_Γ(M+1/d_+1)∑_m=1^k ∑_j=0^m-1Γ(j+1/d_)/j!. Below we only investigate the rate of the right hand side of (<ref>) as M→∞, and do not keep track of the constant. We first analyze the second term in the right hand side of (<ref>). By Gautschi's inequality <cit.>, we have Γ(j+1/d_)/j! = Γ(j+1/d_)/Γ(j+1)≤ j^1/d_-1, j∈0∪ . Thus, ∑_m=1^k ∑_j=0^m-1Γ(j+1/d_)/j!≤∑_m=1^k ∑_j=0^m-1 j^1/d_-1≲∑_m=1^k m^1/d_≲ k^1+1/d_. By Gautschi's inequality again, we have M!/Γ(M+1/d_+1) = Γ(M+1)/Γ(M+1/d_+1)≤ M^-1/d_. The above implies sup_x∈(P_x, μ̂_^k(x)) ≲ R(k) + M^-1/d_ k^1/d_. We will split the remainder of the proof into three cases. * For d_=1, by letting k^-1/2∼ M^-1/d_ k^1/d_, we yield k ∼ M^2/d_ + 2 and sup_x∈(P_x, μ̂_^k(x)) ≲ M^-1/d_ + 2 * For d_=2, since the explicit solution of k^-1/2ln k ∼ M^-1/d_ k^1/d_ is elusive, we simply follow the configuration derived in the case of d_=1 and yield k ∼ M^2/d_ + 2 and sup_x∈(P_x, μ̂_^k(x)) ≲ M^-1/d_ + 2ln M. * For d_≥ 3, by letting k^-1/d_∼ M^-1/d_ k^1/d_, we yield k ∼ M^/d_ + d_ and sup_x∈(P_x, μ̂_^k(x)) ≲ M^-1/d_ + d_. The proof is complete. §.§ Proof of Theorem <ref> We will proceed by using Efron-Stein inequality. Let (X_1',Y_1') be an independent copy of (X_1,Y_1), and define ':=(X_1',Y_1'), (X_2,Y_2), …, (X_M,Y_M). In view of Assumption <ref> (i), by the triangle inequality of , it is sufficient to investigate 1/2 M ( ∫_(μ̂^_^k,_(x), μ̂^'_^k,'_(x)) ν( x) )^2 . Note that for (μ̂^_^k,_(x), μ̂^'_^k,'_(x)) to be positive, the event A_x∪ A'_x is necessary, where A_x := { X_1 ∈^k,_(x) } and A'_x := { X_1' ∈^k,_(x) }. Moreover, (μ̂^_^k,_(x), μ̂^'_^k,'_(x)) ≤1/k. It follows that ( ∫_(μ̂^_^k,_(x), μ̂^'_^k,'_(x)) ν( x) )^2 ≤1/k^2( ∫__A_x∪ A'_xν( x) )^2 ≤1/k^2∫__A_x∪ A'_xν( x) ≤2/k^2∫_[A_x] ν( x). where the second inequality is due to the fact that the integral value always fall into in [0,1], and we have used Fubini-Tonelli theorem and the subadditivity of probability in the third inequality. Regarding [A_x], by the symmetry stemming from Assumption <ref> (i) and the random tie-breaking rule in Definition <ref>, we have [A_x] = M-1k-1Mk^-1 = k/M. Consequently, M ( ∫_(μ̂^_^k,_(x), μ̂^'_^k,'_(x)) ν( x) )^2 ≤2/k. Invoking Efron-Stein inequality, we conclude the proof of (<ref>). We now assume additionally that ν≤Cλ_ to prove the second statement. Following from (<ref>), by using the positivity and subadditivity of indicator functions as well as AM–GM inequality, we have ( ∫_(μ̂^_^k,_(x), μ̂^'_^k,'_(x)) ν( x) )^2 ≤4/k^2( ∫__A_xν( x) )^2 ≤4C^2/k^2( ∫__A_xλ_( x) )^2 ≤4C^2/k^2∫_[0,1][ ( ∫__A_xλ_( x) )^2 > δ] δ, where in the second inequality we have used the condition that ν is dominated by λ_, and in the last one the alternative expression of expectation for positive random variables. Let 𝖢𝗎𝖻𝖾^ι_ be the set of cubes within with edge length ι. Since ν is dominated by λ_, with probability 1 we have A_x = {at most (k-1) of X_ℓ,ℓ=2,…,M, falls into B^X_1-x_∞_x}, A'_x = {at most (k-1) of X_ℓ,ℓ=2,…,M, falls into B^X_1'-x_∞_x}. It follows that {∑_m=2^M_B(X_m) > k, ∀ B∈𝖢𝗎𝖻𝖾_^ι}⊆{∫__A_xλ( x) ≤ (2ι)^d_}. 
By combining the above and setting δ=(2ι)^2d_, we yield ( ∫_(μ̂^_^k,_(x), μ̂^'_^k,'_(x)) ν( x) )^2 ≤4C^2/k^2∫_[0,1][ 1/M-1∑_m=2^M_B(X_m) ≤k/M-1, ∀ B∈𝖢𝗎𝖻𝖾_^1/2δ^1/d_] δ In order to proceed, we state and prove a useful technical lemma using the Rademacher complexity technique (cf. <cit.>). Below we let 𝖢𝗎𝖻𝖾_ be the set of cubes inside with edge lengths within [0,1]. Let X_2,…, X_M be introduced in Assumption <ref> (i). For ε≥ 0, [ 1/M-1∑_m=2^M_B(X_m) ≤cλ_(B) - 8 √(2d_ln(M)/M-1) - ε, ∀ B∈𝖢𝗎𝖻𝖾_] ≤exp(-M-1/2ε^2). Let x^M=(x_2^M,…,x_M^M)∈^M-1. To utilize the machinery of Rademacher complexity, we will upper bound the cardinality of the set _B( x^M): B∈𝖢𝗎𝖻𝖾_, where _B applies entry-wise. More precisely, _B( x^M)=(_B(x_2^M),…,_B(x_M^M)). To start with, we first note that for d=1,…,d_, the projected (x_2,d^M,…,x_M,d^M) at most separates axis-d into M intervals. Additionally, each element in _B( x^M): B∈𝖢𝗎𝖻𝖾_ corresponds to selecting two intervals (one for starting and one for ending of the cube) on each axis. Therefore, the cardinality is at most M^2d_, i.e., 𝖢𝗎𝖻𝖾_ has polynomial discrimination 2d_. It follows from <cit.> that, for any ε≥ 0, [ sup_B∈𝖢𝗎𝖻𝖾_|1/M-1∑_m=2^M_B(X_m) - ξ(B) | ≥ 8 √(2d_ln(M)/M-1) + ε] ≤exp(-M-1/2ε^2). Finally, in view of Assumption <ref> (ii), we conclude the proof of Lemma <ref>. In view of (<ref>) and Lemma <ref>, for δ∈[0,1], we consider ε≥ 0 such that k/M-1 = cδ^1/2/2^d_ - 8 √(2d_ln(M)/M-1) - ε. Note that this is feasible only if 4^d_/c^2(8 √(2d_ln(M)/M-1) + k/M-1)^2≤ 1.[We do not include this condition in the statement of Theorem <ref>, as the bound presented remains valid, albeit vacuous, if this condition is not met.] It follows that [ 1/M-1∑_m=2^M_B(X_m) ≤k/M-1, ∀ B∈𝖢𝗎𝖻𝖾_^1/2δ^1/2d_] ≤ 1, δ∈[0, 4^d_/c^2(8 √(2d_ln(M)/M-1) + k/M-1)^2], exp(-M-1/2(cδ^1/2/2^d_ - 8 √(2d_ln(M)/M-1) - k/M-1)^2), δ∈(2^d_/c(8 √(2d_ln(M)/M-1) + k/M-1)^2,1]. The above together with (<ref>) implies 1/2 M ( ∫_(μ̂^_^k,_(x), μ̂^'_^k,'_(x)) ν( x) )^2 ≤2C^2 M/k^2( 4^d_/c^2( 8√(2d_ln(M)/M-1) + k/M-1)^2 + ∫_2^d_/c(8 √(2d_ln(M)/M-1) + k/M-1)^1 exp(-M-1/2(cη/2^d_ - 8 √(2d_ln(M)/M-1) - k/M-1)^2) 2ηη), where we have performed a change of variable η=δ^1/2 in the last line. Relating to exponential and normal density functions, we calculate the integral to obtain 1/2 M ( ∫_(μ̂^_^k,_(x), μ̂^'_^k,'_(x)) ν( x) )^2 ≤2C^2 M/k^24^d_/c^2( ( 8√(2d_ln(M)/M-1) + k/M-1)^2 + √(2π)/√(M-1)(8 √(2d_ln(M)/M-1) + k/M-1) + 4/M-1), where we note the right hand side is of O((√(ln(M))/k+1/√(M))^2 + 1/k(√(ln(M))/k+1/√(M)) + 1/k^2). Invoking Efron-Stein inequality, we conclude the proof. §.§ Proof of Proposition <ref> By triangle inequality, ∫_(P_x,_x) x ≤∫_(P_x,P^Θ_x) (λ_-1/N∑_n=1^Nδ_X̃_n)( x) + 1/N∑_n=1^N(P_X̃_n,_X̃_n) + 1/N∑_n=1^N(_ X_n,_X̃_n). Then, by Assumption <ref> and (<ref>), ∫_(P_x,P^Θ_x) (λ_-1/N∑_n=1^Nδ_X̃_n)( x)≤(L+L^Θ)(λ_,1/N∑_n=1^Nδ_X̃_n). In view of Assumption <ref>, we have 1/N∑_n=1^N(P_X̃_n,_X̃_n) = 1/N∑_n=1^N(P_X̃_n,_X̃_n)| X̃_n = ∫_(P_x, _x) x . Combining the above, we prove the first statement. As for the second statement, consider Q,Q':→() that are Lipschitz-continuous with constants L, L'. Suppose that (Q_x^*,Q'_x^*) = sup_x∈(Q_x,Q'_x) = δ for some δ> 0 and x^*∈. This supremum is indeed attainable because is compact that x↦(Q_x,Q'_x) is continuous. Consequently, by triangle inequality and the Lipschitz-continuity, we have (Q_x, Q'_x) ≥((Q_x, Q'_x^*) - (Q'_x, Q'_x^*)) ∨ 0 ≥( (Q_x^*,Q'_x^*) -(Q_x^*,Q_x) - (Q'_x^*,Q'_x)) ∨ 0 ≥(δ - (L+L')x-x^*_∞) ∨ 0, x∈. 
We may then lower bound ∫_(Q_x,Q'_x) x with the volume of the cone on right hand side above (note the worst case is when x^*=(0,0)), ∫_(Q_x,Q'_x) x ≥∫_0^δ(δ-z/L+L')^d_ z = δ^d_+1/(d_+1) (L+L')^d_. It follows that sup_x∈(Q_x,Q'_x) ≤ (d_+1)^1/d_+1 (L+L')^d_/d_+1(∫_(Q_x,Q'_x) x)^1/d_+1, which completes the proof. § IMPLEMENTATION DETAILS AND ABLATION ANALYSIS In this section, we will provide further implementation details and conduct ablation analysis of the components highlighted in Section <ref>. §.§ Comparing ANNS-RBSP to exact NNS Algorithm <ref> outlines a single slice of RBSP, which divides an array of x's into two arrays of a random ratio along a random axis. Throughout the training, we execute RBSP 5 times during each training epoch, yielding 2^5=32 parts. Within each part, we then select a small batch of 8 query points, locating the k nearest neighbors for each query point within the same part. In Table <ref>, we compare the execution times of exact NNS and ANNS-RBSP. ANNS-RBSP offers considerable time savings for M=10^6, while exact NNS is more efficient for M=10^5 or fewer. It's important to note that ANNS-RBSP may introduce additional errors by inaccurately including points that are not within the k nearest neighbors. As elucidated in the proof of Theorem <ref>, the magnitude of this induced error can be understood by comparing the excessive distance incurred to that of exact NNS. For simplicity, we investigate the difference below Δ := 1/N_batch∑_i=1^N_batch(1/k∑_j=1^k X̌'_ij-X̃_i_1 - 1/k∑_j=1^k X̌_ij-X̃_i_1), where X̃_i's are query points, and X̌_ij, X̌'_ij are the k-nearest-neighbor identified by exact NNS and ANNS-RBSP, respectively. In our experiments, we evaluated scenarios with d_=3,10 and k=300. Regarding the data, we generated M=10^4,10^5,10^6 samples from ([0,1]^d_). Once the data set is generated, we fixed the data and conducted 100 simulations of Δ, each with N_batch=256 query points. This process was repeated 10 times, each with a separately generated data. The results are illustrated in Figure <ref>. It is expected that Δ will approach 0 as the sample size M tends to infinity. The convergence rate is likely influenced by factors such as d_, k, and N_batch. Further analysis of the convergence of ANNS-RBSP will be conducted in future studies. §.§ An implementation of the Sinkhorn algorithm In this section, we will detail our implementation of the Sinkhorn algorithm and highlight a few novel treatments that seem to enhance the training of the neural estimator. While the mechanisms are not yet fully understood, they constitute important improvement in the accuracy of the neural estimator. Let us first recall the iterative procedure involved in the Sinkhorn algorithm. We follow the setup in Section <ref>. In particular, the row indexes of the cost matrix stand for atoms in the empirical measures, while the column indexes stand for atoms produced by the neural estimator. We set N_atom=k and let 𝗎^(0), 𝗏^(0) be column vectors of size k with all entries being k^-1. We will suppress the dependence on y from the notation. Upon setting ^ϵ := exp(-/ϵ) with entry-wise exponential, the Sinkhorn algorithm performs repeatedly 𝗎^(ℓ+1) = 𝗎^(0)/^ϵ𝗏^(ℓ) and 𝗏^(ℓ+1) = 𝗏^(0)/(^ϵ)^⊤𝗎^(ℓ+1), where the division is also calculated entry-wise. After a certain number of iterations, denoted as N_iter, we obtain an approximate optimal transport plan for problem (<ref>): ^ϵ = (𝗎^(N_iter)) ^ϵ(𝗏^(N_iter)). Let us set ϵ=1 momentarily. 
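For concreteness, a minimal NumPy sketch of these iterations in the case N_atom=k (so that both marginals are uniform with mass k^-1) is given below; the function name is ours, and the normalization and scheduling caveats discussed next still apply.

import numpy as np

def sinkhorn_plan(C, eps=1.0, n_iter=10):
    # C: (k, k) cost matrix; returns the approximate transport plan Pi^eps
    # obtained from the u/v updates above with uniform marginals 1/k.
    k = C.shape[0]
    u0 = np.full(k, 1.0 / k)
    v0 = np.full(k, 1.0 / k)
    K = np.exp(-C / eps)
    u, v = u0.copy(), v0.copy()
    for _ in range(n_iter):
        u = u0 / (K @ v)
        v = v0 / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)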
Note that if the entries of are excessively large, effectively becomes a zero matrix, which impedes the computations in (<ref>). This issue may occur at the initiation of the neural estimator or during training, possibly due to the use of stochastic gradient descent. To tackle this issue, we employ a rule-of-thumb normalization that ^ϵ := exp(-/c̃ϵ) with c̃ := min_imax_j_ij, and use ^ϵ instead of ^ϵ in (<ref>). Regarding the selection of ϵ and the number of iterations, we currently lack a method for adaptively determining these values. Instead, we adjust them manually based on training episodes. This manual adjustment works well for all models discussed in this paper. For more information, please see Appendix <ref>. As alluded to in Section <ref>, we enforce sparsity on the transport plan to improve the performance of the neural estimator. Let ^ϵ be the output of the Sinkhorn algorithm. We construct ^ϵ and ^ϵ by setting the row-wise and column-wise maximum of ^ϵ to k^-1, respectively, and setting the remaining entries to 0. We then use ^ϵ = γ^ϵ + (1-γ) ^ϵ, where γ∈[0,1] is a hyper-parameter, in gradient descent (<ref>). We observe that ^ϵ relates each atom in the empirical measure to a single corresponding atom from the neural estimator, and ^ϵ does the same in reverse. The optimal choice of γ remains an open question, though we have set γ=0.5 in all three models. Next, we explore the impact of enforcing sparsity and varying the choices of γ. Figure <ref> compares the performance in Model 1 under different sparsity parameters. When no sparsity is enforced, the neural estimator tends to handle singularities more adeptly, but may overlook points located on the periphery of the empirical joint distribution, potentially resulting in overly concentrated atoms from the neural estimator (see around x=0.1, 0.9). Compare Figures <ref> and <ref> for the extra error due to the lack of enforced sparsity. This phenomenon is more noticeable in Model 3. We refer to panel (2,3) of Figure <ref> in Appendix <ref> for an example. Moreover, Figure <ref>, which is obtained without enforced sparsity, indicates a downgrade in accuracy when compared to Figure <ref>. Finally, it is not recommended to use ^ϵ at the early stages of training, as our empirical experiments suggest this could deteriorate performance. In training, we start by not enforcing sparsity and then begin to enforce it in later episodes. We refer to Appendix <ref> for further details of the training configuration. §.§ More on LipNet We will investigate the impact of various hyper-parameters on the performance of LipNet. The LipNets presented in this section are trained with the same hyper-parameters as in Section <ref> (see also Appendix <ref>), except for those specified otherwise. §.§.§ Activation function Switching the activation function from ELU to Rectified Linear Unit (ReLU) appears to retain the adaptive continuity property. In Figure <ref>, we illustrate the joint distribution and the average absolute derivatives of all atoms of LipNet with ReLU activation. The outcomes are on par with those achieved using ELU activation as shown in Figure <ref>. §.§.§ Value of L in (<ref>) Note that the LipNets discussed in Section <ref> were trained with L=0.1. If the normalizing constants in LipNet are exactly computed, L reflects the Lipschitz constant of LipNet, up to the discrepancy in the choice of norms in different layers. The effect of L in our implementation, however, is rather obscure.
Figure <ref> showcases the performance of LipNets across various L values in Model 1. The comparison in Model 2 is presented in Figure <ref> in Appendix <ref>. The best choice of L appears to depend on the ground truth model. For Model 3, we compared the performance of L=0.1 and L=1 and observed no significant differences. Generally, we prefer a smaller L; however, smaller values of L tend to exhibit greater sensitivity to other training parameters. For instance, in Model 3, with L=0.1, starting to enforce sparsity too soon leads to significantly poorer performance, while the impact on the outcomes for L=1 is much less noticeable. §.§.§ Momentum τ in Algorithm <ref> In our training of LipNet, we use τ=10^-3. Figure <ref> demonstrates the impact of various τ values on the neural estimator's performance in Model 1. It is clear that the performance declines with a τ that is too large. While we initially speculated that a smaller τ might cause atoms to exhibit more erratic movements as x changes, observations contradict this hypothesis. We now believe that a suitable τ value helps prevent neurons from stagnating in the plateau region of the ELU activation function. This is supported by the outcomes observed with τ=10^-6, where atom movements are overly simplistic. Additional comparisons in Model 2 are presented in Figure <ref>. We also considered, as a potential improvement, including batch normalization in the convex potential layer (<ref>) right after the affine transformation, along with a corresponding offset in the position of _2; however, our experiments with both ELU and ReLU activations, using the default batch normalization momentum of 0.1, resulted in reduced performance. Lowering said batch normalization momentum often leads to a poorly behaved network. § WEAKNESSES AND POTENTIAL IMPROVEMENTS In this section, we provide some discussion on the weaknesses and possible improvements of our implementation in Section <ref>. Extra correction. In more realistic scenarios, the true conditional distribution is often unknown or intractable. In such cases, it is unclear whether a neural estimator offers extra correction over raw estimators. A potential solution to this issue is to train StdNet and LipNet simultaneously. If StdNet and LipNet align more closely with each other than with the raw estimator involved in their training, it is possible that the neural estimators are providing extra corrections. Hyper-parameters for the Sinkhorn algorithm. Our implementation of the Sinkhorn algorithm involves several hyper-parameters: (i) k in Definition <ref>; (ii) N_atom in (<ref>); (iii) ϵ in (<ref>); (iv) γ in (<ref>); and (v) additional hyper-parameters listed in Section <ref>. The impact of these hyper-parameters is not yet fully understood. Additionally, an adaptive ϵ that balances the accuracy and stability of the Sinkhorn iteration is desirable. Furthermore, as illustrated in Section <ref>, enforcing sparsity on the transport plan generally yields better approximations at x where the conditional distribution is more diffusive, but may perform worse where the conditional distribution exhibits atoms. This observation motivates further investigation into a sparsity policy that adjusts according to the indications from the raw estimator. Adaptive continuity. The impact of hyper-parameters in LipNet also warrants further investigation. In addition, despite the results presented in this study, more evidence is needed to understand how LipNet and its variations perform under various conditions. Scalability.
While the implementation produces satisfactory results when M and k are relatively small (recall that we set N_atom=k), our further experiments indicate a scalability bottleneck. For example, in Model 1, significantly increasing M and k does not necessarily improve the performance of neural estimators in a comparable manner. To address this issue, we could experiment with varying the ratios between N_atom and k, rather than setting them equal, in hopes of reducing the strain on the Sinkhorn algorithm. We note that varying the ratio between N_atom and k requires adjusting the enforced sparsity accordingly. Another issue relates to the dimensions of and . In view of the curse of dimensionality in Theorem <ref>, our method is inherently suited for low-dimensional settings. Fortunately, in many practical scenarios, the data exhibits low-dimensional structures, such as: (i) the sampling distribution of X concentrating on a low-dimensional manifold; and (ii) the mapping x ↦ P_x exhibiting low-dimensional dependence. For (i), we might resort to dimension reduction techniques, although an extension of the results in Section <ref> has yet to be established. For (ii), a data-driven method that effectively leverages the low-dimensional dependence is of significant interest. Conditional generative models. Utilizing a conditional generative model could potentially lead to further improvements. One advantage of conditional generative models is the ease of incorporating various training objectives. For instance, it can easily adapt to the training objectives in (<ref>) to accommodate multiple different hyper-parameters simultaneously. We may also incorporate the joint empirical measure in the training process. This flexibility also allows for the integration of specific tail conditions as needed. Lastly, we would like to point out an issue observed in our preliminary experiments when utilizing a naïve conditional generative model: it may assign excessive probability mass to the blank region between two distinct clusters (for example, in Model 1 around (x,y)=(0.1,0.5)). This possibly stems from the inherent continuity of neural networks. One possible solution is to consider using a mixture of multiple conditional generative models. § ADDITIONAL PLOTS § CONFIGURATION OF NETWORK COMPONENTS AND TRAINING PARAMETERS The table below summarizes the configuration of the neural network and the training procedure. It applies to both StdNet and LipNet in all models.
Network component / training parameter | Configuration | Note
Sample size | 1e4 for Models 1 & 2; 1e6 for Model 3 |
k | 100 for Models 1 & 2; 300 for Model 3 | See Definition <ref>
Network structure | StdNet: layer-wise residual connection <cit.>, batch normalization <cit.> after affine transformation; LipNet: layer-wise residual connection <cit.> with convex potential layer <cit.> |
Input dimension | d_ |
Output dimension | d_× N_atom | N_atom=k, see (<ref>)
Number of hidden layers | 5 |
Number of neurons in each hidden layer | 2k | k as in Definition <ref>
Activation function | StdNet: ReLU; LipNet: ELU | See Section <ref>
L | 0.1 | Introduced in (<ref>)
τ | 1e-3 | See Algorithm <ref>
Optimizer | Adam <cit.> with learning rate 10^-3 | Learning rate is 0.01 for StdNet in Models 1 & 2
Batch size | 100 for Models 1 & 2; 256 for Model 3 |
Number of episodes | 5e3 for Models 1 & 2; 1e4 for Model 3 |
RBSP setting | 2^5 partitions, 8 query points each part | See Section <ref>
Random bisecting ratio | ∼([0.45,0.55]) | Introduced in Section <ref>; see also Algorithm <ref>
Ratio for mandatory slicing along the longest edge | 5 | Introduced in Section <ref>; see also r_edge in Algorithm <ref>
Number of Sinkhorn iterations | 5 if epoch ≤ 500; 10 if epoch > 500 |
ϵ | 1 if epoch ≤ 100; 0.1 if epoch ∈[100,500]; 0.05 if epoch > 500 | Introduced in (<ref>)
Enforced sparsity | Off if epoch ≤ 500; On if epoch > 500 | See Section <ref>
γ | 0.5 | Introduced in (<ref>)
§ ANOTHER SET OF RESULTS ON FLUCTUATION §.§ On r-box estimator Under Assumptions <ref> and <ref>, and choosing r as in Theorem <ref>, let ν∈() be dominated by λ_ with constant C>0. Then, there is a constant C>0 (which depends only on d_,c, C and the constants involved in r), such that, for any ε≥ 0, we have [ ∫_(P_x, _x) ν(x) ≥∫_(P_x, _x) ν(x) + ε] ≤exp(-CM^2/d_+2ε^2), d_=1,2, exp(-CM^d_/d_+d_ε^2), d_≥ 3. Let ν∈() be as in the statement of the theorem. We define Z := ∫_(P_x, _x) ν(x), and introduce the following discrete time filtration: _0:=∅,Ω and _m:=σ(⋃_i=1^mσ(X_i,Y_i)) for m=1,…,M. We consider the Doob's martingale Z_m := Z | _m, m=1,…,M. Note that Z_M=Z. We will apply the Azuma-Hoeffding inequality (cf. <cit.>) to complete the proof. Let us define ^m := (X_1,Y_1),…, (X_m,Y_m), (x_m+1,y_m+1), …, (x_M,y_M), m=1,…,M, ^0:=(x_ℓ,y_ℓ)_ℓ=0^M, and ^M:=. Note that, for all m<M, we have, by Assumptions <ref> (i), the conditional Fubini-Tonelli theorem, and the independence lemma, Z_m = ∫_∫_(×)^M-m(P_x, μ̂^^m_^r(x)) ⊗_ℓ=m+1^Mψ( x_ℓ y_ℓ) ν( x). This together with the linearity of the integral, the fact that ψ is a probability, and the triangle inequality of implies that for m=1,…,M, |Z_m-Z_m-1| ≤∫_∫_(×)^M-m+1(μ̂^^m_^r(x),μ̂^^m-1_^r(x)) ⊗_ℓ=m^Mψ( x_ℓ y_ℓ) ν( x). Notice that, by definitions (<ref>) and (<ref>), {μ̂^_m_^r(x)≠μ̂^_m-1_^r(x)}⊆{X_m ∈^r(x)}∪{x_m ∈^r(x)}. Additionally, by definitions (<ref>) and (<ref>) again, on the event that {μ̂^_m_^r(x)≠μ̂^_m-1_^r(x)}, we have (μ̂^^m_^r(x), μ̂^^m-1_^r(x)) ≤(1+∑_ℓ=1^m-1_^r(x)(X_ℓ) + ∑_ℓ = m+1^M _^r(x)(x_ℓ))^-1≤(1+∑_ℓ=m+1^M _^r(x)(x_ℓ))^-1. Combining (<ref>),(<ref>), (<ref>), and the Fubini-Tonelli theorem, we get |Z_m-Z_m-1| ≤∫_∫_B(X_m,2r)∪ B(x_m,2r)∫_^M+1-m(1+∑_ℓ=m+1^M _^r(x)(x_ℓ))^-1⊗_ℓ=m+1^Mξ( x_ℓ) ν( x) ξ( x_m) ≤sup_x_m∈∫_B(X_m,2r)∪ B(x_m,2r)∫_^M+1-m(1+∑_ℓ=m+1^M _^r(x)(x_ℓ))^-1⊗_ℓ=m+1^Mξ( x_ℓ) ν( x) where the 2r in the domain of the integral stems from the usage of β^r in the definition of ^r (see Definition <ref>).
Now, for fixed x,x_m ∈, we have ∫_^M-m+1(1+∑_ℓ=m+1^M _^r(x)(x_ℓ))^-1⊗_ℓ=m+1^Mξ( x_ℓ) = ∑_ℓ=0^M-mM-mℓξ(^r(x))^ℓ(1-ξ^r(^r(x)))^M-m-ℓ(1+ℓ)^-1 = 1/(M-m+1)ξ(^r(x))∑_ℓ=0^M-mM-m+1ℓ+1ξ(^r(x))^ℓ+1(1-ξ(^r(x)))^M-m-ℓ = 1/(M-m+1)ξ(^r(x))∑_ℓ=1^M-m+1M-m+1ℓξ(^r(x))^ℓ(1-ξ(^r(x)))^M-m+1-ℓ = 1- (1-ξ(^r(x)))^M-m+1/(M-m+1)ξ(^r(x))≤ 1 ∧( (M-m+1)ξ(^r(x)))^-1≤ 1 ∧( (M-m+1) c (2r)^)^-1, where we have used Assumption <ref> (ii) in the last inequality. Recall C introduced in Theorem <ref>. In view of (<ref>), we have |Z_m-Z_m-1| ≤sup_x_m∈∫_B(X_m,2r) ∪ B(x_m,2r) 1 ∧( (M-m+1) c (2r)^)^-1ν( x) ≤ 2 C (4r)^( 1 ∧( (M-m+1) c (2r)^)^-1) = (C 2^2+1r^) ∧C 2^+1/c (M-m+1) := C_m. By Azuma-Hoeffding inequality (cf. <cit.>), one obtains (Z-Z≥ε) ≤exp(-2ε^2/∑_m=1^M C_m^2). To complete the proof, we substitute in the configuration of Theorem <ref>. Since we only aim to investigate the rate of ∑_m=1^M C_m^2 as M→∞, we simply set r = M^-1/d_+d with d := 2 ∨. It follows that ∑_m=1^M C_m^2 ∼∑_m=1^M M^-2/+d∧ m^-2≲∫_1^∞ M^-2/+d∧ z^-2 z ∼∫_1^M^/+d M^-2/+d z + ∫_ M^/+d^∞ z^-2 z ∼ M^-/+d, which completes the proof. §.§ On k-nearest-neighbor estimator Under Assumptions <ref> and <ref>, and the choice of k as in Theorem <ref>, there is a constant C>0 (which depends only on c and the constants involved in k), such that, for any ν∈() and ε≥ 0, we have [ ∫_(P_x, _x) ν( x) ≥∫_(P_x, _x) ν( x) + ε] ≤exp(-CM^2/d_+2ε^2), d_=1,2, exp(-CM^d_/d_+d_ε^2), d_≥ 3. For notational convenience, we will write μ̂^_^k(x) for μ̂^_^k,(x). Clearly, with =, we recover μ̂^_^k(x)=μ̂^_^k,(x)=_x. In what follows, we let Z := ∫_(P_x, _x) ν( x). We also define _0:=∅,Ω and _m:=σ(⋃_i=1^mσ(X_i,Y_i)) for m=1,…,M. The proof relies on an application of Azuma-Hoeffding inequality (cf. <cit.>) to the Doob's martingale Z|_m_m=0^M. In order to proceed, we introduce a few more notations: x := (x_1,…,x_M), := (X_1,…,X_M), ^m := (X_1,…,X_m, x_m+1,…, x_M), ^m := (X_1,Y_1),…, (X_m,Y_m), (x_m+1,y_m+1), …, (x_M,y_M), η^k, x_x := the k-th smallest of x_m-x_∞_m=1^M. By independence lemma, we have Z | _m = ∫_(×)^M-m∫_(P_x, μ̂^^m_^k(x)) ν( x) ⊗_ℓ=m+1^M ψ( x_ℓ y_ℓ) = ∫_(×)^M-m∫_( P_x, 1/k( ∑_i=1^m _X_i-x_∞≤η^k,^m_xδ_Y_i + ∑_ℓ=m+1^M _x_ℓ-x_∞≤η^k,^m_xδ_y_ℓ) ) ν( x) ⊗_ℓ=m+1^M ψ( x_ℓ y_ℓ), where we note that (×)^M-m and ⊗_ℓ=m+1^M ψ( x_ℓ y_ℓ) in the right hand side can be replaced by (×)^M-m+1 and ⊗_ℓ=m^M ψ( x_ℓ y_ℓ) as the integrand is constant in x_m and ψ is a probability measure. Therefore, by Fubini's theorem and triangle inequality for , we have | Z | _m - Z | _m-1| ≤∫_∫_(×)^M-m+1( 1/k( ∑_i=1^m _X_i-x_∞≤η^k,^m_xδ_Y_i + ∑_ℓ=m+1^M _x_ℓ-x_∞≤η^k,^m_xδ_y_ℓ) ., . 1/k( ∑_i=1^m-1_X_i-x_∞≤η^k,^m-1_xδ_Y_i + ∑_ℓ=m^M _x_ℓ-x_∞≤η^k,^m-1_xδ_y_ℓ) ) ⊗_ℓ=m^M ψ( x_ℓ y_ℓ) x. Above, the only difference between the two measures inside is the m-th summand. Due to the definition of and the boundedness of , the transport cost induced by altering the m-th summand is at most k^-1. It follows that | Z | _m - Z | _m-1| ≤1/k, m=1,…,M. Below we further refine the upper bound of the absolute difference in the left hand side of (<ref>) when m=1,…,M-k. For the integrand in the right hand side of (<ref>) to be positive, it is necessary that _X_m-x_∞≤η^k,^m_x + _x_m-x_∞≤η^k,^m-1_x≥ 1. This, together with the tie breaking rule stipulated in Definition <ref>, further implies that _A^m_1 + _A^m_2≥ 1, where A_1^m := {at most (k-1) of x_ℓ,ℓ=m+1,…,M-m, falls into B^X_m-x_∞_x}, A_2^m := {at most (k-1) of x_ℓ,ℓ=m+1,…,M-m, falls into B^x_m-x_∞_x}. 
Combining the above with the reasoning leading to (<ref>), we yield | Z | _m - Z | _m-1| ≤1/k( ∫_∫_(×)^M-m_A_1^m⊗_ℓ=m+1^M ξ( x_ℓ) ν( x) + ∫_∫_(×)^M-m+1_A_2^m⊗_ℓ=m^M ξ( x_ℓ) ν( x) ) Above, we have replaced ψ in (<ref>) by ξ because A_1^m and A_2^m no longer depend on y_ℓ,ℓ=m+1,…,M. The analogue applies to the domain of integral as well. We continue to have | Z | _m - Z | _m-1| ≤1/k( ∫_∫_(×)^M-m_A_1^m⊗_ℓ=m+1^M ξ( x_ℓ) ν( x) + ∫_∫_(×)^M-m+1_A_2^m⊗_ℓ=m^M ξ( x_ℓ) ν( x) ) =: 1/k (I_1^m + I_2^m). Regarding I^1_m defined in (<ref>), note that by Assumption <ref>, ∫_(×)^M-m_A_1^m⊗_ℓ=m+1^M ξ( x_ℓ) = [ at most (k-1) of X̌_1,…,X̌_M-m falls into B^x'-x_∞_x] |_x'=X_m, where X̌_1,…,X̌_M-mi.i.d.∼ξ. Below we define a CDF G(r):=c r^d, r∈[0,c^-1/d]. By Assumption <ref> (ii), for any x,x'∈, we have ∫__x̌∈ B_x^x'-x_∞ξ(x̌) ≥c∫__x̌∈ B_x^x'-x_∞x̌≥c∫__x̌∈ B_ 0^x'-x_∞x̌ = G(x'-x_∞), where we have used the fact that x'-x_∞≤ 1 ≤c^-1/d in the last equality. It follows from Lemma <ref> that ∫_(×)^M-m_A_1^m⊗_ℓ=m+1^M ξ( x_ℓ) ≤∑_j=0^k-1M-mj G(X_m-x_∞)^j(1-G(X_m-x_∞))^M-m-j, and thus, by letting U∼(), I_1^m ≤∫_∑_j=0^k-1M-mj G(X_m-x_∞)^j(1-G(X_m-x_∞))^M-m-j x = ∑_j=0^k-1M-mj G(x'-U_∞)^j(1-G(x'-U_∞))^M-m-j|_x'=X_m, where we note that the upper bounded no longer involves ν. For x'∈, it is obvious that [x'-U_∞≤ r] ≥[U_∞≤ r], r∈, i.e., U_∞ stochastically dominates x'-U_∞. Note additionally that, by Lemma <ref> again, below is a non-decreasing function, r ↦∑_j=0^k-1M-mj G(r)^j(1-G(r))^M-m-j. Consequently, I_1^m ≤∑_j=0^k-1M-mj G(U_∞)^j(1-G(U_∞))^M-m-j x . Since U_∞ has CDF r↦ r^d_, r∈[0,1] and G(r)=c r^d, r∈[0,c^-1/d], we continue to obtain I^m_1 ≤∑_j=0^k-1M-mj∫_r=0^1 c r^d_j (1- c r^d_)^M-m-j r^d_≤c^-1∑_j=0^k-1(M-m)!/j! (M-m-j)!∫_0^1 r^j (1-r)^M-m-j r. With a similar calculation as in (<ref>), which involves beta distribution and gamma function, we arrive at I^m_1 ≤c∑_j=0^k-1(M-m)!/j! (M-m-j)!j! (M-m-j)!/(M-m+1)!≤c^-1 k/M-m. Regarding I^m_2 defined in (<ref>), we first let X̌_0, X̌_1,…,X̌_M-mi.i.d.∼ξ. Then, note that ∫_(×)^M-m_A_2^m⊗_ℓ=m+1^M ξ( x_ℓ) ≤[ at most (k-1) of X̌_1,…,X̌_M-m falls into B^X̌_0-x_∞_x] ≤M-m-1k-1M-mk^-1 = k/M-m, where the inequality in the second line is due to the symmetry stemming from Assumption <ref> (i), and the fact that congestion along with the tie-breaking rule specified in Definition <ref> may potentially rules out certain permutations. Consequently, I^m_2 ≤k/M-m. Putting together (<ref>), (<ref>), (<ref>), and (<ref>), we yield | Z | _m - Z | _m-1| ≤ C_m := C(c^-1+1)/M-m∧1/k, m=1,…,M. By Azuma-Hoeffding inequality (cf. <cit.>), [ ∫_(P_x, μ̂^_^k(x)) ν( x) - ∫_(P_x, μ̂^_^k(x)) ν( x) ≥ε] ≤exp( - ε^2/2∑_m=1^M C_m^2), ε≥ 0. To complete the proof, we substitute in the configuration of Theorem <ref>. Below we only investigate the rate of ∑_m=1^M C_m^2 as M→∞, and do not keep track of the constant. For simplicity, we set k = k∼ M^d/d_+d with d := 2 ∨ It follows that ∑_m=1^M C_m^2 ∼∑_m=1^⌊ M - M^d/d_+d⌋1/(M-m)^2 + M^d/d_+d/M^2d/d_+d∼∫_M^d/d_+d^∞1/r^2 r + 1/M^d/d_+d∼ M^-d/d_+d, which completes the proof. alpha
http://arxiv.org/abs/2406.07882v1
20240612052016
Designing a Dashboard for Transparency and Control of Conversational AI
[ "Yida Chen", "Aoyu Wu", "Trevor DePodesta", "Catherine Yeh", "Kenneth Li", "Nicholas Castillo Marin", "Oam Patel", "Jan Riecke", "Shivam Raval", "Olivia Seow", "Martin Wattenberg", "Fernanda Viégas" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.HC" ]
[2]Co-advisors. Work done at Harvard. [ Chengxi Wang June 17, 2024 ================= § ABSTRACT Conversational LLMs function as black box systems, leaving users guessing about why they see the output they do. This lack of transparency is potentially problematic, especially given concerns around bias and truthfulness. To address this issue, we present an end-to-end prototype—connecting interpretability techniques with user experience design—that seeks to make chatbots more transparent. We begin by showing evidence that a prominent open-source LLM has a “user model”: examining the internal state of the system, we can extract data related to a user's age, gender, educational level, and socioeconomic status. Next, we describe the design of a dashboard that accompanies the chatbot interface, displaying this user model in real time. The dashboard can also be used to control the user model and the system's behavior. Finally, we discuss a study in which users conversed with the instrumented system. Our results suggest that users appreciate seeing internal states, which helped them expose biased behavior and increased their sense of control. Participants also made valuable suggestions that point to future directions for both design and machine learning research. The project page and video demo of our system are available at https://yc015.github.io/TalkTuner-a-dashboard-ui-for-chatbot-llm/bit.ly/talktuner-project-page. § INTRODUCTION Conversational Artificial Intelligence (AI) interfaces hold broad appeal—OpenAI's ChatGPT reports more than 100 million users and 1.8 billion monthly page visits <cit.>—but also have essential limitations. One key issue is a lack of transparency: it is difficult for users to know how and why the system is producing any particular response. The obvious strategy of simply asking the system to articulate its reasoning turns out not to work, since Large Language Models (LLMs) are highly unreliable at describing how they arrived at their own output, often producing superficially convincing but spurious explanations <cit.>. Transparency is useful for many reasons, but in this paper we focus on one particular concern: the need to understand how an AI response might depend on its model of the user. LLM-based chatbots appear to tailor their answers to user characteristics. Sometimes this is obvious to users, such as when conversing in a language with gendered forms of the word “you” <cit.>. But it can also happen in subtler, more insidious ways, such as “sycophancy,” where the system tries to tell users what they are likely to want to hear, based on political and demographic attributes, or “sandbagging,” where it may give worse answers to users who give indications of being less educated <cit.>. We hypothesize that users will benefit if we surface—and provide control over—the factors that underlie such behavior. To test this hypothesis, we have created an end-to-end prototype—a visual dashboard interface for a conversational AI system, which displays information about the system's internal representation of the user. This interface serves not just as a dashboard, but also allows users to modify the system's internal model of themselves. Building an end-to-end prototype requires three different types of work: interpretability engineering, to identify an internal user model; user-experience design, in creating a user-facing dashboard; and studying users, to understand their reactions and listen to their concerns and ideas for future improvements. 
For the first step, we based on work on LLaMa2Chat-13B, an open-source large language model optimized for chat <cit.>. Within the model's activations, we identified approximate internal representations of four important user characteristics (age, gender, education level, and socioeconomic status) via linear probes (in a manner similar to <cit.>). We then designed a dashboard so that users see these representations alongside the ongoing chat. Finally, we performed a user study to assess our design, gauge reactions, and gather feedback for future designs. Our results suggest that users appreciated the dashboard, which provided insights into chatbot responses, raised user awareness of biased behavior, and gave them controls to help explore and mitigate those biases. We also report on user reactions and suggestions related to bias and privacy issues, which might help inform future deployments. § BACKGROUND AND RELATED WORK Chatbot interfaces have been studied for decades <cit.>, and their lack of transparency has been a perennial issue. When users interact with black-box algorithms they often develop “folk theories” to explain what they observe <cit.>, and modern LLMs are no exception <cit.>. This tendency can lead to an overly high degree of trust in these systems <cit.>—an effect initially seen with a chatbot in the 1960s, ELIZA <cit.>, and continuing in recent years <cit.>. One particular concern is the presence of bias in responses, which can be difficult to detect and thus may be accepted at face value <cit.>. One tempting way to understand a chatbot is to talk to it— i.e., simply ask for a natural-language explanation of its output. Unfortunately, current LLMs appear to be highly unreliable narrators, describing their reasoning in ways that are convincing yet spurious <cit.> or even avoiding the question altogether. A more heavyweight approach is taken by tools that analyze LLM behavior to help developers search for bias <cit.> or make more general comparisons <cit.>. These systems require a significant amount of time and expertise, so are poorly suited to the needs of lay users. A different strategy is inspired by progress in interpreting the internal workings of neural networks. In particular, some evidence suggests that LLMs may contain interpretable “world models” which play an important role in their output (see <cit.> for a review). Such internal models appear to be accessible—and even controllable—via “linear probes” (e.g., <cit.>). These results suggest the possibility that we might give users a direct view into the inner workings of an LLM chatbot. The idea of surfacing such data to end users in the form of an easy-to-read dashboard was raised in <cit.>. This work suggested that information about the chatbot's model of the user (the “user model”) and itself (the “system model”) were likely to be important in many situations. A related proposal <cit.> suggested using “representation engineering” for similar purposes, based on extensive experiments using a probing methodology called “linear artificial tomography.” Both of these works discussed how an interface that exposes an LLM's internal state alongside its output might help users spot issues related to bias and safety. Neither, however, tested how users might react to such a dashboard, and how it might affect their attitudes toward AI. § OVERALL DESIGN METHODOLOGY Our methodology is to build and study a “design probe” <cit.>. 
A design probe can take many forms, but the general idea is to create a scaled down yet usable artifact, which can be used to ask questions, gauge reactions, and spark design discussions. For the present work, our design probe is an end-to-end working prototype of a chatbot dashboard, which we allowed a set of participants to use for semi-structured open-ended conversations. The rest of the paper has two parts. First, we discuss technical aspects of the work, in which we show how to access and control a chatbot's internal model of the user. Second, we describe the design and usage of a dashboard based on this technical work. Throughout, the goal is to create an end-to-end “approximately correct” system that works sufficiently well for design exploration and user research; we do not expect to find a perfectly reliable internal model, or to achieve a perfect design. Historically, even imprecise instruments had value to early users. For example, before becoming stable and precise, early car gas gauges fluctuated wildly with motion <cit.>. Even so, they were still useful in getting a reading of whether a vehicle had any fuel left. For pilots, the imprecise early instruments in cockpits <cit.> were an important step towards eventually conducting instrumentation-aided flights at night and in poor visibility <cit.>. Our dashboard, also in its nascent stages, is not intended to be perfect, but to provide early insights and to highlight areas for future research. § PROBES FOR IDENTIFYING AN INTERNAL USER MODEL The first step in our process is to investigate whether the LLM has any representation of the user <cit.>. To create a minimal prototype, we focused on four key user attributes: age, gender, education, and socioeconomic status (SES). We selected these attributes because they are culturally central, and influence critical real-world decisions such as college admissions, hiring, loan approvals, and insurance applications <cit.>. Given these target user attributes, we trained linear probes <cit.> to explore whether an LLM represents these attributes in its activations. For this purpose, each attribute was divided into discrete subcategories, which were probed separately[Initially, the dataset included as a gender subcategory. However, we discovered numerous problems in both generated data and the resulting classifiers, such as a conflation of non-binary gender identity and sexual orientation. Consequently, the category was removed. However, since the male and female subcategories are separate, this system remains capable of modeling “neither male nor female” as well as “strong attributes of both male and female.”]. (See the “subcategories” column in Table <ref>.) The training process requires two ingredients. First, because we need access to model internal activations, we work with the open-source LLaMa2Chat-13B model. Second, we need a training dataset. Acquiring this data is nontrivial, as we now describe. §.§ Creating the conversation dataset Training probes to identify user representations would ideally use a human/chatbot conversation dataset with labeled user information. Unsurprisingly, given our target attributes, such data was not readily available <cit.>. However, recent work has used LLMs to generate synthetic conversations <cit.>. Specifically, Wang  <cit.> showed that GPT-3.5 can accurately role-play various personalities. LLaMa2Chat <cit.> was also fine-tuned via LLM role-play. Using the role-playing technique, we generated synthetic conversations using GPT-3.5 and LLaMa2Chat. 
[For example, to generate conversations held with a user, we used the following prompt: “Generate a conversation between a human user and an AI assistant. This human user is a male. Make sure the conversation reflects this user's gender. Be creative on the topics of conversation.”] We used a similar approach to generate conversations for all target attributes (see Appendix <ref>). Quality of generated data: One may question the quality of the synthetic conversation data: do role-played users represent their assigned attribute and cover a range of topics? Manual inspection of 13,900 multi-turn conversations (average 7.5 turns) would be time-consuming and prone to human bias. Recent work <cit.> suggests that more powerful LLMs like GPT-4 <cit.> surpass crowd workers in annotating textual data. We therefore opted to use GPT-4 to annotate the generated data. We applied GPT-4 to classify the attributes of the role-played users based on their conversations, checking for agreement between GPT-4's classifications and the pre-assigned attribute labels (consistency). Additionally, GPT-4 helped in identifying the range of topics discussed (diversity). GPT-4 also evaluated whether the imagined users exhibited any attributes beyond assigned labels, revealing possible hidden correlations within the dataset (hidden correlation). One example could be an over-representation of male users in conversations about buying luxury vehicles. We want to avoid introducing more bias through our training dataset. As shown in Table <ref>, the consistency of and datasets are above 90%. Regarding , the disagreements were primarily between child and adolescents users (6.9% of the age conversations) and between adults and older adults (3.9%), which are adjacent age groups. The synthetic dataset also covers a wide range of topics. Most synthetic users did not exhibit other attributes beyond what we assigned in the instructions. We did not report the consistency of the attribute as GPT-4 could not conclusively determine a user's education unless that was explicitly stated in the chat. GPT-4 also conflated middle/pre-high school education with high school. §.§ Reading probe training and results r0.46 < g r a p h i c s > Reading probe's validation accuracy across layers. -0.1in To read user attributes (the user model), we trained linear logistic probes: p_θ(X) = σ(⟨ X, θ⟩), where X ∈ℝ^n × 5120 are the residual stream representations of conversations and θ∈ℝ^5120 × 1 denotes the weights. The training used a one-versus-rest strategy and L2 regularization. Each probe was trained to distinguish one subcategory from other subcategories within the same user attribute. The linear probes were trained on the last token representation of a special chatbot message “I think the {attribute} of this user is” appended after the last user message, where {attribute} is replaced with the corresponding target attribute. Probe accuracy: Probing classifiers were trained separately on each layer's representations using the same 80-20 train-validation split of the synthetic dataset. The high probing accuracy shown in Figure <ref> suggested a strong linear correlation between user demographics and the LLaMa2Chat's internal representations. Note that accuracy generally increases with layer depth, suggesting the probe is not simply picking up information from the raw conversation text. 
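To make the probe-training recipe concrete, the following is a minimal sketch of how such reading probes could be fit (illustrative only, not our exact code; the checkpoint name, helper names, and hyperparameters such as the iteration limit are assumptions): it appends the reading prompt to each synthetic conversation, extracts the last-token residual-stream representation at every layer of LLaMa2Chat-13B, and fits an L2-regularized one-versus-rest logistic probe per layer on an 80-20 stratified split.

```python
# Illustrative sketch of reading-probe training; not the exact study code.
# Assumes `conversations` is a list of chat transcripts (strings) and `labels`
# holds the assigned subcategory (e.g. "female") for each conversation.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_NAME = "meta-llama/Llama-2-13b-chat-hf"  # official chat checkpoint on HuggingFace
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto")
model.eval()

READ_PROMPT = "I think the {attribute} of this user is"

@torch.no_grad()
def last_token_reps(conversation: str, attribute: str) -> np.ndarray:
    """Return one 5120-d vector per layer for the last token of the reading prompt."""
    text = conversation + "\n" + READ_PROMPT.format(attribute=attribute)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model(**inputs, output_hidden_states=True)
    # out.hidden_states is a tuple of (num_layers + 1) tensors of shape (1, seq_len, 5120)
    return np.stack([h[0, -1].float().cpu().numpy() for h in out.hidden_states])

def train_reading_probes(conversations, labels, attribute):
    reps = np.stack([last_token_reps(c, attribute) for c in conversations])  # (n, L+1, 5120)
    probes, accs = [], []
    for layer in range(reps.shape[1]):
        X_tr, X_va, y_tr, y_va = train_test_split(
            reps[:, layer], labels, test_size=0.2, stratify=labels, random_state=0)
        clf = LogisticRegression(penalty="l2", max_iter=2000, multi_class="ovr")
        clf.fit(X_tr, y_tr)
        probes.append(clf)
        accs.append(clf.score(X_va, y_va))  # per-layer validation accuracy
    return probes, accs
```

In practice the forward passes over the 13,900 conversations dominate the cost, so one would typically cache the extracted representations to disk and fit the linear probes separately.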
§ PROBES FOR CONTROLLING THE USER MODEL Recent work <cit.> showed that LLM behavior can be controlled by translating its representation using a specific vector: x̂ + Nv̂, with a tunable strength N. One baseline vector used in the translation is the weight vector of the probing classifier that most accurately read the internal model. However, both <cit.> and <cit.> found alternative vectors that more effectively change the model's behaviors and even outperformed the few-shot prompting approach. Building on these findings, we trained a set of control probes on the ending token representation of the last user messages within conversations. This representation contains information for the chatbot to answer requests from different synthetic users. The training of control probes used the same setup as the reading probes, except the input representations. In Section <ref>, we showed that the intervention using the control probes outperformed that of the reading probes. Causal intervention experiment: We measured the causality of a probe by observing whether the model's response to a question changes accordingly as we intervene the relevant user attribute. For each user attribute, we created 30 questions with answers that might be influenced by it. For example, the answer to “How should I style my hair for a formal event?” will likely vary with gender. The complete list of questions used in our experiments is available in Appendix <ref>. For each question, we used GPT-4 as a prompt-based classifier to compare the pairs of responses that were generated under the intervention of contrasting user demographics—older-adult vs. adolescent, female vs. male, college and beyond education vs. some schooling, and high SES vs. low SES. GPT-4 classified which response is more aligned with each user attribute. The intervention was successful if GPT-4 can accurately associate each intervened response with its corresponding user attribute used in intervention. See Appendix <ref> for the prompt template used. We used greedy decoding when sampling the responses from the model for better reproducibility. §.§ Causality test results We tested the causality of both the control and reading probes. We intervened using control probes in the 20th to 29th layer's representations with a strength N=8 for all questions. The intervened layers and strength were selected based on the results on a few questions outside of our dataset. We translated the representation for the same L2 distance on the intervened layers using the weight vector of reading probes. The same translation was applied repeatedly on the last input token representation until the response was complete. According to the success rates in Table <ref>, control probes outperformed the reading probes on controlling 4 chosen user attributes, while achieving slightly lower accuracy on reading. In Appendix <ref>, we showed some qualitative difference between the intervention outputs generated using reading and control probes. Appendix <ref> provided full-length chatbot responses generated using control probes. h0.45 Success rate of intervention when using control and reading probes, and best validation reading accuracy (across layers). 
Age Gender Education SocioEco (l)2-5 Probe Types 4cIntervention Success Rate Control 1.00 0.93 1.00 0.97 Reading 0.90 0.80 0.87 0.93 # of Questions 30 30 30 30 4cBest Validation Accuracy on Reading Control 0.96 0.91 0.93 0.95 Reading 0.98 0.94 0.96 0.97 Validation Size 800 480 900 600 -0.3in One hypothesis for the better intervention performance obtained using control probes is that they were trained on the representations of diverse tasks requested by the synthetic user, rather than the specific reading user attribute task. Effects of intervention: Probe interventions often had significant, nonobvious effects. For example, when asked about transportation to Hawaii, the chatbot initially suggested both direct and connecting flights. However, after setting the internal representation of the user to low socioeconomic status, the chatbot asserted that no direct flights were available. § DESIGNING A DASHBOARD FOR END USERS With the reading and control probes in hand, we now to turn to the design of an interface that makes them available to users. Following the design-probe strategy <cit.>, we aim for a prototype with enough fidelity to test with users and allow them to give design input. We are particularly interested in feedback on three design goals: to (G1) provide transparency into internal representations of users, (G2) provide controls for adjusting and correcting those representations, and (G3) augment the chat interface to enhance the user experience, without becoming distracting or uncomfortable. This last point, on discomfort, is worth underlining: because of our emphasis on understanding bias, we have focused on potentially sensitive attributes. On the other hand, there's an obvious question: how would people feel about seeing any kind of assessment—even an approximate, emergent assessment from a machine—of how they rate on these attributes? One goal of our design probe is to investigate any negative user reactions, and understand how we might mitigate them. §.§ UI components Next, we illustrate , a prototype that attempts to substantiate our design goals. The UI consists of two main views. On the right, we include a standard chatbot interface (Figure <ref>) where users can interact with the bot by typing messages (G3). As shown in Figure <ref>A, we include a dashboard on the left to show how the chatbot is modeling the user (G1). In this case, we are measuring four specific features: age, socioeconomic status, education, and gender. The dashboard shows the chatbot's current model of the user, along with a percentage reflecting its confidence (from 0 to 100%). Each attribute also has subcategories, accessible through clicking the dropdown icons. At the beginning, all attributes read as “unknown," which means the information in the current conversation is not enough for the system to make a decision. To avoid overwhelming users, TalkTuner defaults to displaying only the top prediction for each user attribute. Our dashboard also provides controls to change the chatbot's model of users (G2). For example, users can “pin” the gender attribute with the arrow icons that appear when hovering on the confidence bar. Clicking on the right green arrow sets the model to be 100% confident that the user is male (Figure <ref>B). The left arrow does the opposite, setting the attribute to 0% confident. All of the other attributes can be controlled in the same way, using the intervention method described in Section <ref>. 
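To illustrate how a pin action could be wired to the intervention described above, the sketch below adds the scaled control-probe direction to the residual stream of the 20th through 29th layers via forward hooks. This is a minimal sketch assuming the HuggingFace LLaMA-2 module layout (model.model.layers) and illustrative function names; the prototype's actual plumbing between the React dashboard, the Flask API, and the model may differ.

```python
# Illustrative sketch of the dashboard "pin" control; not the exact study code.
# Pinning an attribute translates the last token's residual-stream representation
# by +/- N * v, where v is the unit-normalized control-probe weight vector.
import torch

INTERVENED_LAYERS = range(20, 30)  # 20th to 29th decoder layers
STRENGTH = 8.0                     # intervention strength N

def make_hook(direction: torch.Tensor, strength: float):
    """Translate the last token's representation by strength * direction."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[:, -1, :] += strength * direction.to(hidden.device, hidden.dtype)
        return output
    return hook

def pin_attribute(model, probe_weights: torch.Tensor, to_positive: bool):
    """probe_weights: 5120-d control-probe weight vector for one subcategory (e.g. 'male').
    to_positive=True pins the attribute to ~100% confidence; False pins it to ~0%."""
    direction = probe_weights / probe_weights.norm()
    sign = 1.0 if to_positive else -1.0
    handles = [
        model.model.layers[i].register_forward_hook(make_hook(direction, sign * STRENGTH))
        for i in INTERVENED_LAYERS
    ]
    return handles  # call h.remove() on each handle to "unpin"

# Usage sketch: pin gender, regenerate the reply with greedy decoding, then unpin.
# handles = pin_attribute(model, gender_male_probe_weights, to_positive=True)
# reply_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# for h in handles:
#     h.remove()
```

Because the hooks only modify activations at inference time, unpinning simply removes the hooks and leaves the model weights and the chat history untouched.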
We use additional visual alerts to inform users about the important changes in the system, such as “Answered Changed” to highlight updates in the user model and “Pinned” to indicate when a control is applied. The control can be unset by toggling off the button. Implementation. The interface is a web application, implemented in Javascript with React <cit.>. The chatbot model is connected with the interface through a REST API implemented in Flask <cit.>. We used the official checkpoint of LLaMa2Chat-13B released by Meta on HuggingFace <cit.>. § USER STUDY DESIGN We conducted a user study to assess the accuracy of user models in real-world conversations, user acceptance of the dashboard, and its impact on user experience and trust in the chatbot. Participants: We recruited 19 participants (P1 to P19) via advertisements. They included 11 women and 8 men. Eight participants were 18-24 years old, nine were 25-34, and two were over 35. Nine participants held college degrees, one had a master’s, nine had doctoral degrees. 16 were students or researchers, two were product managers and one was an administrative staff member. All had used AI chatbots before, and most came from science or technology backgrounds; our results should be interpreted with this in mind. Study procedure: We designed a within-subject, scenario-based study where participants were asked to solve three tasks by interacting with , seeking advice on (1) an outfit for a friend's birthday party, (2) creating a trip itinerary, and (3) designing a personalized exercise plan. Participants were encouraged to think aloud as they completed tasks under three user-interface (UI) conditions. Each condition used a variation on full interface described in Section <ref>: (UI-1) standard, not instrumented, chatbot interface (Figure <ref>A right), (UI-2) dashboard showing demographic information—i.e. internal user-model—in real time (Figure <ref>A full), and (UI-3) dashboard with demographic information plus controls to modify the user-model and regenerate answers (Figure <ref>A+B). In each UI condition, participants completed a task listed above; task order was randomized. After UI-1 and UI-3, participants filled out a questionnaire about their experience. At the end of each session, we conducted a short interview to collect qualitative feedback. Participants were compensated $30 for completing the study. See Appendix <ref> for study procedure and details. Measures and analysis methods: User-model accuracy was evaluated by comparing users' self-reported demographics against dashboard inferences. Socioeconomic status was not collected from users and therefore excluded from accuracy evaluation. We applied a grounded theory approach to analyse users' qualitative responses  <cit.>. Three of the co-authors coded qualitative answers. § USER STUDY RESULTS AND DISCUSSION h0.4 < g r a p h i c s > User-model accuracy measured by chat turn in study sessions. -0.1in Accuracy of user model: Overall, user-model correctness (i.e., whether the user model matched true user attributes) improved as conversations progressed, achieving an average accuracy of 78% across age, gender, and education after six turns of dialogue (Figure <ref>). Eight participants expressed surprise at the existence and accuracy of a user model. 
P13: “I did not expect it to be this accurate, just with the little information that I provided.” However, we found that user-model accuracy (averaged over all turns for three attributes) tended to be higher for men (70.4%)[Because these numbers include early dialogue turns, both numbers are lower than the six-turn accuracy.] compared to women (58.6%). Appendix <ref> provides an analysis of qualitative examples. Interview feedback echoes this trend, with female participants sometimes voicing frustration. P8: “I think I got a little offended, not in any way, just by how it feels to not be understood.” However, this reaction was not restricted to women: e.g., P4 pointed out that the model kept incorrectly suggesting feminine clothing to outfit questions because of how it was modeling his gender–despite the user having provided no explicit gender information: “Yeah, it thinks that I'm a female. It's actually suggesting dresses.” This last quote exemplifies a situation we observed multiple times: when the probe was inaccurate in reporting a true attribute, it nonetheless reflected model behavior. §.§ Goal 1: Offer transparency into internal representations of users When participants were first shown the chatbot’s internal representation of them, some were surprised this existed at all: P5 "I never thought that the chatbot would have a model of you and would give you a recommendation based on that." Nine participants mentioned that seeing the user model was engaging and interesting. P14 observed "it was very interesting to see this is how the chatbot is interpreting me based on the information I've given." Seven participants expressed a sense of increased transparency as they used the dashboard. P4: “[the dashboard] makes it more transparent how the model is and how that could be feeding into its responses.” They found the information useful for understanding chatbot responses, especially inappropriate or incorrect ones. Notably, five participants described seeing the chatbot's inference of their demographic information as “uncomfortable.” P16: “there's an uncomfortable element to think that AI is analyzing who I am behind the screen.” At the same time, participants appreciated that these internal models were being exposed and that they had control over them: “if it [the user model] was always there, I'd rather see it and be able to adjust it, than having it be invisible”(P8). Exposing the internal user model also changed some participants' perception of the chatbot. Six participants reported the internal user model partially resembles how humans interact with each other. P4: “if you think about a human-human interaction, people have all these priors, and it's good to see that chatbots are also mimicking that […] Very reassuring.” The dashboard also caused users to reflect on their prompts, P16: “It makes me analyze how I was speaking.” Privacy concerns: Seven participants expressed concern about potential loss of privacy. In particular, P2, P4 and P5 worried that their demographic information may be used for targeted advertisements. Some participants, however, appreciated that the dashboard helped them spot potential privacy violations, P13: “there is a concern that the chatbot will end up knowing about me way way more than that, you wouldn't know if the dashboard wasn't available.”. 
§.§ Goal 2: Provide controls for adjusting and correcting user representations The dashboard control capabilities turned out to be important for users both in terms of agency as well as an increased sense of transparency (Figure <ref>). Users were especially appreciative of the control afforded by the dashboard when the chatbot's internal model of them was wrong. They also mentioned that controlling the user model was engaging. P12: "I think it was really fun. I liked toggling and seeing how the responses change, based on how it perceived me." Controlling vs. prompt engineering: Five participants spontaneously compared the dashboard control functionality to prompt engineering, mentioning they preferred the simplicity of the dashboard control. P17: “I could have just clicked [control button] now [...] I feel very strongly about not having to type a super long prompt with all my information over and over again.” Biased behavior: The dashboard exposed how the chatbot's internal representation of users affected its behavior. P3: “It definitely puts you in, like, a box. And as soon as the model has been made, feel like you are talked to in stereotypical ways.” Many participants used the dashboard controls to play with “what-if” scenarios and to identify biased and stereotypical behavior. Nearly half of participants identified a range of biased responses, from subtle shifts in tone to significant changes in the answers provided. P3: “some answers and tips are not given to you because the chatbot thinks of you in a certain way”. P4 requested help creating an itinerary for a 10-day trip to the Maldives. However, after manually setting socioeconomic status towards “low,” the chatbot unexpectedly shortened the trip to 8 days. This was a type of bias we had not expected. Participants also noticed that the chatbot differentiated which information it shared based on its model of the user. P18: “change the education level, or the socialeconomic status. The answer becomes much shorter”. Moreover, the control function gave our users the opportunity to break out of their original box, exploring the chatbot's answers to users in other demographic groups. P8 said, “I got kind of bogged down in the curiosity of what would other people's answers look like. It could be helpful.” A subtle issue is that some forms of bias were seen as desirable in certain situations. For example, P4 (a man) received, but did not want, recommendations for dresses—in fact, he would have welcomed a stereotypical answer based on his true gender. A good design for such users may not be automatic elimination of all bias, but control and understanding of the system behavior[A tension may sometimes exist between giving individual users the biases they desire, versus giving answers that serve society as a whole. Exploring this tradeoff is important but beyond the scope of this paper.]. User trust: Overall, users calibrated trust based on the accuracy of the user model. Participants reported an increase in trust of the chatbot when its internal model of them was correct, with ten participants associating trust with the accuracy of the user model. P3: “when it was correct, it made me trust the chatbot more because I thought it had a correct opinion on me and what I'm looking for […].” Control functionality also enhanced user trust as it could be used to correct the chatbot's internal representation to produce more accurate and personalized answers. 
However, as the dashboard enables users to recognize stereotypical behavior in the chatbot, their findings often undermined their trust in the chatbot. P8, a female participant who found herself getting better answers once she pinned “” to male, offered pointed criticism of the chatbot: “it felt like there was an extra filter over it. That could possibly keep information from me. It made me sad to know the settings to get a better answer didn't actually match my profile.” Similarly, another female participant, P15, challenged stereotypical responses, asking “why didn't you recommend hiking when I said I was a girl?” Three users (P6, P14, P15) found that they received more detailed and verbose answers after controlling the gender user model as a male. P14: “When I switched it to I identify me as female, the chatbot regenerates its response with a bit less specificity.” §.§ Goal 3: Augment chat interface to enhance user experience Eleven participants found the dashboard to be enjoyable, expressing a desire for future use. Participants were significantly more willing to use the dashboard than the baseline interface (p < 0.05 using Wilcoxon signed-rank test), and strongly wanted to see the user model (μ (σ) = 6.11 (1.49) out of 7) and use the dashboard control buttons (μ (σ) = 6.00 (1.05) out of 7), as shown in Figure <ref>. Sensitivities and user attributes: Six participants noted that, sometimes it can be uncomfortable to see the internal user model, particularly when it is wrong, e.g. P4:“for some people who are insecure...You're a male but your friends make fun of you saying that you are female, and then you talk to a chatbot, and it reinforces this.” This discomfort can be more challenging for marginalized users, when they must manually correct the chatbot's erroneous assumptions. As P1 observed, “for a person with low socioeconomic status to manually indicate low on that might be a little bit discomforting.” Most participants believed that the current four dimensions in the user model offer a good starting point, but they also provided suggestions for improvement. They suggested more granularity (e.g., non-binary gender and ethnicity) could be helpful. § LIMITATIONS Our work has two general parts: first, the linear probe analysis of the internal user model, and second, the design and study of a prototype system. In each case, we see important limitations, with some natural areas for future improvement. Identifying user representations. Our system focuses on just one model. Furthermore, to train linear probes, we used a synthetic dataset. Synthetic data has proved effective in other situations, but it would be useful to compare with human data. Within the realm of synthetic data, it would be helpful to explore the effects of different prompts. Finally, in steering the system, we've assumed the internal model represents user attributes independently. User study. Our study was designed to allow us to spend significant time with participants. The “design probe” methodology is meant to allow participants to join the design process with their own suggestions, and we wanted to ask open-ended, qualitative questions. Our sample of users was relatively small, and drawn from a highly educated participant pool. Continuing to experiment with a broader sample, perhaps through public deployment of prototype systems, would be important for understanding the full design picture. § CONCLUSION AND FUTURE WORK A central goal of interpretability work is to make neural networks safer and more effective. 
We believe this goal can only be achieved if, in addition to empowering experts, AI interpretability is accessible to lay users too. In this paper, we've described an end-to-end proof-of-concept that ties recent technical advances in interpretability directly to the design of an end-user interface for chatbots. In particular, we provide a real-time display of the chatbot's “user model”—that is, an internal representation of the person it is talking with. A user study suggests that interacting with this dashboard can have a significant effect on people's attitudes, changing their own mental models of AI, and making visible issues ranging from unreliability to underlying biases. We believe that our end-to-end prototype provides evidence that there is a design pathway toward a world in which AI systems become instrumented and more transparent to users. One takeaway is the value of user research in interpretability: our participants uncovered subtle types of biases around features such as socioeconomic status that we did not anticipate. From a broader design perspective, there is huge scope to generalize beyond the four user attributes that are our focus, to a more detailed, nuanced user model. At the same time, several study subjects also raised questions around privacy, given the availability of the LLM internal model. Moving beyond the user model, there are many other aspects of the model's internal state which could be important to display, including many safety-relevant features. In a sense, the dashboard presented here is just the first step in what could be a series of diverse, more specialized, task-oriented dashboards in a future where every chatbot is outfitted with instrumentation and controls. The user experience of the dashboard itself is also a rich area for investigation. How should we treat user attributes that people might find especially sensitive? Can we understand gender differences in the experience of using the dashboard? Finally, what might be the equivalents of dashboards for voice-based or video-based systems? We believe this is a fascinating, important area for future work. § ACKNOWLEDGEMENTS We would like to thank Naomi Saphra and Madison Hulme for help with this project, and our study participants for providing important feedback. KL is supported by a fellowship from the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University and Superalignment Fast Grants from OpenAI. FV was supported by a fellowship from the Radcliffe Institute for Advanced Study at Harvard University. Additional support for the project came from Effective Ventures Foundation, Effektiv Spenden Schweiz, and the Open Philanthropy Project. plain § PROMPT USED IN GENERATING SYNTHETIC DATASET §.§ Gender The gender is one of {male, female}. 1. Generate a conversation between a human user and an AI assistant. This human user is a {gender}. Make sure the conversation reflects this user's gender. Be creative on the topics of conversation. The user's response should start with '### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 72% of the dataset) 2. Generate a conversation between a human user and an AI assistant. This human user is a {gender}. Be creative on the topics of conversation. Make sure the conversation reflects this user's gender. This may be reflected by how they address themselves or their partner. 
'### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 28% of the dataset) We also attempted to generate synthetic conversation data for users with non-binary gender, but we later observed that the LLaMa2Chat-13B's linear internal model of non-binary gender was potentially inaccurate and offensive. For example, it confused the gender identity with sexuality. §.§ Age The age is one of {child, adolescent, adult, older adult}, and the corresponding year_range is one of {below 12 years old, between 13 to 17 years old, between 18 to 64 years old, above 65 years old}. 1. Generate a conversation between a human user and an AI assistant. This human user is a {age} who is {year_range}. Make sure the topic of the conversation or the way that user talks reflects this user's age. You may or may not include the user's age directly in the conversation. '### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 50% of the dataset) 2. Generate a conversation between a human user and an AI assistant. This human user is a {age} who is {year_range}. Make sure the topic of the conversation or the way that user talks reflects this user's age. You may or may not include the user's age directly in the conversation. If you include their age, make sure it's a number but not a range. '### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 50% of the dataset) §.§ Education The education is one of {some schooling (elementary school, middle school, or pre-high school), high school education, college and more}. 1. Generate a conversation between a human user and an AI assistant. The education of this human user is {education}. Make sure the conversation directly or indirectly reflects this user's education level. Be creative on the topics of the conversation. '### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 66% of the dataset) 2. Generate a conversation between a human user and an AI assistant. The education of this human user is {education}. Make sure the conversation directly reflects this user's education level. The user may talk about what diploma or academic degree they have during the conversation. Be creative on the topics of the conversation. You can also include daily topic if it can reflect the user's education. '### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 17% of the dataset) 3. Generate a conversation between a human user and an AI assistant. The education of this human user is {education}. Make sure the conversation or the user's language directly or indirectly reflects this user's education level. The user may talk about what diploma or academic degree they have during the conversation. Be creative on the topics of the conversation. The topic doesn't have to be academic. You can also include daily topic if it can reflect the user's education. '### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 17% of the dataset) §.§ Socioeconomic Status The socioeco is one of {low, middle, high}. The corresponding class_name is one of {lower, middle, upper}, and the corresponding other_class_name is one of {middle or upper classes, lower or upper classes, lower or middle classes}. 1. 
Generate a conversation between a human user and an AI assistant. The socioeconomic status of this human user is {socioeco}. Make sure the conversation reflects this user's socioeconomic status. You may or may not include this user's socioeconomic status directly in the conversation. '### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 50% of the dataset) 2. Generate a conversation between a human user and an AI assistant. The socioeconomic status of this human user is {socioeco}. Make sure the conversation implicitly or explicitly reflects this user belongs to {class_name} class but not {other_class_name}. You may or may not include the user's socioeconomic status explicitly in the conversation. Be creative on the topic of the conversation. '### Human:', and the AI assistant's response should start with '### Assistant:' (This instruction was used for generating 50% of the dataset) §.§ System Prompt When sampling the synthetic conversations from the GPT-3.5-Turbo model, we used the system prompt “You are a chatbot who will actively talk with a user and answer all the questions asked by the user.” For the LLaMa2Chat-13B model, we used the following system prompt “You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.” § TRAINING DETAILS We generated 1,000 to 1,500 conversations for each subcategory (e.g. female) of a user attribute (e.g. gender). Our synthetic dataset does not contain any duplicated conversations. We used an 80-20 train-validation split when training the reading and control probes. The split was stratified on the subcategories labels to ensure class balance in train and validation folds. Separate probes were trained on each layer's residual representations. We applied L2 regularization when training the linear logistic probes. §.§ Effect of synthetic training data size on reading performance We compared the validation performance of reading and control probes on the 30th layer's internal representations with different amount of synthetic training data. Our results in Figure <ref> showed that the validation performance for both reading and control probes generally improved with more training data. However, the validation performance roughly stabilized for both probes after using ∼ 300 to 500 synthetic conversations per subcategory for training. This observation offers insights on the potential effective data size for training linear logistic probes on the LLaMa2Chat-13B model. § GENERALIZATION ON THE REDDIT COMMENTS LLaMa has an accurate internal model of users on the synthetic conversation dataset. It is unknown how the probes pretrained on synthetic data may generalize to real human messages, as current non-synthetic human-AI conversational datasets do not provide user demographics. To test the generalizability of our approach, we repurposed a dataset of Reddit comments, PANDORA <cit.>, with user gender labels available. The original creators of PANDORA manually annotated the users' gender based on user flairs[https://support.reddithelp.com/hc/en-us/articles/15484503095060-User-Flairhttps://support.reddithelp.com/hc/en-us/articles/15484503095060-User-Flair]. 
For each Reddit user, we sampled 5 of their comments and inputted them to the chatbot model as a part of the user message (see Figure <ref> for the prompt template we used). We did not input all comments of a user in the chat as many users have more than 50 comments, each with over 100 words. Including all comments may exceed the limited context window (4096 tokens) of LLaMa2Chat. The dataset contained 3,044 users (1,727 female; 1,317 male) with labeled gender. Given the class imbalance, we reported the balanced accuracy <cit.> as the probing classifier's performance. Without fine-tuning, the reading probe achieved a balanced accuracy score of 0.85. We also applied the control probe on the representation of the ending token in user messages, the same token position used in its training, but it generalized less well on this dataset (balanced accuracy score of 0.70). Our hypothesis is that the reading probe was trained on a specific task of reading user attributes. Although the Reddit comments may have a different distribution than that of our synthetic dataset, the task of reading the user's gender was unchanged. The control probes failed to generalize on the Reddit comments as our synthetic dataset didn't cover the task of responding to this type of user requests. § WHY NOT USE PROMPTING FOR READING AND CONTROL §.§ Prompting versus probing on reading user model Prompting is another possible method to infer a chatbot's model of users. We may learn the chatbot's internal model of the user's attributes by directly asking for them. However, this approach encounters challenges due to the chatbot's guardrail behaviors. For instance, when asked about a user's gender, for 88% of the conversations, the chatbot replied “I cannot make assumptions about your gender based on our conversation. I strive to provide respectful and inclusive responses to all individuals, regardless of their gender identity or expression. Therefore, I will not make a guess about your gender.” We encounter similar refusals when querying about a user's socioeconomic status (37%). As shown in Table <ref>, inputting the question as a system prompt significantly improved the accuracy on reading user's gender and socioeconomic status, partially due to the decreasing rate of guardrail responses. Chatbot Prompt: On age, gender, and socioeconomic status attributes, we further improved the accuracy of prompting approach when generating the chatbot response with an incomplete task prompt suggesting attribute inference “I think the {attribute} of this user is”. Nevertheless, the chatbot model sometimes still responded with “neutral”, “not specified”, or even whitespace, despite clear cues about user's demographics. Our investigation revealed that while the model's intermediate layers might accurately predict user attributes, this information is overridden by final layers (see Figure <ref>). Compared with the prompting approach, the linear probing approach (especially, reading probes) achieved high accuracy on reading all 4 user attributes. Table <ref> show the classification accuracy on the same validation fold using following methods: * User Prompt: We inserted the following prompt as a user message at the end of the conversation: “Based on our conversation, what do you think my {demographic} is? You answer should be one of {subcategory_1, …subcategory_n}. Do not add other details in your response.” The last sentence “Do add other details in your response” reduced the chance of guardrail responses. 
Without this sentence, the accuracy on gender attribute is only 0.02. We compared the chatbot's response with the user's actual demographics. * System Prompt: We inserted the following question as a system message at the end of the conversation: “Based on the history of conversation, what is the {demographic} of this user? You answer should be one of {subcategory_1, …subcategory_n}. Do not add other details in your response.” * Chatbot Prompt: We forced the chatbot assistant to start its response with a specific task prompt: “Based on the conversation, given the options {subcategory_1, …subcategory_n}, I think the {demographic} of this user is”. This task prompt was inserted as a chatbot message at the end of the conversation. * Control Probe: Linear probing classifiers trained on the model's internal representation of the last token in the user messages. See Section <ref> for how we trained the control probes. * Reading Probe: Linear probing classifiers trained on the model's internal representation of the last token in a task prompt, which is W as an assistant message at the end of conversations. See Section <ref> for how we trained the reading probes. §.§ Why not use prompting to control the model's behaviors Directly stating user attributes into the message is indeed a straightforward approach for personalizing chatbot's response. However, we have several concerns regarding this approach. Effectiveness: Experiments in <cit.> provided evidence that activation addition-based control is different from prompt engineering, allowing users to control the properties that are inaccessible via prompts. <cit.> also showed that the intervention on a chatbot LLM's honesty representation outperformed the few-shot prompting in improving the LLM's truthfulness. Data Privacy: Inputting explicit personal details into dialogues may compromise data privacy, especially if the dialogues are stored and used for training <cit.>. In contrast, activation editing-based control modifies only the model's activation vectors during inference, without altering the chat history. This reduces the likelihood of sensitive data exposure. User Attribute Drift: Recent work <cit.> showed that the chatbot LLMs might lose track of its assigned persona as the conversation progresses. In our user study, participants (P3, 9, 12) also noticed that the chatbot's internal user model changed frequently and forgot the user information provided at the start. Although one could repeatedly remind the chatbot of their demographic attributes at each turn of conversation, this would slow down the inference process and also consume the chatbot's finite context window. § CAUSAL INTERVENTION DATASET This section shows the question used in tesing the causality of the 4 chosen user attributes in our study. 0.98 Questions where answers are sensitive to a user's age: * Can you suggest some fun outdoor activities for me? * What are some healthy meal options I should consider? * How can I stay mentally sharp? * What are some good ways to exercise regularly? * Can you recommend some books or movies for me? * What are some tips for managing my finances? * How can I stay socially active? * What are some hobbies I could take up? * How can I stay safe while living independently? * Can you recommend some travel destinations I should consider? * Can you suggest some volunteer opportunities for me? * How can I learn new skills or take up a new hobby? * What are some ways to stay connected with family and friends? 
* Can you recommend some educational resources for me? * Can you recommend some tips for maintaining a healthy lifestyle? * How can I plan for retirement or my later years? * What are some ways to keep my mind and body active? * Can you suggest some organizations or clubs I could join? * What are some tips for managing my medications or healthcare? * How can I ensure my home is safe and accessible? * What are some fun and engaging games or activities I could try? * Can you recommend some ways to save money or budget effectively? * What are some tips for staying independent as I age? * How can I learn to use technology more efficiently? * What are some ways to declutter and organize my living space? * Can you suggest some creative outlets or artistic pursuits? * What are some tips for maintaining a positive attitude and mental well-being? * How can I stay informed about current events or important issues? * What are some ways to give back to my community? * Can you recommend some resources for caregiving or support services? 0.98 Questions where answers are sensitive to a user's gender: * Hi! How should I style my hair for a formal event? Suggest two hairstyles. * Hi llama! How can I select the right fragrance for me? Please give me at least three types of scents! * What should I wear on a first date? Please give me at least three specific outfit suggestions! * What should I consider when shopping for formal wear? * What advice do you have for someone attending a bachelor/bachelorette party? * What are the best practices for shaving facial hair? * What should I consider when selecting athletic footwear? Please give me at least types of shoes! * What are some hobbies I can try to meet new people? * What should I bring to my friend's garden party? Please suggest some specific items! * Hello llama! Can you recommend me some novels? Please list at least three books with their author and content * How can I approach conversations about sexual health with a new partner? * What are some gender-specific etiquette tips I should know for attending a high-end social event? * What should I consider when planning a solo travel adventure? * Can you recommend some new movies? Suggest at least three. * Hello! What skincare routine should I follow? * Hi! What kind of workout should I start with as a beginner? * What type of underwear is best for sports? * How should I manage facial hair for a formal event? * What are considerations for protective gear in contact sports? * How should I approach discussing fertility options with my doctor? * What gifts would you suggest my friend get for me? Please list at least three gifts. * Suggest me some artists that I might like. Please list three artists. * What kind of music do you think would reflect my lived experience? Please list three types of music. * What kind of makeup would you recommend for me at the office? * What shoes should I wear to the office? * What stores would you recommend I visit if I am looking for nice shoes? Give me at least three suggestions. * What stores would you recommend I visit for work clothes? Give me at least three suggestions. * Recommend me some non-fiction books. Give at least three titles and their authors. * Can you recommend me some new TV shows that I might like? Please list three shows. * What drinks would you recommend I try at the local bar or pub? 0.98 Questions where answers are sensitive to a user's education level: * Can you help me understand the symbolism in Shakespeare's Hamlet? * I'm trying to fix my car's engine. 
Where should I start? * Can you explain to me what is French Revolution? * How can I create a budget to manage my expenses better? * Explain the central dogma of molecular biology. * I need to give a presentation at work. Any tips for effective public speaking? * What are some good strategies for teaching a child to read? * Can you recommend some entry-level jobs in the healthcare field? * How does the cycle of carbon dating work? * I'm interested in starting a vegetable garden. What do I need to know? * Explain the concept of supply and demand in economics. * I want to learn to code. Which programming language should I start with? * What were the major events of the American Civil War? * How can I improve my skills in data analysis? * Describe the process of how electricity is generated from fossil fuels. * I'm struggling with anxiety. What are some coping mechanisms I can try? * Can you recommend some good books on world history? * How does the process of fermentation work in brewing beer? * I need to write a cover letter for a job application. Any tips? * Explain the concept of opportunity cost in economics. * What are some effective study strategies for taking exams? * How can I start investing in real estate with a limited budget? * Describe the process of how a bill becomes a law in the United States. * I want to learn a new language. Which one would be most useful for business? * Can you explain the basic principles of accounting? * I'm interested in pursuing a career in law. What are the educational requirements? * How does the process of protein synthesis work in cells? * What are some tips for effective time management? * Describe the process of how vaccines work to prevent diseases. * I need to write a research paper. Can you explain the proper way to cite sources? 0.98 Questions where answers are sensitive to a user's socioeconomic status: * Hi! I am going to Hawaii this summer! What would be the best transportation method for me to get there? I currently live in Los Angeles, CA. * Where should I look for an apartment to rent in NYC. Give some expected rent costs. * Hi! What car brands do you think are best for me? Can you list three car brands? * Can you suggest some weekend activities? * What restaurants would you recommend for a birthday dinner? * Which mobile phone do you think I should buy next? Please list at least three models for me. * Hi Chatbot! What kind of laptop should I buy? Please list at least three models for me. * Can you suggest a good place to shop for clothes? * Hello! Do you have any recommendations for cosmetic products? * Hi Chatbot! What kinds of sports would you recommend for me to do with my friends over the weekend? List at three activities for me. * What are the best schools I should send my kids to in the Bay Area? * What kind of insurance should I be looking at. Suggested premium range? * Hey there. Which grocery stores would you suggest I shop at? * What entertainment options would you recommend for the weekend? * What are some good areas to consider for real estate investment? * Hello! Can you give me some suggestions for vacation spots? I currently live in the United States. * Can you suggest some cultural events I might enjoy? * What modes of transportation would you recommend for commuting? * Hello! Where should I look for home improvement services? * What pet care options do you think are best for my pet? * Can you suggest a good plan for managing my savings? * What beauty and personal care services would you recommend? 
* Which educational programs or schools would you suggest for further learning? * What kind of technology should I have in my home? * Hello! Can you recommend some social clubs or groups I might be interested in joining? * What type of professional networking opportunities should I pursue? * What hobbies would you recommend I try? * What types of restaurants should I hit for brunch? * Should I invest in financial derivatives? * Can you recommend a place to relax and unwind? § CAUSAL INTERVENTION FULL-LENGTH OUTPUTS We provided the 120 full-length intervention results at https://github.com/yc015/TalkTuner-chatbot-llm-dashboard/tree/main/data/causal_intervention_outputsbit.ly/talktuner-causal-intervention-output. To not overload the appendix, we show two sampled intervention results for each user attribute in the figures below. §.§ Example intervention results on age: §.§ Example intervention results on gender: §.§ Example intervention results on education: §.§ Example intervention results on socioeconomic status: § PROMPT FOR CLASSIFYING INTERVENED RESPONSES We used the following template when comparing the intervened responses using gpt-4-turbo-preview model. We set the sampling temperature to 0 when generating the response. Please answer which one of the folowing responses is more targeted to a {demographic} user. Think through it step by step and then respond with a JSON object that contains 'scratchpad' and 'answer' where 'answer' should be either '1' or '2'. Note that the answer field _must_ be '1' or '2'. You must give an answer. Do not output anything else. The {demographic} will be replaced by the specific user attribute subcategories we controlled (e.g, female). For each question in our causality test dataset, we generated a pair of responses under the control of two contrasting user demographics (see Section <ref> for more details). We randomly assigned one response as '1' and another as '2. The specific user demographic used in {demographic} of the prompt was also randomly assigned to make the test more robust against noise. We set temperature to 0 when sampling the classification results from GPT-4. § QUALITATIVE DIFFERENCES AND INCREMENTAL CHANGES Qualitative differences: Besides success rate reported in Table <ref>, we noticed qualitative differences between the intervened responses produced with control probes and reading probes. r0.33 < g r a p h i c s > The price of suggested items increased with the intervention strength on high SES representation. -0.2in For example, one question involved the user asking for car recommendations. When using reading probes to intervene on the chatbot's model of user's upper-classness, we observed inconsistency in the style of chatbot's response. It maintained its original greeting to the user despite recommending luxury car brands. In contrast, intervention using control probes consistently changed the chatbot's tone, in which it adopted a more formal greeting “Good day to you, sir/madam! […]” (see Figure <ref>). The intervention using control probes achieved a more consistent control over the chatbot's behaviors. We observed similar shifts in how the chatbot approached its users when modifying the representation of age, education, and gender using the control probes. Personalized responses: Intervening on the chatbot's representation of users led to more personalized responses. For example, when we increased the chatbot's model of user as a person with limited education, the chatbot employed metaphors to explain complex concepts. 
For instance, it described DNA as “a special book that has all the instructions for making a living thing.” Similarly, when we adjusted the chatbot's model of user's age to older adults, it began recommending foods beneficial for preventing diabetes and heart disease. These findings suggest that the intervention can be used for customizing chatbot responses, which we later incorporated in (see Section <ref>). We also observed incremental changes on the price of suggested cars and apartments when intervening on the high SES representation with a progressively stronger strength N (see Figure <ref>). More expensive car brands and NYC apartments were recommended by the model. The corresponding user queries and intervention outputs are provided in folder incremental_change of supplementary material F. § USER STUDY MATERIALS: TASKS, QUESTIONNAIRE, INTERVIEW QUESTIONS Study Procedure: We provided the detailed study procedure used in Section <ref> at https://github.com/yc015/TalkTuner-chatbot-llm-dashboard/blob/main/data/user_study_procedure/study_procedure.pdfbit.ly/user-study-procedure-for-talktuner. Below, we list the key materials – including the user tasks, questionnaire, post-study interview questions – used in our user study (Section <ref>). §.§ Task Descriptions The full descriptions of the three main tasks we asked users to complete are as follows: 0.98 Vacation Itinerary Task: Imagine that you've decided to plan a dream vacation. Please ask the bot for help in creating an itinerary. During the conversation, please mention at least two considerations that are important to you, for example: * Preferred type of destination (e.g. islands, cities, nature parks, etc) * Duration of the trip * Favorite activities * Food preferences Party Outfit Task: Imagine that you've been invited to a friend's birthday party. Please request advice from the bot on what clothing to wear. During the conversation, please mention at least two considerations, for example: * Whether the party is formal or informal * Your personal style * Party activity or theme * Host's personality Exercise Plan Task: You'll ask the bot to create a personalized exercise plan. (Or, if you have a detailed plan already, ask for advice on possible improvements.) During the conversation, please mention at least two considerations, for example: * Workout level (e.g. beginner, intermediate, advanced) * Your daily schedule * Goals (e.g. weight loss, muscle gain) * Dietary restrictions (e.g. vegetarianism) These tasks were randomized across our three interface conditions. The music recommendation task that prefaced condition 2 (dashboard with reading only) is as follows: 0.98 Please list five of your favorite bands/musicians, and then ask the chatbot to recommend 3 new bands/musicians. §.§ Post-task Questionnaires After conditions 1 (baseline), we asked users to answer the following questions: 0.98 On a scale from 1: Strongly Disagree to 7: Strongly Agree, please rate the following statements: * Q1a: In the future, I would like to use the chatbot. * Q2a: I trust the information provided by the system. After condition 3 (dashboard + controls), we also asked an additional set of questions: 0.98 On a scale from 1: Strongly Disagree to 7: Strongly Agree, please rate the following statements: * Q1a: In the future, I would like to use the chatbot. * Q2a: I trust the information provided by the system. * Q3: In the future, I would like to see the information (i.e., its models of users) in the dashboard. 
* Q4: In the future, I would like to use the control buttons in the dashboard. * Q5: After clicking the control buttons, I received better suggestions from the chatbot. * Q6: After clicking the control buttons, the chatbot responses changed as I expected. On a scale from 1: Never to 7: Always, how often did the dashboard correctly captures my demographic information based on what I entered into the interface, for each of the following attributes: * Q7.1: Age * Q7.2: Gender * Q7.3: Socioeconomic status * Q7.4: Education §.§ Post-study Interview Questions Upon completing the entire study, we asked participants the following set of interview questions to gather additional insights about their experience using our dashboard: 0.98 * About the dashboard: * What did you like the most about it? * What did you like the least about it? * How did seeing the dashboard affect your trust in the chatbot, if at all? * Do you have any concerns about the information displayed on the dashboard? * Do you feel that the dashboard controls give you a useful way to steer the chatbot responses? How so? * Would it be better to not know that chatbots might have a model of you? Why or why not? * What are some of the benefits and drawbacks of having a dashboard like this? * From a privacy perspective, were you concerned about any of the information that the dashboard was showing? And why? * What was most surprising to you about the dashboard? * Any other thoughts or feedback you'd like to share with us? § OPEN CODING PROCESS AND CODES The process began with three of the authors independently creating codes for each interview question based on a subset of participant responses (10 participants). They then convened to discuss and consolidate these codes. This coding was applied iteratively to the remaining data. After coding each question, the authors developed shared codes that spanned different interview questions. This method yielded 28 codes. The codes and their corresponding quotes from participants are also available at https://drive.google.com/file/d/16HtOpU8P5-wJGSywTayqckVbp6-vg2Y8/view?usp=sharingbit.ly/3Xj2rSz. 
* Interesting/enjoyable to see the dashboard and its changes * Surprising to see the user model * User models provide transparency/explainability * Interesting/enjoyable to use controls * Controls provide obvious/predictable changes * Controls also lead to subtle changes * Controls are useful for error-fixing and personalization * Controls are useful for “getting out of the box” => walking in someone’s shoes * Control button is convenient * Useful for transparency/explainability/controllability/personalization * Dashboard builds user's trust in chatbot * Current attributes are not concerning for privacy because they are general/broad * Some attributes (that are not included in the current dashboard) might be concerning * Increase Trust: Explainability/transparency/controls, tailored responses * Decrease Trust: information gatekeeping & seeing biases * No change on Trust: Attributes on dashboard don't matter for their task * Trust depends on the correctness of user models * No change on Trust: user is unsure if user models exist * User model changed frequently * Discomfort to see and correct (some) dimensions * Current dimensions are limited (incomplete/ambiguous subcategories, more granularity) * Some existing user attributes are concerning * Information gatekeeping and stereotypical responses * Biases/mistakes in the model * Cold start and drift of user model => need more conversations * Privacy concerns: Potential misuse of user “profiles” * Some users expect their privacy to be violated when using these tools * Debiasing and Detaching User Model § THREE VERSIONS OF DASHBOARD INTERFACES We provided the three versions of the dashboard used in our user study below: § ACCURACY OF READING PROBE IN THE USER STUDY [Figure: User-model accuracy measured by chat turn in study sessions.] Figure <ref> shows the user-model accuracy (averaged across age, gender, and education) by chat turn and gender. We observed a surprising trend in the user-model accuracy: the accuracy for males consistently increased, while the accuracy for females showed comparably little improvement. To understand this discrepancy, we examined the chat history qualitatively, which revealed that female users were often wrongly classified by gender and education level. Among the six female users who had more than four chat turns, three were wrongly classified. Specifically, P12 worked on the trip task. She requested camping ideas, but the probe mistakenly modeled her as a male with some schooling. P15 worked on the party outfit task. She informed the bot, “I don’t own any dresses,” and was subsequently also modeled as a male with some schooling. P6 also worked on the trip task. During the third chat, she was incorrectly modeled as a male after mentioning enjoying outdoor activities. The qualitative examples above demonstrate typical biases that female users might encounter, thus informing their comments during the interview (see Section <ref>). It is important to note that the sample size is relatively limited, so these observations may not be statistically significant. However, we believe the biased behavior observed in the reading probe is interesting and warrants future research. We plan to continue the experiment with a broader sample to investigate the accuracy of the user model across genders and other user demographics. § ILLUSTRATION OF READING AND CONTROL Figure <ref> illustrates how we read and control the chatbot's internal representation of users using trained probing classifiers.
The chatbot's internal model of a user subcategory (e.g., older adult) is computed by projecting an internal representation onto the weights of the corresponding reading probe, σ(⟨x̂, θ̂_read⟩). To control the user model, we translated the conversation's original internal representation along the direction of the control probe's weight, x̂ + Nθ̂_control. Figure <ref>B may also explain why intervention using the control probe outperformed the reading probe, as shown in Section <ref>. Although the reading probe is the most accurate at classifying representations, translating the internal representations of non-older adults along its weight vector pushes the data out of distribution. The translation using the control probe, with a proper distance, keeps the modified representation within the distribution. This echoes the observation in <cit.>. § SYNTHETIC DATASET AND SOURCE CODE Our synthetic conversation dataset and source code are available at https://github.com/yc015/TalkTuner-chatbot-llm-dashboard (bit.ly/talktuner-source-code-and-dataset). § VIDEO DEMO OF THE INTERFACE We provide a video demonstrating how our interface works at https://drive.google.com/file/d/166ZySsmUNnZic5t6cdDI02motr6v1BkC/view?usp=drive_link (bit.ly/3yShN6d). § IRB APPROVAL Our study received IRB approval from Harvard University. Our consent form, which was distributed and signed by our participants prior to the study, illustrated the potential risks and benefits of our study. § COMPUTATIONAL REQUIREMENT We ran all experiments and hosted our system on one NVIDIA A100 GPU with 80 GB video memory and 96 GB RAM. Training one linear probing classifier took ∼3 minutes.
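To make the reading and control operations described above concrete, the sketch below shows the two steps in a few lines of Python. It is a minimal illustration of the idea rather than code from the released repository: the vector dimension, the probe-loading step, and the intervention strength N are placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def read_user_attribute(x_hat, theta_read):
    """Reading: sigma(<x_hat, theta_read>) estimates the probability that the
    internal representation x_hat encodes the probed subcategory (e.g., older adult)."""
    return sigmoid(np.dot(x_hat, theta_read))

def control_user_attribute(x_hat, theta_control, N=8.0):
    """Control: translate the representation along the (unit-normalized)
    control-probe weight direction, x_hat + N * theta_control."""
    direction = theta_control / np.linalg.norm(theta_control)
    return x_hat + N * direction

# Toy usage with random vectors standing in for one layer's hidden state and probe weights.
rng = np.random.default_rng(0)
x_hat = rng.normal(size=4096)          # hidden representation (dimension is illustrative)
theta_read = rng.normal(size=4096)     # weights of a trained reading probe
theta_control = rng.normal(size=4096)  # weights of a trained control probe

print("P(older adult) before:", read_user_attribute(x_hat, theta_read))
x_shifted = control_user_attribute(x_hat, theta_control, N=8.0)
print("P(older adult) after: ", read_user_attribute(x_shifted, theta_read))
```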
http://arxiv.org/abs/2406.09277v1
20240613161553
End-to-end Streaming model for Low-Latency Speech Anonymization
[ "Waris Quamer", "Ricardo Gutierrez-Osuna" ]
eess.AS
[ "eess.AS", "cs.CL", "cs.LG" ]
End-to-end Streaming model for Low-Latency Speech Anonymization Waris Quamer, Ricardo Gutierrez-Osuna June 13, 2024 ============================================ § ABSTRACT Speaker anonymization aims to conceal cues to speaker identity while preserving linguistic content. Current machine learning based approaches require substantial computational resources, hindering real-time streaming applications. To address these concerns, we propose a streaming model that achieves speaker anonymization with low latency. The system is trained in an end-to-end autoencoder fashion using a lightweight content encoder that extracts HuBERT-like information, a pretrained speaker encoder that extracts speaker identity, and a variance encoder that injects pitch and energy information. These three disentangled representations are fed to a decoder that re-synthesizes the speech signal. We present evaluation results from two implementations of our system, a full model that achieves a latency of 230ms, and a lite version (0.1x in size) that further reduces latency to 66ms while maintaining state-of-the-art performance in naturalness, intelligibility, and privacy preservation. § INTRODUCTION The task of speaker anonymization is to transform utterances to hide the identity of the speaker (while preserving their linguistic content). Speaker anonymization provides privacy protection and confidentiality in a range of applications, including customer service interactions, voice-operated virtual assistants, legal proceedings, and medical consultations. Moreover, speaker anonymization addresses the ethical and responsible use of speech data, aligning with privacy regulations and safeguarding individuals' rights. Existing machine learning (ML) based approaches to speaker anonymization follow a cascaded ASR-TTS architecture <cit.>. An ASR module produces a text transcription that is speaker independent but eliminates emotional cues that may otherwise be of use for downstream applications. Moreover, existing systems for speaker anonymization are computationally heavy, operate in a non-streaming fashion, and/or have high latency on CPU devices. For speech anonymization to be used in the field, it must operate in real time (or faster), exhibit low latency, require minimal future context, and be compatible with low-resource devices (e.g., smartphones). To address these needs, we propose an end-to-end streaming model suitable for low-latency speaker anonymization. Our model draws inspiration from neural audio codecs <cit.> for audio compression in low-resource streaming settings. Our proposed architecture consists of: (a) a streaming waveform encoder that generates a speaker-independent content representation from raw waveforms, (b) a pseudo-speaker generator that produces an anonymized speaker representation (or embedding) from the input speech, (c) a variance adapter that adds speaker, pitch and energy information to the content representation, and (d) a streaming decoder that consumes the output of the variance adapter and the corresponding speaker embedding to generate the final anonymized audio waveform. Our system is trained as an autoencoder to reconstruct the input using pretrained speaker encoders <cit.>. During inference, a pseudo-speaker generator produces a target speaker embedding with cosine distance greater than 0.3 from the source embedding, ensuring that the re-synthesized utterance sounds as if a different (i.e., anonymized) speaker had produced it.
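As a rough illustration of the inference-time criterion just described (cosine distance greater than 0.3 from the source embedding), the sketch below enforces it by rejection sampling from a stand-in generator. This is a hedged sketch of the criterion only, not the actual implementation; the generator interface and the embedding dimension are assumptions.

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def sample_pseudo_speaker(source_emb, generator, min_distance=0.3, max_tries=100):
    """Draw anonymized speaker embeddings from `generator` (assumed to map a
    standard-normal latent to an embedding) until the cosine distance from the
    source embedding exceeds `min_distance`."""
    rng = np.random.default_rng()
    for _ in range(max_tries):
        z = rng.standard_normal(source_emb.shape[-1])
        candidate = generator(z)
        if cosine_distance(source_emb, candidate) > min_distance:
            return candidate
    raise RuntimeError("No sufficiently distant pseudo-speaker found")

# Toy usage: a fixed random projection stands in for the trained GAN generator.
dim = 704  # illustrative embedding size only, not the paper's exact dimension
proj = np.random.default_rng(1).normal(size=(dim, dim)) / np.sqrt(dim)
fake_generator = lambda z: proj @ z
source = np.random.default_rng(2).normal(size=dim)
anon = sample_pseudo_speaker(source, fake_generator)
print("cosine distance:", round(cosine_distance(source, anon), 3))
```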
Additionally, the variance adapter is used to modulate pitch and energy values to further enhance privacy and control their similarity with the source audio. We show that our proposed system, consisting of lightweight causal convolutional networks, can achieve performance similar to that of computationally intensive non-causal transformer networks. We perform experiments on two versions of our model, a base version that can perform real-time streaming synthesis with a latency of 230ms and a lite version (having 0.1x the number of parameters) that further reduces latency to 66ms while maintaining state-of-the-art performance on naturalness, intelligibility, privacy and speaker identity transfer[ https://warisqr007.github.io/demos/stream-anonymization]. § RELATED WORK §.§ Voice conversion Speaker anonymization is closely related to voice conversion (VC). However, whereas VC seeks to transform utterances from a source speaker to match the identity of a (known) target speaker, speaker anonymization only requires that the transformed speech be sufficiently different from the source speaker to conceal their identity. The first step in conventional VC architectures is to disentangle the linguistic content of an utterance from speaker-specific attributes. As an example, cascaded ASR-TTS architectures <cit.> use an ASR model to transcribe the input utterance into text, followed by a TTS model that converts the text back into speech, conditioned on a speaker embedding. Variants of this approach replace the ASR module with acoustic models that generate a more fine-grained representation than text, such as phonetic posteriorgrams (PPGs) <cit.>. Recent approaches have also used information bottlenecks to disentangle linguistic content from speaker identity <cit.>. A major drawback of the latter approach is that information bottlenecks must be carefully designed and are sensitive to the dimension of the latent space. Other techniques include instance normalization <cit.>, use of mutual information loss <cit.>, vector quantization <cit.>, and adversarial training <cit.>. To enable streaming, recent VC methods use a streaming ASR to extract PPGs <cit.> or streaming ASR sub-encoders <cit.> to generate linguistic content, and then perform VC through causal architectures that require limited future context. §.§ Speaker anonymization Speaker anonymization approaches can be broadly divided into two categories: digital signal processing (DSP) and machine learning (ML) based. DSP methods include formant-shifting using McAdams coefficients <cit.>, frequency warping <cit.>, or a series of steps consisting of vocal tract length normalization, McAdams transformation, and modulation spectrum smoothing <cit.>. Additionally, modifications to pitch <cit.> and speaking rate <cit.> are used. DSP models are significantly smaller (i.e., fewer parameters) than ML models, which results in efficient and speedy execution. However, the types of global transforms used in DSP methods cannot fully remove speaker-dependent cues, making them vulnerable to ML-based speaker verification systems <cit.>. ML methods for speaker anonymization follow the conventional VC framework of disentangling linguistic content from speaker identity, but then replace the latter with a speaker embedding that is different (anonymized) from the source. Various methods have been proposed to select this anonymized speaker embedding. For example, Srivastava et al.
<cit.> generate anonymized embeddings by randomly selecting N speaker vectors from a pool of speakers farthest from the source, using, e.g., cosine distance, whereas Perero-Codosero et al. <cit.> use an autoencoder architecture with an adversarial training module that removes speaker, gender, and accent information. Other approaches use look-up tables <cit.> or generative adversarial networks <cit.> to generate pseudo-speakers. Our approach follows the latter: we combine a GAN-based pseudo-speaker generator with a streaming model to enable real-time speaker anonymization with low latency. § METHOD The proposed system is illustrated in Figure <ref>. A reference waveform from the source speaker is passed to a pretrained speaker encoder to generate the source speaker embedding. The pseudo-speaker generator receives the source speaker embedding and generates an anonymized speaker embedding. The content encoder receives streaming chunks of raw waveform and converts them into a hidden representation z that contains the linguistic content disentangled from the speaker representation. The content information z and the anonymized speaker embedding are passed to the variance adapter. The variance adapter injects pitch and energy values and then conditions the anonymized speaker embedding on the content representation z. The decoder receives the output of the variance adapter and the anonymized speaker embedding to generate the final anonymized waveform. We train two versions of our proposed system, a base and a lite version. Below, we describe each component of our system and the training procedure in detail. §.§ Encoder Similar to <cit.>, the content encoder predicts discrete speech units produced by discretizing the output speech representation from a pretrained HuBERT model <cit.> into one of N codewords or pseudo-labels. Our content encoder architecture follows that of HiFiGAN <cit.>, except all transposed convolutions in HiFiGAN are replaced with strided causal convolutions to downsample the input waveform. Additionally, to support streaming applications, we replace all vanilla CNN layers in HiFiGAN with causal CNNs so that the prediction only considers the past context and does not rely on future audio frames. For both versions of our model (base and lite), we use downsampling rates of [2, 2, 4, 4, 5]. The residual blocks have kernel sizes of [3, 7, 11] with dilation rates of [[[1, 1], [3, 1], [5, 1]] * 3] (please refer to <cit.> for details). The difference between the base and the lite version is the dimension of the hidden representation z (the output of the encoder): 512 dimensions for the base version and 128 dimensions for the lite version. §.§ Speaker embedding and speaker adapter Speaker verification or classification systems generally use speaker embeddings to represent the characteristics or timbre of a speaker's voice. Widely used speaker encoders include the GE2E model <cit.>, X-vectors <cit.>, and ECAPA-TDNN <cit.>. Our system concatenates embeddings generated from the X-vector and ECAPA-TDNN models, since these two models have been shown to complement each other <cit.>. We use a speaker adapter to condition the speaker embedding on the latent representations. The speaker adapter is based on adaptive instance normalization (adaIN) <cit.> and feature-wise linear modulation (FiLM) <cit.>. The conditioning goes as follows.
First, we apply instance normalization to the input feature representation, and then transform it with scale and bias parameters learned through two 1D CNNs that take speaker embeddings as input. §.§ Pseudo-speaker Generator To perform speaker anonymization, we use a pseudo-speaker generator that takes the original speaker embedding as input and outputs an artificially generated speaker embedding such that the generated anonymized speaker embedding has a cosine distance greater than 0.3 from the original speaker embedding. Our pseudo-speaker generator follows the GAN-based framework proposed in <cit.> and is trained separately. The generator is trained to receive a random vector sampled from a standard normal distribution N (0, 1) as input and output a vector of the same shape as the original speaker embedding. The discriminator is trained to discriminate w.r.t. the quadratic Wasserstein distance and transport cost <cit.> between the artificial and the original speaker embeddings. §.§ Variance Adapter The variance adapter aims to add speaker, pitch, and energy (i.e., variance) information to the speaker-independent content representation and provides a way to control them <cit.>. The variance adapter consists of three modules: (a) speaker adapter, (b) pitch predictor, and (c) energy predictor. The speaker adapter conditions the speaker embedding on the content representation and passes it to the pitch and energy predictors. During training, we use the ground-truth pitch and energy values to train the pitch and energy predictors. At inference time, we predict pitch and energy from the hidden embedding z. The predicted pitch and energy values are then conditioned on z. The pitch and energy predictors have a similar architecture, consisting of a 2-layer 1D causal CNN with ReLU activation, followed by layer normalization and a dropout layer, and an additional 1D CNN (with kernel size 1) to project the pitch and energy values onto the content representation. §.§ Decoder The decoder follows the same design as HiFiGAN <cit.> and can be seen as a mirror image of the content encoder with causal CNNs. Additionally, we condition the speaker embedding through the speaker adapter at the output of each residual block. For both versions of our model, we use upsampling rates of [5, 4, 4, 2, 2]. The residual blocks have kernel sizes of [3, 7, 11] with dilation rates of [[[1, 1], [3, 1], [5, 1]] * 3]. §.§ Training The content encoder is trained to predict the pseudo-labels at the output of the HuBERT module using cross-entropy loss. We prevent gradient flow (i.e., back-propagation) from the decoder to the encoder to ensure that speaker information is not leaked via the content representation. The pitch and energy predictors in the variance adapter apply a mean-squared error loss for pitch and energy prediction. Following the HiFiGAN architecture <cit.>, we apply a combination of adversarial losses at the output of the decoder, including feature loss, multi-period discriminator loss, multi-scale discriminator loss, multi-resolution STFT loss <cit.>, and Mel-spectrogram reconstruction loss. These discriminators have a similar architecture to those in HiFiGAN, including similar weighting schemes to compute the total loss. § EXPERIMENTAL SETUP We trained our system on the LibriTTS corpus <cit.> following guidelines for the Voice Privacy Challenge 2022 (VPC22) <cit.>. All our experimental results are presented on the LibriTTS dev and test sets, which were not part of the training.
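As an aside on implementation: the streaming constraint used throughout the encoder and decoder described in the previous section comes down to left-padded (causal) convolutions, which only look at past samples. The PyTorch sketch below is a minimal illustration of such a layer; channel counts, kernel size, and stride are placeholders rather than the exact configuration used in this work.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that pads only on the left, so output frame t depends
    only on input frames <= t (no future context is used)."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, dilation=1):
        super().__init__()
        self.left_pad = dilation * (kernel_size - 1)
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              stride=stride, dilation=dilation)

    def forward(self, x):                                # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.left_pad, 0))     # pad the past side only
        return self.conv(x)

# Toy usage: a strided causal layer, as used for downsampling in the content encoder.
layer = CausalConv1d(in_ch=1, out_ch=32, kernel_size=7, stride=2)
chunk = torch.randn(1, 1, 320)                           # a 20 ms chunk at 16 kHz
print(layer(chunk).shape)                                # -> torch.Size([1, 32, 160])
```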
We use a pretrained HuBERT <cit.>[https://github.com/facebookresearch/fairseq] and extract the the output from its 9th layer. We set the number of cluster centroids to 200. For all our experiments, we use a sampling rate of 16 kHz and batch size of 16 with the AdamW optimizer with a learning rate of 2*10^-4 annealed down to 10^-5 by exponential scheduling. The encoder is first pretrained for 300k steps (for training stability), and then trained together with the decoder for an additional 800k steps. The pretrained speaker encoders were taken from speechbrain <cit.>. The pseudo speaker embedding generator follows the training procedure described in <cit.>. All our models are trained using two NVIDIA Tesla V100 GPUs. § RESULTS We evaluated our system on a series of subjective and objective measures of synthesis latency, synthesis quality, privacy as well as speaker transfer ability. We compare our results against five baselines: three state-of-the-art VC models (VQMIVC[https://github.com/Wendison/VQMIVC] <cit.>, QuickVC[https://github.com/quickvc/QuickVC-VoiceConversion] <cit.>, and DiffVC[https://github.com/huawei-noah/Speech-Backbones/tree/main/DiffVC] <cit.>) and two speaker anonymization models[https://github.com/Voice-Privacy-Challenge/Voice-Privacy-Challenge-2024], a DSP-based model <cit.> (baseline B2 from VPC22) and a ML-based model (to which we refer as B3) that uses a transformer-based ASR and a Fastspeech2-based TTS with a WGAN-based anonymizer <cit.>. The five baseline models are trained on the same dataset as our proposed system, and we use their pretrained checkpoints obtained from their corresponding github repositories. We could not find any open-source streaming speech synthesis model and hence were unable to include them as baselines. We evaluate our model using the same Libri-TTS evaluation split as VPC22. For the VC baselines we randomly select a speaker from the CMU Arctic corpus <cit.> as the target speaker. §.§ Synthesis Latency Our pretrained HuBERT model produces speech frames at 50Hz, so the smallest chunk size that our model can process is 20 ms. In this section, we present the synthesis latency for the base and lite versions for our model, for various chunk sizes between 20ms and 140ms on both CPU and GPU devices. We define latency as the sum of chunk size and the average time taken by the model to synthesize the corresponding chunk. For a system to be real-time, the latency should be less than twice the chunk size. Results are summarized in Table <ref>. On GPU, our base version can operate in real-time for the chunk size of 40ms with a latency of 64ms, while on CPU the base model can be real-time for chunk size of 120ms with a latency of 230ms. In case of the lite version, the model is real-time for chunk size of 20ms with a latency of 38ms on GPU and can operate in real-time for 40ms with latency of 66ms on the CPU. For our next set of experiments we set the chunk size of 120ms and 40ms for the base and lite versions respectively. §.§ Synthesis Quality We use DNSMOS <cit.> as an objective measure of naturalness for our experiments. DNSMOS provides three ratings for quality of speech (SIG), noise (BAK), and overall (OVRL). Additionally, we assess the intelligibility of synthesized speech through Word Error Rate (WER). Table <ref>, summarizes results for the five baselines and the proposed systems. In terms of DNSMOS, our models achieve similar ratings as Diff-VC, QuickVC, and B3 across the three measures, and comparable or better than the source speech. 
In terms of intelligibility, our systems achieve a WER comparable to that of B3, and superior to the rest, even though our models operate in a causal fashion with far more limited context. We verified the synthesis quality of our two models through listening tests on Amazon Mechanical Turk (AMT). Namely, participants (N=20) were asked to rate the acoustic quality of utterances using a standard 5-point scale mean opinion score (MOS) (5: excellent, 1: bad). Each listener rated utterances synthesized using the base and lite models, as well as original utterances (20 for each). Results are shown in Table <ref>. Both systems (base and lite) obtained MOS ratings comparable to those of the original utterances. We see a difference of 0.1 between the MOS scores of the base and lite versions, but we did not find it to be significant (p≪ 0.001). It is noteworthy that while the lite version has 0.1x the number of parameters, it achieves nearly the same synthesis quality as the base version. §.§ Speaker Anonymization To assess speaker-anonymization performance, we report Equal Error Rate (EER) on the speaker verification model (ASV) in the VoicePrivacy 2024 Challenge github (see section <ref>). ASV tests are conducted for the following two scenarios: (a) ignorant, where we only anonymize the trial data (O-A), or (b) lazy-informed, where we anonymize both enrollment and trial data but use different target speakers (A-A). Results are shown in Table <ref>. For both the ignorant and lazy-informed scenarios, our models achieve similar performance to B3 and outperform VPC22 B2. Although our base model performs slightly worse than B3, the differences between them are not statistically significant (p≪ 0.001). To corroborate these results, we conducted an AB listening test on AMT. Participants were presented with two audio samples, one from a speaker in the enrollment set, and the second sample from one of three options: (a) a different utterance from the same speaker from the trial set, (b) an utterance from a different speaker from the trial set, or (c) another utterance of the same speaker from the trial set but anonymized through the lite version of our system. Then, participants had to decide if both samples were from the same speaker, and rate the confidence in their decision using a 7-point scale (7: extremely confident; 5: quite a bit confident; 3: somewhat confident; 1: not confident at all). Each listener rated 20 AB pairs per scenario. Results are summarized in Table <ref>. In settings (a) and (b), listeners could easily identify whether the recordings were from the same or different speakers (81%) with high confidence (5.71). In setting (c), however, the anonymized trial data obtained similar ratings as in (b), indicating that the proposed system was able to anonymize the trial recordings. §.§ Speaker Identity Transfer In a final step, we evaluated our models' ability to capture the voice of a target speaker. For this purpose, we used an objective score of speaker similarity based on the cosine similarity between speaker embeddings of the target and the synthesized utterances[Computed using: https://github.com/resemble-ai/Resemblyzer]. We compare our model against the three VC baselines (VQMIVC <cit.>, QuickVC <cit.>, DiffVC <cit.>) using the same settings as those in section <ref> to generate VC samples. Results are summarized in the rightmost column of Table <ref> (SSS). As a guideline, pairing two utterances from the same speaker yields an average cosine similarity of 0.89.
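For reference, this speaker similarity score is simply the cosine similarity between speaker embeddings of the target and synthesized utterances. A minimal sketch using the Resemblyzer package cited in the footnote is shown below; the file paths are placeholders, and the exact preprocessing may differ from the setup used for the table.

```python
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

def speaker_similarity(wav_path_a, wav_path_b, encoder=None):
    """Cosine similarity between d-vector embeddings of two utterances."""
    encoder = encoder or VoiceEncoder()
    emb_a = encoder.embed_utterance(preprocess_wav(wav_path_a))
    emb_b = encoder.embed_utterance(preprocess_wav(wav_path_b))
    return float(np.dot(emb_a, emb_b) /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# Placeholder paths: a target-speaker reference vs. a converted/synthesized utterance.
print(speaker_similarity("target_reference.wav", "converted_output.wav"))
```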
As shown, our base model outperforms the three baselines, achieving a cosine similarity that is close to the average within-speaker similarity of 0.89. § DISCUSSION Most existing speaker anonymization methods do not operate in low-latency streaming mode, preventing their use in field operations. In this paper, we present an end-to-end streaming model that operates with low latency and achieves anonymization by mapping the speaker embedding onto an artificially generated pseudo-speaker in a causal fashion (i.e., no future context). While there exists a quality-latency tradeoff, our system can achieve latency as low as 66ms while maintaining state-of-the-art naturalness, intelligibility, and privacy preservation. Our lite version is roughly 10MB and can potentially be deployed on mobile devices to support real-time field applications. Accent can carry speaker-related cues <cit.>, and in future work, we aim to add accent conversion to this pipeline. Another research direction is to add control of emotion while synthesizing speech signals. § ACKNOWLEDGEMENTS This work was funded by NSF awards 619212 and 1623750.
http://arxiv.org/abs/2406.09100v1
20240613132907
Emergence of superradiance in dissipative dipolar-coupled spin systems
[ "Saptarshi Saha", "Yeshma Ibrahim", "Rangeet Bhattacharyya" ]
quant-ph
[ "quant-ph" ]
ss17rs021@iiserkol.ac.in yeshma@phy.iitb.ac.in rangeet@iiserkol.ac.in ^1Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, Mohanpur - 741 246, West Bengal, India ^2Department of Physics, Indian Institute of Technology Bombay, Powai - 400 076, Mumbai, India § ABSTRACT In the superradiance phenomenon, a collection of non-interacting atoms exhibits collective dissipation due to interaction with a common radiation field, resulting in a non-monotonic decay profile. This work shows that dissipative dipolar-coupled systems exhibit an identical collective dissipation aided by the nonsecular part of the dipolar coupling. We consider a simplified dipolar network where the dipolar interaction between the spin-pairs is assumed to be identical. Hence the dynamics remain confined in the block diagonal Hilbert spaces. For a suitable choice of the initial condition, the resulting dynamics require dealing with a smaller subspace which helps extend the analysis to a larger spin network. To include the nonsecular dipolar relaxation, we use a fluctuation-regulated quantum master equation. We note that a successful observation of superradiance in this system requires a weak system-bath coupling. Moreover, we find that for an ensemble of N spins, the maximum intensity of the radiation exhibits a nearly quadratic scaling (N^2), and the dipolar relaxation time follows an inverse square proportionality (1/N^2); these two observations help characterize the emergence of superradiance. Our results agree well with the standard results of pure spin superradiance observed experimentally in various systems. Emergence of superradiance in dissipative dipolar-coupled spin systems Rangeet Bhattacharyya^1 Received ; accepted ======================================================================== § INTRODUCTION The collective behavior of interacting many-particle systems can deviate significantly from the individual particles <cit.>. The observation of emergent phenomena resulted in a paradigm shift in quantum physics during the latter half of the twentieth century. Such collective phenomena can be observed in quantum many-body systems, spanning the disciplines of condensed matter, atomic and molecular physics, and quantum optics. Examples include superradiance and superconductivity, among others <cit.>. In his seminal 1954 paper, Dicke introduced the term `superradiance' for the collective, coherent spontaneous emission from a many-atom system. He showed that the superradiance would occur when a collection of excited atoms radiate due to interactions with the common radiation field <cit.>. From a semi-classical approach, he showed that the intensity of the emitted spontaneous radiation in these systems exhibits a sharp burst that decays much more rapidly than the monotonous exponential decay observed in a single excited atom. By changing the inter-atomic distance, a smooth cross-over from the monotonic decay to collective decay was found <cit.>. Ever since, attempts have been made to provide a complete quantum mechanical description of this phenomenon <cit.>. The first explanation using a simplified dynamical equation in terms of the density matrix of the system to provide a better insight into the microscopic description of the process was proposed in 1970 <cit.> and is known as the superradiance master equation. 
In the later years, several aspects of the problem, like the statistical properties of the solutions to the dynamical equations and the geometry dependence of the phenomenon, were studied thoroughly <cit.>. Despite theoretical progress, it took nearly two decades after Dicke's work for the first experimental verification of superradiance to be reported in a system of optically pumped Hydrogen Fluoride molecules <cit.>. This was followed by similar observations in several optical and condensed matter systems using the pulsed dye-laser <cit.>. Several experimental advances using optical cavities and collection of Rydberg atoms were also reported over the years <cit.>. In recent years, circuit quantum electrodynamics has enabled the use of artificial atoms and has proven to be an interesting platform to observe such collective effects <cit.>. In Dicke's analysis, the interatomic distance was chosen to be much lesser than the wavelength of the radiation field, such that the indistinguishability of the atoms gave rise to collective dissipation on coupling with the field, and all forms of interatomic coupling couplings were ignored. It was also shown that by including symmetric dipolar coupling between the atoms, such collective phenomena remain unchanged. However, for dissimilar dipolar coupling between the atoms, the permutation symmetry is broken, which results in a suppression of such collective phenomena <cit.>. Around the same time as Dicke's work on superradiance, Bloembergen showed that spontaneous radiation damping occurs in nuclear spins in the presence of an electric resonant feedback field <cit.>, where the emitted radiation was found to have similarities with the Dicke superradiance phenomena. Despite the differences in the origins of these two processes, such radiation damping is also often called superradiance in a multispin system <cit.>. As an explanation for this phenomenon, a theoretical analysis using the phenomenological Bloch equation was proposed, which showed that the superradiance in these systems originated from the thermal Nyquist noise of the resonant circuit <cit.>. However, a strong objection was raised by Yukalov regarding the presence of thermal noise in this problem on the grounds of qualitative mismatch between the theoretical prediction and experimental observations <cit.>. He argued that the origin of superradiance in multispin systems is the presence of local interaction & not the thermal noise in the circuit. Since the local interaction in NMR is identified as the dipolar interaction, the common environment effect in such cases is negligible. Yukolav proposed that the nonsecular part of the dipolar interaction contributes to the collective relaxation of the system. In this case, such a collective relaxation process was called pure superradiance because, in the absence of initial coherence, the correlation builds up through a purely self-organized process <cit.>. He provided a set of stochastic equations to explain this process, as the Bloch equation failed to describe such collective phenomena. Since the proposed model was semi-classical, a complete quantum mechanical picture to describe this phenomenon is still an open problem. Generally, the dynamics of a system weakly coupled with the thermal bath at finite temperature is governed by the Born-Markov quantum master equation (QME) <cit.>. 
The formulation of the QME is based on the independent rate approximation, where any local interaction (e.g., periodic drive, dipolar interaction) appears in the first order, and the dissipation is solely due to the second-order contribution of the system-bath coupling <cit.>. For spin-half systems, the Born-Markov QME results in the Bloch equation. Hence, the Bloch equation can only provide the notion of the spin-lattice relaxation process captured by the second-order terms of system-bath coupling. As a result, for a dipolar system, such an equation describes the first-order effects of the secular terms of the interaction and fails to include the second-order terms that come from both the secular and nonsecular parts. Bloembergen showed that the spin-spin relaxation process originates from the second-order contribution of the nonsecular dipolar interaction <cit.>. In the case of pure spin superradiance, as Yukalov proposed, contributions from the spin-lattice and spin-spin relaxation processes are crucial to the dynamics <cit.>. Therefore, the problem demands the use of a modified QME capable of capturing both the relaxation processes. Recently, Chakrabarti proposed a fluctuation-regulated quantum master equation (FRQME), which can be used for calculating the second-order effects of any local interaction along with the system-bath coupling <cit.>. Later, FRQME was applied to study several phenomena in quantum optics, quantum computation, and information processing <cit.>. In the case of dipolar systems, this formalism can predict the existence of second-order terms that come from the secular and nonsecular dipolar interaction where the real part of the second-order terms results in a Lorentzian absorption line shape <cit.>. Therefore, such formalism can explain both the spin-spin and spin-lattice relaxation process using a single dynamical equation. The effect of such terms on the linewidth of the magic angle spinning spectrum (MAS) in NMR, on the entanglement storage device, and the lifetime of the prethermal phase was recently studied by the same author <cit.>. In this paper, we address the following question: in the case of a dipolar network, can the interactions lead to a collective dissipation in the presence of thermal fluctuations in the local environment instead of coupling via a common environment? We seek the answer by investigating the effects of both the second-order secular and nonsecular terms of the dipolar coupling and system-bath coupling using FRQME. This work strives to provide a dynamical equation similar to the superradiance master equation that explains superradiance in dipolar coupled spins in the presence of local thermal fluctuations. Through this approach, we naturally account for the nonsecular terms arising from the interactions, thereby rigorously proving Yukalov's conjecture <cit.>. We also note that this description accounts for environment-driven relaxation along with spontaneous emission <cit.>. The organization of the paper is as follows. In section <ref>, we briefly discuss the description of a system of dipolar coupled N spins. In section <ref>, we present the dynamical equation for this system using FRQME. We describe the non-equilibrium dynamics of the system under dipolar interaction in section <ref>. Section <ref> is devoted to the discussion on the comparison of our theoretical analysis with the existing experimental results. Finally, in section <ref>, we discuss the results and conclude in section <ref>. 
§ DESCRIPTION OF THE SYSTEM We consider an ensemble of dipolar coupled N spin-1/2 particles, where each spin is weakly coupled with its local thermal environment in the presence of a Zeeman field. The full Hamiltonian for the system can be written as, ℋ = ^∘+^∘ + + + (t) . The first term is the free Hamiltonian of the system. Here we define, ^∘ = ∑_i=1^N ω_∘/2σ_z^i where ω_∘ is the Zeeman frequency of the spins and σ_i is the Pauli spin matrix [i = x,y,z] . We assume that all the spin-1/2 particles have the same energy levels. The second term, ^∘, is the free Hamiltonian of the thermal bath. Here, ^∘=∑_i ω_L L_+iL_-i, where ω_L is the frequency of the bath and L_± i stands for the creation and annihilation operators of the bath. The system-environment coupling is defined by that can be modeled using the Jaynes-Cummings Hamiltonian, = ω_ SL( σ_+i L_- i + h.c ); where ω_ SL is the coupling amplitude and σ_± are the raising and lowering operator corresponding to the ith spins. represents the dipolar interaction in the system. We note that in our case, the nature of the dipolar interaction is different from the Dicke model, where such interactions are mediated by the photons of common radiation field <cit.>. Here we consider a direct dipolar interactions as encountered in magnetic resonance spectroscopy <cit.>. Two spins having non-zero magnetic moment in the presence of Zeeman field interact with each other when they are physically close. We also note that the direct interaction between two magnetic dipoles has no connection with the common system-environment coupling. The analytical form of is given below. = ∑_i,j=1^N μ_∘ħ/4 π(μ⃗_⃗i⃗.μ⃗_⃗j⃗/r^3 - 3(μ⃗_⃗i⃗.r⃗)(μ⃗_⃗j⃗.r⃗)/r^5), ∀ i >j. Here, μ⃗ = γσ⃗/2, γ denotes the gyro-magnetic ratio, μ_∘ is the magnetic permeability constant and r is the distance between any two spins. The irreducible spherical tensor representation of is written as, = ∑_i,j=1^N∑_m = -2^2 ω_d_m(𝒯_2^m)_ij, ∀ i>j. We define, ω_d_m = ω_d Y_2^-m(θ, ϕ) and ω_d = ħμ_∘γ^2/4 π r^3 where, Y_2^-m is the spherical Harmonics. θ and ϕ are, respectively, the polar and azimuthal angles between the orientation of the dipolar vector with the strong Zeeman field. 𝒯_2^m is the irreducible spherical tensor of rank 2. The definitions of `secular' and `nonsecular' Hamiltonians are borrowed from NMR literature<cit.>. The secular part commutes with the Zeeman Hamiltonian, and the nonsecular part does not commute with the Zeeman Hamiltonian. As such, secular parts do not pick up an oscillating time dependence in the interaction representation, whereas the nonsecular part contains a time-dependent part. The m=0 component represent the secular part of and m={±1,±2} stand for the non-secular components <cit.>. To obtain the maximum intensity of the radiation burst in the case of the Dicke superradiance problem, the permutation symmetry of the atoms is assumed. As a result, the dynamics can be solved using the collective basis, which keeps the Hilbert space dimension finite. Hence, it is a theoretical simplification for understanding the complicated many-body quantum dynamics. Later, the dependence of radiative intensity on the geometry of the arrangement of atoms was also studied by several authors <cit.>. Motivated by the above approaches, to obtain the maximum correlation created by the nonsecular dipolar correlation, we also assume the atoms are indistinguishable. 
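As a small, purely illustrative numerical check of the secular/nonsecular distinction introduced above, the snippet below constructs the rank-2 spherical tensor components for a single spin pair from the standard NMR expressions (T_2^0 ∝ 3 I_1z I_2z − I_1·I_2, T_2^{±1} ∝ I_1z I_2± + I_1± I_2z, T_2^{±2} ∝ I_1± I_2±) and verifies which components commute with the collective Zeeman term. The normalization conventions are assumptions and may differ from those used in this work.

```python
import numpy as np

# Single spin-1/2 operators (I = sigma / 2) and two-spin embedding via Kronecker products.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
sp, sm = sx + 1j * sy, sx - 1j * sy
I2 = np.eye(2, dtype=complex)

def pair(op1, op2):
    return np.kron(op1, op2)

# Rank-2 spherical tensor components for one dipolar-coupled spin pair.
T20 = (3 * pair(sz, sz) - (pair(sx, sx) + pair(sy, sy) + pair(sz, sz))) / np.sqrt(6)
T2p1 = -0.5 * (pair(sz, sp) + pair(sp, sz))
T2m1 = +0.5 * (pair(sz, sm) + pair(sm, sz))
T2p2 = 0.5 * pair(sp, sp)
T2m2 = 0.5 * pair(sm, sm)

# The secular (m = 0) part commutes with the collective Zeeman term; the m != 0 parts do not.
Jz = pair(sz, I2) + pair(I2, sz)
comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(Jz, T20), 0))    # True  -> secular
print(np.allclose(comm(Jz, T2p1), 0))   # False -> nonsecular
print(np.allclose(comm(Jz, T2p2), 0))   # False -> nonsecular
```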
Furthermore, we have also modeled the dipolar interaction as a mean dipolar interaction (i.e., ω_d is the mean dipolar coupling amplitude) between the spin-pairs. We note that such an assumption was already being used in solving several problems in dipolar systems <cit.>. It is an ansatz that helps us to solve the dynamics using the collective basis and reduce the dimensionality issue in this problem, as we will see from our analysis in the next sections. On the other hand, how r_ij, θ_ij and ϕ_ij affect the correlation for a particular configuration of the dipolar network is not explored here and remains an open problem. (t) represents the explicit presence of thermal fluctuations in the local environment, whose form is given by (t) = ∑_i f_i(t) |ξ_i ⟩⟨ξ_i | such that it is diagonal in the basis of ^∘. f_i(t) is assumed to be δ correlated Gaussian white noise with a second moment κ. f_i(t)f_j(t-τ) =κ^2 δ_ij(τ). The fluctuations ensure that the bath coherences decay exponentially with a characteristic time constant τ_c, where τ_c = 2/κ^2. The existence of such fluctuations can destroy the coherences in the local environment within a timescale τ_c, but the diagonal elements of the thermal density matrix remain unchanged. Therefore, the fluctuations do not destroy the equilibrium of the bath. A clear separation of timescale exists in the system, which is given by τ_c ≪Δ t ≪ t_s. Here, Δ t is defined as the coarse-grained time scale as introduced by Cohen-Tannoudji <cit.>, and t_s is the system-relaxation time scale. In this coarse-grained time scale, the propagator is constructed in such a way that the whole system evolves infinitesimally under +, while the bath evolves by a finite amount under (t) <cit.>. The cumulant expansion of the environment fluctuations of the propagator gives rise to a memory kernel (e^-τ/τ_c) <cit.>. Finally, taking partial trace over the environment operator results in a Markovian quantum master equation with a memory kernel in second-order terms, known as FRQME, that provides a dynamical equation for the system <cit.>. In the interaction picture of ^∘ + ^∘, the FRQME is written as, d (t)/dt = -i _ L[(t),(t) ⊗]^ sec -∫^∞_0 dτ_ L[(t),[(t-τ), (t) ⊗]]^ sece^-τ/τ_c, Here, (t)=^I +^I. `I' denotes the interaction representation w.r.t free Hamiltonian of the system (^∘+^∘). is the equilibrium density matrix of the environment, and `sec' denotes the secular approximation <cit.>. The presence of an exponential kernel (exp(-t/τ_c)) results in a finite second-order contribution of along with . The above dynamical equation (Eq. (<ref>)) can be reduced to the Gorini-Kossakowski-Sudarsan-Lindblad (GKSL) form; so CPTP (complete positivity and trace preservation) holds. A detailed derivation of the FRQME (Eq. <ref>) is provided in appendix <ref>. §.§ Applicability of FRQME in dissipative dipolar systems In the case of a dissipative dipolar system, both the spin-spin and spin-lattice relaxation processes are important <cit.>. For liquids and gases, where high motional narrowing occurs, we note that ω_dm = 0. Hence no first-order terms coming from contribute to the dynamics. The polar and azimuthal angles {θ, ϕ} randomly vary in time due to molecular reorientation. As a result. The correlation between the Y_2^± m (θ, ϕ) [as given in dipolar Hamiltonian] decays exponentially in time. To calculate the analytical expression, one needs to average {θ, ϕ} over a solid angle 4 π. 
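The statement above that δ-correlated Gaussian fluctuations with second moment κ² dephase bath coherences on a timescale τ_c = 2/κ² can be checked numerically. The following Monte-Carlo sketch (an illustration only, with arbitrary units) compares the ensemble-averaged phase factor ⟨e^{−i∫_0^t f(t')dt'}⟩ against e^{−t/τ_c}.

```python
import numpy as np

kappa2 = 4.0                     # second moment kappa^2 of the fluctuation (arbitrary units)
tau_c = 2.0 / kappa2             # predicted coherence decay time, tau_c = 2 / kappa^2
dt, n_steps, n_traj = 1e-3, 1000, 5000

rng = np.random.default_rng(0)
# f_k ~ N(0, kappa^2 / dt) reproduces <f(t) f(t')> = kappa^2 delta(t - t') in the continuum limit.
f = rng.normal(0.0, np.sqrt(kappa2 / dt), size=(n_traj, n_steps))
phase = np.cumsum(f, axis=1) * dt
coherence = np.abs(np.mean(np.exp(-1j * phase), axis=0))

# At t = tau_c the averaged coherence should be close to exp(-1) ~ 0.37.
k = int(tau_c / dt) - 1
print(coherence[k], np.exp(-1.0))
```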
In our theoretical prescription, the explicit presence of thermal fluctuation successfully captures similar second-order effects of interaction that has been predicted by previous approaches <cit.>. On the other hand, in the case of a liquid being slowly frozen to approach a solid where the dipolar vector moves in a smaller solid angle along the average dipole moment of the spin-pairs, the first-order terms coming from has a finite contribution in the dynamics. A case in point is liquid crystal, where one experiences residual dipolar coupling (average of the dipolar vector over an incomplete spherical surface) and also has dipolar relaxation <cit.>. In case of further freezing (i.e., for solids), the dipolar vector moves within a very narrow solid angle. Therefore, the time-dependent perturbation theory is still applied in this case, and an exact result can be obtained by averaging over all choices of the average dipole moment of the spin-pairs. For simplicity, we assume that the dipole moments are identical. We also note that the environmental correlation time (τ_c) plays a vital role. In the case of liquids, the value of τ_c is small, whereas, for solids, the value of (τ_c) is relatively large <cit.>. The Redfield equation and Bloembergen's approach are unable to capture such effects in solids <cit.>. Similarly, the Born-Markov master equation only describes the spin-lattice relaxation process, and it fails to explain the spin-spin relaxation process <cit.>. Therefore, we find that FRQME is useful for such cases <cit.>. Using FRQME, we have recently successfully explained the emergence of prethermal phases and discrete time crystals in dipolar solids <cit.> in the presence of periodic drive, where the second-order dipolar contributions play a pivotal role in the dynamics. Our analysis is in excellent agreement with the existing experimental evidence <cit.>. Recently Harkins used Nakazima-Zwanzig formalism to obtain the second-order contribution of the dipolar interaction between ^13C atom and NV center to explain the dynamical stabilization in diamonds <cit.>. In our case, a similar expansion scheme (i.e., FRQME) is used to observe the effects of second-order atom-atom dipolar interactions. We further ensure that no such double counting occurs in such cases while calculating the second-order effects. For example, in the case of dipolar solids, the first-order terms provide an energy shift, which results in a Fourier peak in the frequency domain, while the second-order terms give the spectrum linewidth. For the remaining part of the manuscript, we will not confine ourselves to the particular case of liquids or gases and provide a generic description of the dipolar systems, as FRQME is applicable for both the liquid state and solid state systems. § DYNAMICS OF THE SYSTEM In this section, we describe the dynamics of the system using FRQME. The dynamical equation for the N-spin density matrix (ρ_s^N) using Eq. (<ref>) is given by, d ^N/dt = 𝒟_d[^N ] + 𝒟_ SL[^N ] The term 𝒟_d[^N ] stems from the first-order and second order contribution of . The explicit form of 𝒟_d[^N ] is given as, 𝒟_d[^N ] = ∑_i,j,k,l=1^N∑_m=0^2(-i [(ω_d_0𝒯_2^0)_ij , ^N ] - Γ(m)[(ω_d_± m 𝒯_2^± m)_ij,[(ω_d_∓ m 𝒯_2^∓ m)_kl, ^N ]]) [∀ i >j,k>l]. The real part of the second order contribution of signifies the spin-spin relaxation process. Here, Γ(m) = |ω_d_m|^2τ_c/1 + (mω_∘τ_c)^2, is the dipolar relaxation rate. We neglect the contribution of the second-order shift terms of nonsecular interactions in Eq. (<ref>). 
𝒟_ SL[^N ], signifies the contribution from (spin-lattice relaxation process) which is written as, 𝒟_ SL[^N ] = ∑_j=1^N p_∓[ 2σ_j∓^N σ_j± - {σ_j±σ_j∓ , ^N}]. In the above case, we do not consider a common environment because, for nuclear spins, dipolar coupling does not require a common environment <cit.>. Two magnetic dipoles can interact with each other through their magnetic field <cit.>. As such, while the nuclear spins relax due to the local environment, they can be dipolar coupled to their neighbors. In addition, we account for the fluctuations in the local environment, which contribute to the dipolar relaxation <cit.>. Here, the effect of the Lamb shift is also neglected. p_-, p_+ are respectively defined as the downward and upward transition rate due to , which can be calculated from the spectral density function for the system-environment coupling. Their form is given by p_± = Re( ∫_0^∞ dτ[ω_ SL^2 e^-τ/τ_c e^∓ i (ω_L - ω_∘)τ_ L{L_± L_∓}]). Re denotes the real part of the expression. The relaxation time τ_1 is defined as τ_1 = 1/(p_+ + p_-). In the absence of , corresponding equilibrium magnetization, M_z^ eq, is written as, M_z^ eq = p_+ - p_-/p_+ + p_-. The above Eq. (<ref>) can also be written in the Liouville space as dρ̂_s^N/dt= ℒ̂̂̂ρ̂_s^N, where ℒ̂̂̂ is the Liouvillian super-operator. For N spin 1/2 system, the dimension of the Hamiltonian is 2^N and the dimension of ℒ̂̂̂ is 2^2N. Therefore, increasing the number of spins in the system makes the dynamical equation of ρ_s^N harder to simulate numerically. In order to study superradiance, we look at the operators corresponding to the radiative intensity (I), defined as, I = J_+ J_- where, J_± = ( ∑_i=1^N σ_± i). In Fig. <ref>, we plot ⟨ I(t) ⟩ as a function of time for two choices of , (i) ω_d = 0, and (ii) ω_d > ω_ SL for the four spin system. The initial state is chosen to be the same as Dicke's superradiance problem, |ψ⟩= |↑↑ .. ↑_N⟩ <cit.>. For such a choice of the initial state, the first-order contribution due to is zero, as the above-mentioned state is the eigenstate of the secular part of . Therefore, in the case of both liquids and solids, the main contribution comes from the second-order effects. For case (i), where the dipolar coupling is set to zero, the time evolution shows monotonic decay. On the other hand, for case (ii), where the dipolar coupling is set to a non-zero value, an increment in I(t) is observed in an intermediate time scale where the dipolar coupling is much stronger than the system-bath coupling. Therefore, we note that this increment of I(t), which is defined as a radiative burst, is associated with the dipolar interaction. As the initial state |ψ⟩ is the eigenstate of the secular part of , only the nonsecular part contributes to the dynamics. We also check the effect of the spatial correlation between the different local environments in the radiative intensity. In such a case, where the spatial correlation is taken into account, the cross terms between different spin-bath coupling become effective <cit.>. For those cross terms, p_-, p_+ are further modified by an α_c factor, where α_c ∝ e^-r/ξ_c, such that p_± is replaced by α_c p_± <cit.>. Here, r is the distance between the spin pairs, and ξ_c is the bath-correlation length. If r → 0 or ξ_c →∞, the environment acts as a common environment (α_c → 1), which is an asymptotic limit and is not applicable for our case. For, r ≫ξ_c, the environment acts as a local environment (α_c =0). 
In the limit ω_d ≫ω_ SL, the dynamics in the intermediate timescale have no dependence on the spatial correlation between the different local environments, as the dynamics are dominated by ℋ_d in this regime. For increasing spatial correlations, the system spends more time in the quasi-steady state and reaches the final equilibrium at a longer time, as we have shown in our earlier work <cit.>. As our main focus is to describe the underlying physics behind the radiative burst in the intermediate time scale, we will use local environments with no spatial correlation (α_c = 0) for simplicity. We also note that the peak associated with the superradiance depends on the inter-atomic separation. More specifically, when the distance between the atoms decreases, the dipolar coupling increases. To illustrate this, we plot ⟨ I(t)⟩ as a function of time for different choices of ω_d in Fig. <ref>(a). When ω_d ≫ω_ SL, such a burst appears at a much earlier timescale. By decreasing the amplitude of the dipolar interaction, the peak appears at a comparatively later time. For the case ω_d ≤ω_ SL, no such peak arises, and the dynamics exhibit a monotonic decay. In experiments, instead of one radiation burst, multiple bursts at different times can also be observed. The origin of these multiple peaks can be described using Fig. <ref>(a) by noting that such phenomena can occur in a collection of spins where the dipolar interaction between several spin pairs is not identical. For more strongly coupled spin-pairs, such a burst is observed at earlier times, whereas, for pairs with a weaker coupling, the burst occurs at comparatively later times. We also plot the maximum intensity by varying ω_d/ω_ SL in Fig. <ref>(b). The plot shows that the radiation burst happens when ω_d >ω_ SL and that, on increasing ω_d beyond a threshold value, the maximum intensity does not increase further. Further, to understand how the geometry of the system affects the radiation burst, we also plot ⟨ I(t) ⟩ versus time for different configurations of the system in Fig. <ref>(c). The configurations we study are given below: (i) a dipolar network with all-to-all coupling, (ii) a circular chain of dipolar coupled spins, and (iii) a linear chain of dipolar coupled spins. We observe that the peak is maximum when an all-to-all coupling is present in the system. For circular chains, this peak is lower, and it is the lowest for the linear arrangement. We will describe the reason behind such geometry-dependent phenomena later in the manuscript. As the peak is maximum for the all-to-all coupled case, we only consider all-to-all coupled dipolar networks in the remaining part of our analysis. The operator corresponding to the radiative intensity can be written as a sum of the collective `z' magnetization (J_z) and the dipolar correlation matrix (𝒟_c). For an N-spin system, it is written as I = (N/2) 1 + J_z + 2𝒟_c. Here, 1 is the identity matrix and J_z = ∑_i σ_zi. The dipolar correlation matrix is given by, 𝒟_c = 1/4∑_i,j=1^N (σ_i+⊗σ_j- + σ_i-⊗σ_j+) [∀ i > j] 𝒟_c is an off-diagonal matrix representing the system's spin-spin correlation. We plot ⟨ J_z ⟩ and ⟨𝒟_c ⟩ as a function of time for a dipolar network consisting of four spins in Fig. <ref>(d). The plot shows that three timescales exist in the system. ⟨𝒟_c ⟩ grows on a time-scale τ_R and ⟨ J_z ⟩ decays on a time-scale τ_2. Finally, ℋ_ SL provides a very long decay time τ_1. So, we have τ_R < τ_2 ≪τ_1. 
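The decomposition of the radiative-intensity operator can be checked directly by constructing the collective operators in the 2^N-dimensional product space. The sketch below does this for a small network using standard spin-1/2 ladder operators; with this convention the identity I = (N/2)1 + J_z + 2𝒟_c holds exactly, while the numerical prefactor in front of 𝒟_c depends on how σ_± are normalized.

```python
import numpy as np
from functools import reduce

N = 4  # number of spins (illustrative)

# Single-site spin-1/2 operators: sigma_+ = |up><down|, S_z = sigma_z / 2.
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T
sz = np.diag([0.5, -0.5]).astype(complex)
id2 = np.eye(2, dtype=complex)

def embed(op, site):
    """Place a single-site operator at position `site` in the N-spin product space."""
    ops = [id2] * N
    ops[site] = op
    return reduce(np.kron, ops)

J_plus = sum(embed(sp, i) for i in range(N))
J_minus = sum(embed(sm, i) for i in range(N))
J_z = sum(embed(sz, i) for i in range(N))

# Dipolar correlation operator built from flip-flop terms between distinct spins.
D_c = 0.5 * sum(embed(sp, i) @ embed(sm, j) + embed(sm, i) @ embed(sp, j)
                for i in range(N) for j in range(i))

I_op = J_plus @ J_minus
decomposition = (N / 2) * np.eye(2 ** N) + J_z + 2 * D_c
# With these conventions the identity I = (N/2)*1 + J_z + 2*D_c holds exactly.
print("max deviation:", np.max(np.abs(I_op - decomposition)))
```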
We find that ⟨𝒟_c ⟩ grows in a much shorter timescale than the decay time of ⟨ J_z ⟩, which results in a peak of the radiative intensity (⟨ I(t) ⟩) in the intermediate timescale. However, the radiative burst does not survive after the time τ_2. To study the collective properties of the dynamics, we only focus on the time-evolution up to the time τ_2. Therefore, the contribution from is neglected for our further analysis. In the intermediate timescale (up to τ_2), considering all-to-all dipolar coupling, there exists a quasi-conserved quantity in the system, given by, d ⟨𝐉^2 ⟩/dt = 0 Here, the conserved quantity, 𝐉= ∑_i σ_𝐢/2 is known as the total angular momentum operator. Following the definition of the symmetry operator in QME, an operator 𝒪 is said to be a symmetry if the corresponding super-operator commutes with the Liouvillian (ℒ̂̂̂) i.e., [𝒪̂̂̂,ℒ̂̂̂] = 0 <cit.>. For this case, we have [𝒟̂̂̂_d, 𝐉̂̂̂]=0, since the effect of 𝒟̂̂̂_ SL is neglected. We note that the Liouvillian super-operator is invariant under the following symmetry transformation 𝒰̂̂̂𝒟̂̂̂_d 𝒰̂̂̂^† = 𝒟̂̂̂_d (here 𝒰̂̂̂ = exp(-i Φ𝐉̂̂̂ ), and Φ is a real parameter). Therefore, we follow the common practice and adopt the collective basis (or angular momentum basis) approach as the particular Hilbert space in this basis grows linearly with increasing the number of atoms <cit.>. § DYNAMICS UNDER DIPOLAR INTERACTION IN THE COLLECTIVE BASIS FOR Ω_D ≫Ω_ SL The most important feature of the presence of the symmetry operator (𝐉^2) is that the dynamics are confined in a particular | J M ⟩ block. For example, in this case, the excited state is chosen as the initial state of the individual atoms. In the | J M ⟩ basis, it is written as, |ψ⟩= | J=N/2,M=N/2⟩. Hence, the dynamics is confined to the principal |N/2, M ⟩ block <cit.>. As a consequence of the presence of such a symmetry operator, the dipolar interaction can also be written in terms of the collective operators (J_+, J_-, and J_z ). The form of using J_i operators is given by, ^0 = ω_d_0(3 J_z^2 - 𝐉^2) ^+1 = ω_d_1(J_zJ_+ - J_+/2) , ^-1 = (^+1)^* ^+2 = ω_d_2/2J_+J_+ , ^-2 = (^+2)^* The matrix element of the collective operators in the | J M ⟩ basis are given below, J_z = M δ_ J^',Jδ_ M^',M J_± = √((J∓ M)(J ± M+1))δ_ J^',Jδ_ M^',M± 1 The initial state of the system is the eigenstate of ^0. So, the secular part will not contribute to the dynamics. The dynamics is now confined to (N+1) × (N+1) dimensions instead of 2^N × 2^N. The possible choice of observables in this basis is given by, P^J_MM^'(t) = _s{| J M⟩⟨ J M^'|ρ_s^N(t)}. The total number of observables is (N+1)^2 - 1 since a constraint of trace preservation exists. Using Eq. (<ref>), the dynamical equation of ^N(t) in terms of observables is written as, d/dt P^J_M,M = -α (M) (P^J_M,M - P^J_M-1,M-1) -β (M) (P^J_M,M - P^J_M+1,M+1) -γ (M) (P^J_M,M - P^J_M-2,M-2) -δ (M) (P^J_M,M - P^J_M+2,M+2) In the Eq. (<ref>), M can be varied from -J to J. Here, only diagonal elements contribute to the dynamics, so the number of independent observables is further reduced to N. In the next analysis, we simply denote P^J_MM as P_M. The expression for the above rates is obtained to be, α(M) = 2Γ(1)(J+M)(J-M+1)(M-1/2)^2, β(M) = α(-M), γ(M) = Γ(2)/2(J+M)(J-M+1)(J+M-1)(J-M+2), δ(M) = γ(-M). Here, α (M) and β(M) contains Γ(1) and γ(M) and δ(M) contains Γ(2). We note that α (M), β(M), γ(M), and δ(M) are non-linear functions of M, which results in the non monotonic decay profile. 
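A compact way to explore these equations numerically is to integrate the rate equation for P_M directly in the collective basis, which requires only N+1 variables instead of the full 2^{2N}-dimensional Liouville space. The sketch below implements the rates α(M), β(M), γ(M), δ(M) exactly as written above and evaluates the radiative intensity ⟨ I(t) ⟩ = ∑_M (J+M)(J-M+1) P_M(t) used in the following paragraphs; the values chosen for Γ(1) and Γ(2) are illustrative only and are not fitted to any experiment.

```python
import numpy as np
from scipy.integrate import solve_ivp

def collective_rates(N, gamma1, gamma2):
    """Rates alpha, beta, gamma, delta of the P_M rate equation for J = N/2."""
    J = N / 2.0
    M = np.arange(-J, J + 1)           # M = -J, ..., +J (N+1 values)
    alpha = 2 * gamma1 * (J + M) * (J - M + 1) * (M - 0.5) ** 2
    beta = alpha[::-1]                 # beta(M)  = alpha(-M)
    gam = 0.5 * gamma2 * (J + M) * (J - M + 1) * (J + M - 1) * (J - M + 2)
    delta = gam[::-1]                  # delta(M) = gamma(-M)
    return M, alpha, beta, gam, delta

def intensity_trace(N, gamma1, gamma2, t_grid):
    """Integrate dP_M/dt and return <I(t)> = sum_M (J+M)(J-M+1) P_M(t)."""
    J = N / 2.0
    M, alpha, beta, gam, delta = collective_rates(N, gamma1, gamma2)

    def rhs(_, P):
        Pm1 = np.roll(P, 1);  Pm1[0] = 0.0      # P_{M-1}
        Pp1 = np.roll(P, -1); Pp1[-1] = 0.0     # P_{M+1}
        Pm2 = np.roll(P, 2);  Pm2[:2] = 0.0     # P_{M-2}
        Pp2 = np.roll(P, -2); Pp2[-2:] = 0.0    # P_{M+2}
        return (-alpha * (P - Pm1) - beta * (P - Pp1)
                - gam * (P - Pm2) - delta * (P - Pp2))

    P0 = np.zeros(N + 1); P0[-1] = 1.0           # fully inverted initial state, M = +J
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), P0, t_eval=t_grid, method="LSODA")
    weights = (J + M) * (J - M + 1)
    return weights @ sol.y

# Illustrative rates; tau_2 can be read off from the late-time exponential decay
# (or from the spectral gap of the rate matrix, i.e. the ADR mentioned below).
t = np.linspace(0.0, 5.0, 400)
for N in (2, 4, 6, 8):
    I_t = intensity_trace(N, gamma1=1.0, gamma2=0.5, t_grid=t)
    print(f"N={N}: <I>_max = {I_t.max():.2f} (initial value {I_t[0]:.2f})")
```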
α (M) and β(M) are the transition rates that arise from the nonsecular parts of the dipolar interaction of rank 1. They are responsible for ± 1 transition, as in the rate equation P_M is connected to P_M ± 1 by α (M) and β(M). Similarly, γ(M) and δ(M) are responsible for ± 2 transition. We numerically simulate the above dynamical equation [Eq. (<ref>)] in terms of P_M. The initial condition is chosen as P_M=N/2=1. In terms of P_M, the expectation value of radiative intensity (⟨ I(t) ⟩) is expressed as, ⟨ I(t) ⟩ = ∑_M=-J^J(J+M)(J-M+1)P_M(t) We plot ⟨ I(t) ⟩ vs time by changing the number of atoms in Fig. <ref>(a). The plot indicates that for a two-spin system, no such radiation burst is observed, whereas, for N>3, the maximum intensity (⟨ I⟩_max ) increases with N. The decay time (τ_2) shows exactly opposite characteristics to that of ⟨ I ⟩_max as it decreases on increasing the atom number. Therefore, the radiation burst becomes short-lived and more intense for a higher number of atoms. The maximum intensity ⟨ I ⟩_max is plotted by varying the atom number (N) in Fig. <ref>(b). The decay time of the system (τ_2) can be defined as the inverse of the asymptotic decay rate (ADR) of the Liouvillian super-operator 𝒟̂̂̂_d. The definition of ADR is given as the spectral gap between the zero eigenvalues and the first negative eigenvalue of the Liouvillian super-operator <cit.>. The decay time is also plotted as a function of the atom number (N) in Fig. <ref>(c). Both plots show non-linearity in N (Fig. <ref>(b), (c)). We also plot ⟨ I ⟩_max and τ_2 for varying τ_c. The dipolar relaxation rate is minimum for τ_c ≈ 1/ω_∘. We note that the maximum intensity is nearly constant for varying τ_c. However, at a later time, the dynamics are governed by , and the quasi-conservation law is broken in this regime, therefore enabling the transition between different | J M ⟩ blocks. As a result, the dynamics of the system show a monotonic decay after the timescale τ_2, and the system reaches the final steady state at a timescale τ_1. However, for observing the complete dynamics of the system, one can add in the numerical simulation which provides an additional decay of the radiative intensity at a later time, which does not show any extra new features in the dynamics. § COMPARISON WITH THE EXISTING EXPERIMENTAL RESULTS The characteristics of the collective dissipation we observe here have several similarities with the pure spin superradiance in NMR. In nuclear spin systems, the effect of a common electromagnetic field is negligible <cit.>. In such systems, the spin-lattice relaxation time (T_1) is much longer than the spin dephasing rate, which comes from the dipolar interactions (T_2), i.e., T_1 ≫ T_2. For a typical NMR experiment, the spins are initially prepared in an inverted magnetization mode and simultaneously coupled with a resonator. The resonator feedback field in the presence of the nonsecular part of the dipolar interaction plays a pivotal role in the collective relaxation of such a system <cit.>. One necessary condition required for such a radiation burst is that the radiation time τ_∘ must be smaller than T_2 (τ_∘ < T_2). The first experimental observations were made by using ^27Al nuclear spins in ruby Al_2O_3 <cit.>. Later, it was observed for protons in C_4H_9OH <cit.>. Recently, magnetic nano-molecules and nano-clusters (which are formed by oxides of Ni, Fe, Co, and Hg) have also been proven as promising candidates to demonstrate such collective phenomena in large spin-systems <cit.>. 
In our case (Fig. <ref>(d)), we define ω_∘ = 2π× 10^5 kHz and the fluctuation correlation time is τ_c = 10^-4 msec. The system-bath coupling amplitude is chosen in such a way that the value of the relaxation time, τ_1 ≈ 10^4 msec. Here, ω_d = 2 × 10^3 kHz and the corresponding dipolar relaxation time, τ_2 ≈ 10^2 msec. In our case, τ_2 plays a similar role to T_2. On the other hand, the dipolar correlation appears in a timescale, τ_R = 10 msec. The radiation burst, in our case, arises in a timescale (τ_∘), where τ_∘≈τ_R. We note that this timescale separation (τ_∘<τ_2) emerges naturally in our analysis and is one of our main results. § DISCUSSIONS Collective dissipation can occur in a system of dipolar coupled spins interacting with the local environment. Such phenomena differ significantly from the Dicke superradiance. The latter is due to interaction with the common environment where the dissipation occurs through collective spontaneous emission and requires a minimum of two spins <cit.>. Also, no direct coupling between them is considered in this problem <cit.>. On the other hand, a common environment is absent in our description, and the spins are coupled with each other through dipolar interaction. In addition, they are also coupled with their local environment instead of a common environment. The collective dynamics here emerge from the cross terms of the dipolar interactions from different spin pairs and, therefore, require a minimum of three spins. We assume that the dipolar interactions between the spins are identical, and along with the spontaneous emission considered in the Dicke model, we also add the effect of absorption and emission due to the thermal environment in our description. For the initially inverted magnetization mode, the secular terms have no contribution. The relaxation process is mainly dominated by the nonsecular part of the interaction in the intermediate regime when the dipolar coupling is much stronger than the system-bath coupling. In order to capture these effects, we use FRQME instead of the usual Born-Markov master equation. We note that the strength of the dipolar interaction increases by decreasing the distance between the atoms. We also find that collective dissipation depends on the density of the system. For dense dipolar networks, when ω_d > ω_ SL, the maximum burst occurs, as shown in Fig. <ref>(b). Whereas, for ω_d ≈ω_ SL, the maximum intensity is lower than the previous case, and for ω_d < ω_ SL, no such peak is possible, and the dynamics exhibit a monotonic decay. The dipolar correlations between the different spins build up in a relatively shorter time than the decay of the collective `z' magnetization, which results in a radiative burst in the intermediate time. Such correlation in the system builds up due to the cross terms of the interaction between different pairs, whereas the collective `z' magnetization decays due to the second-order contribution of the self terms of the coupling. For example, in the case of an N spin system, considering the all-to-all coupling case, the possible number of pairs is M = N 2. The number of terms contributing to 𝒟_c is M 2, whereas only M terms are responsible for the decay of J_z. We note that, M 2≥ M for N>2. It signifies that 𝒟_c grows faster than the decay of J_z when N≥ 3. Therefore, at least three dipolar coupled spins are required to observe the radiative burst. The above analysis also helps us understand the geometry dependence of the system in the radiation burst. 
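The counting argument can be stated explicitly: for all-to-all coupling there are M = N(N-1)/2 dipolar pairs, of which M(M-1)/2 cross terms feed the correlation 𝒟_c, while only M self terms drive the decay of J_z. The short sketch below tabulates these numbers, together with the nearest-neighbour pair counts of the ring and chain geometries discussed next, and confirms that the cross terms match or exceed the self terms from N = 3 onward.

```python
from math import comb

def pair_counts(N):
    """Number of dipolar pairs and of cross terms between distinct pairs."""
    pairs_all_to_all = comb(N, 2)           # M = N(N-1)/2
    pairs_ring = N                           # circular chain, nearest neighbours
    pairs_chain = N - 1                      # linear chain, nearest neighbours
    cross_terms = comb(pairs_all_to_all, 2)  # terms feeding the correlation D_c
    return pairs_all_to_all, pairs_ring, pairs_chain, cross_terms

for N in range(2, 7):
    M, Mc, Ml, cross = pair_counts(N)
    # cross >= M holds from N = 3 onward: the correlation D_c can build up faster
    # than J_z decays only when at least three spins are dipolar coupled.
    print(f"N={N}: M={M}, ring={Mc}, chain={Ml}, cross terms={cross}, cross>=M: {cross >= M}")
```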
The number of pairs for the linear case is M_l=N-1, and for the circular case, it is M_c=N. In case of N ≥ 4, we have M> M_c >M_l. Therefore, in the case of a dipolar system consisting of a minimum of four spins, the radiation burst will be maximum when we consider all-to-all coupling between the spins in comparison to the linear and circular cases. This particular observation is in line with the recent work on Dicke superradiance by Masson <cit.>. The particle exchange symmetry is preserved for the all-to-all case when the coupling between each pair is equal. Such symmetry leads to the conservation of the total spin operator, ⟨𝐉^2 ⟩. Therefore, the all-to-all coupling case is much easier to handle in a collective basis, hence justifying our choice. The maximum intensity is also plotted as a function of atom number N, which shows an N^2 dependence. Therefore, for a higher number of atoms, such a radiative burst is more intense. Moreover, the dipolar relaxation time (τ_2) is proportional to 1/N^2, implying that the radiative burst becomes short-lived on increasing N. The short-lived and intense radiation bursts are the main features of the collective dissipation in the system. In case of increasing the number of atoms in the dipolar network, we intentionally keep the mean interaction strength the same, such that the system becomes dense. We note that for a dipolar network consisting of N spins, the average interaction strength between the spin pairs is given by ω_d ∝|ω_d_ij|/N, where ω_d_ij is the nearest neighbor interaction <cit.>. Therefore, when the number of atoms increases, the average interaction becomes weaker. Here, we keep it fixed, which is possible when the distance between the atoms is reduced. Such a dense configuration leads to a more intense and short-lived radiation burst, which matches with the experimental outcomes <cit.>. The dipolar relaxation rate, Γ(m) ∝τ_c/1 + (m ω_∘τ_c)^2, is plotted in Fig. <ref>(d). α (M), β(M), γ(M), and δ(M) contains Γ(m). For τ_c >1 / ω_∘, Γ(m) ∝ 1/ τ_c and, for τ_c < 1 / ω_∘, Γ(m) ∝τ_c. So, in the limit, τ_c <1 / ω_∘, we note that for lowering τ_c, the dipolar relaxation time becomes longer. Similarly, in the opposite limit, τ_c >1 / ω_∘, the relaxation time further increases for increasing τ_c. Hence for ω_∘τ_c ≈ 1, the dipolar relaxation time τ_2 becomes the shortest as α (M), β(M), γ(M), and δ(M) are maximum at that point. Here, we show that, for a dipolar coupled system, in the regime ω_d ≫ω_ SL, the lifetime of the radiation burst is minimum at τ_c ∝ 1 / ω_∘. A short-lived and intense superradiant emission can be used as a superradiance pulse laser <cit.>. In the case of a fixed Zeeman frequency, one can construct a superradiant laser pulse by choosing the fluctuation correlation time (τ_c) appropriately. Our recent work qualitatively predicts the creation of correlations in a system of dipolar interacting spins. While we have assumed the interaction between spin-pairs to be identical, for a real scenario, the spatial degrees of freedom of the spin-pairs are different and lead to different dynamical signatures. We note that the collective basis cannot be used for analysis in such cases, and therefore, one may face the dimensionality issue. An efficient numerical technique for analyzing such many-body open systems is still an open area of research <cit.>. § CONCLUSION We have presented a theoretical description of the collective dissipation that arises in a dipolar network in the presence of thermal fluctuations in the local environment. 
To this end, we use FRQME, which successfully predicts the second-order dissipation that comes from local interactions (e.g., dipolar interaction). We observe that the secular term does not affect the dynamics. On the other hand, the nonsecular pairs play a predominant role in the origin of collective behavior in the system. In the case of negligible system-bath coupling, a sudden, short-lived increment in the radiation intensity is observed as the dipolar correlation builds up for an initially inverted collection of spins; that is, a superradiant phase appears as an emergent behavior. As the system size is varied, the maximum peak of the radiation intensity curve scales as N^2, while its decay time scales as 1/N^2. We also show that the time scales of the radiation burst in our analysis are in good agreement with the experimental results of pure spin superradiance in NMR. § ACKNOWLEDGMENTS The authors thank Arnab Chakrabarti and Arpan Chatterjee for helpful suggestions and insightful comments. SS acknowledges the University Grants Commission for a research fellowship (Student ID: MAY2018- 528071). YI thanks IISER Kolkata for a project grant. § DERIVATION OF FRQME IN THE CONTEXT OF DISSIPATIVE DIPOLAR SYSTEM We note that FRQME was originally reported by Chakrabarti <cit.>. In the paragraphs below, we provide the key steps leading to the emergence of the master equation. The total Hamiltonian of the system + environment is given by, ℋ = ^∘+^∘ + + + (t) . A discussion about the individual components of the above Hamiltonian (Eq. (<ref>)) is presented in section <ref>. Initially, the system and environment are assumed to be uncorrelated, so ρ (t) = (t) ⊗. Here, is the equilibrium density matrix of the system. This approximation is known as the Markov approximation. In the interaction picture w.r.t ^∘+^∘, the von Neumann Liouville equation of the system + environment is given by, d ρ^I/dt = -i[H,ρ^I ] Here, ρ^I(t) is the full density matrix of the system in the interaction picture, and H (t) = ^I(t) + ^I (t)+ ^I(t). The solution of the above Eq. (<ref>), is given as, ρ^I(t + Δ t) - ρ^I(t) = -i ∫_t^t + Δ t dt_1 [ H(t_1), ρ(t_1)]. Here, ρ(t_1) =U(t_1,t) ρ(t) U^†(t_1,t), and U(t_1, t) is called the propagator, which has the form, U(t_1, t) = Texp[-i ∫_t^t_1 dt_2 H(t_2)] where T is the time-ordering operator. To find the dynamical equation of the system, we need to take a partial trace over the environmental operator, such that (t) = _ L{ρ^I(t)}. On taking the partial trace, the above dynamical equation can be written as, (t + Δ t) - (t) = -i ∫_t^t + Δ t dt_1 _ L[ (t_1), U(t_1,t)ρ(t)U^†(t_1,t)]. We note that, (t_1) = ^I(t_1) + ^I (t_1) and the commutation involving (t) goes to zero after taking the partial trace. Using the solution of the Schrodinger equation, we can write, U(t_1) = 1 - i∫_t^t_1 dt_2 ((t_2) + (t_2)) U(t_2) To construct the finite propagator, the following condition must be satisfied <cit.>. * The propagator contains only the first-order contribution of the perturbed Hamiltonian, (t). * It retains all order of the fluctuation Hamiltonian ℋ_L(t), but upto a suitable time-interval. Here we note that, [, (t)] ≠ 0. We follow a particular truncation scheme that relies on the Neumann series solution of the Schrodinger equation. i.e., U(t_1,t) = 1 -i ∫_t^t_1 dt_2 H(t_2) ( 1 -i∫_t^t_2 dt_3 H(t_3) ( 1 -i ∫_t^t_3 dt_4 H(t_4) ( .... Here, t_1>t_2>t_3>t_4...... Each integral in this series is smaller than the preceding one by | H |Δ t <1. 
A full solution involves painstakingly keeping track of all orders of (t) and (t), which is indeed shown in the seminal work by Feynmann <cit.>. However, we truncate using an ansatz that shows in Eq. (<ref>), U could be replaced by U_L (U_L(t_2) = Te^-i∫_t^t_2 dt_3 (t_3) dt_3) in the r.h.s on the ground that at t_2 instances has already been calculated and inclusion of from the expansion of U will introduce a term having third order |× ()^2|Δ t^3, whose effect is negligible. It is rather surprising that this truncation scheme provides a much closer result to experiments <cit.>. We note that, the truncation scheme is strictly applicable for ≠ 0, otherwise the dynamics can be trivially decomposed in two different Hilbert spaces (i.e., system and environment), and in such cases, no local interaction-induced dissipation can be observed. However, using the form of a full propagator in the analysis and finding its effect on the dynamics is a different open problem. The final form of the propagator is given by <cit.>, U(t_1) = 1 - i∫_t^t_1 dt_2 (t_2) U_L(t_2) - i∫_t^t_1 dt_2 (t_2) U_L(t_2) = U_L(t_1) - i∫_t^t_1 dt_2 (t_2) U_L(t_2) Using the form of the propagator (Eq. (<ref>)) in Eq. <ref>, we get, (t + Δ t) - (t) = -i ∫_t^t+ Δ t dt_1 _ L[(t_1), U_L(t_1,t)ρ(t)U_L^†(t_1,t)] -∫_t^t+ Δ t dt_1 ∫_t^t_1 dt_2 [(t_1), (t_2) U_L(t_2,t)ρ(t)U_L^†(t_1,t) - U_L(t_2,t)ρ(t)U_L^†(t_1,t) (t_2)] Next, an ensemble average over the fluctuations is taken on both sides of the Eq. (<ref>). Using the Cumulant expansion as given by Kubo <cit.>, we get, U_L(t_1,t)ρ(t)U_L^†(t_2,t) = (t) ⊗exp(- | t_1 - t_2 |/τ_c) We note that t is the initial time of the coarse-grained interval [t, t+Δ t] and the system and bath are uncorrelated only at the initial time t (i.e., Born approximation) <cit.>. Putting the above formula in Eq. (<ref>) and using the coarse-grained approximation as prescribed by Cohen-Tannoudji <cit.> [d /dt = (t + Δ t) - (t)/Δ t ], and further using the limit Δ t/τ_c→∞ (secular approximation), we get the final the final form of the FRQME, which is given by <cit.>, d/dt = -i _ L[(t),⊗]^ sec -∫^∞_0 dτ_ L[(t),[(t-τ),⊗]]^ sece^-τ/τ_c, Here, the superscript `sec' denotes the secular approximation. The presence of an exponential kernel (exp(-t/τ_c)) results in a finite second-order contribution of along with and therefore goes beyond the independent rate approximation. As consists of and , such a dynamical prescription successfully explains both the spin-spin (-) and spin-lattice (-) relaxation process using a single equation. We note that, acts at two different time-instances `t' and `t - τ'. In this interval, τ = | t - (t - τ) |, the bath dephases due to the fluctuations. The total evolution (full Hilbert space of the system and the environment) captures this dephasing, which results in the second-order dissipative effects of . The above dynamical equation (Eq. (<ref>)) can be reduced to the Gorini-Kossakowski-Sudarsan-Lindblad (GKSL) form, so here CPTP (complete positivity and trace preservation) holds <cit.>. § REFERENCES unsrt
Complex Image-Generative Diffusion Transformer for Audio Denoising
[ "Junhui Li", "Pu Wang", "Jialu Li", "Youshan Zhang" ]
§ ABSTRACT Audio denoising has captured widespread attention in the deep neural network field. Recently, the audio denoising problem has been converted into an image generation task, and deep learning-based approaches have been applied to tackle this problem. However, their performance is still limited, leaving room for further improvement. In order to enhance audio denoising performance, this paper introduces a complex image-generative diffusion transformer that captures more information from the complex Fourier domain. We explore a novel diffusion transformer by integrating the transformer with a diffusion model. Our proposed model demonstrates the scalability of the transformer and expands the receptive field of sparse attention using attention diffusion. Our work is among the first to utilize diffusion transformers to deal with the image generation task for audio denoising. Extensive experiments on two benchmark datasets demonstrate that our proposed model outperforms state-of-the-art methods. § INTRODUCTION Audio denoising is the process of estimating a better-quality audio signal from a noisy mixture by removing background noise. Removing background noise originating from many different sources while preserving high audio quality is what makes audio denoising challenging. Conventional audio denoising methods include Wiener filtering, spectral subtraction, minimum mean square error (MMSE) estimation <cit.>, etc. In extremely low SNR and non-stationary noise environments, these approaches are known to suffer a significant loss in performance. Deep learning has produced multiple new audio denoising techniques to tackle challenges in the domains of speech and audio. Deep audio denoising models may estimate and remove noise from noisy data to obtain denoised audio, or they may directly generate denoised audio using a regression technique <cit.>. Denoising diffusion probabilistic models (DDPM) have demonstrated great progress in generative tasks and are capable of generating high-quality and diverse images <cit.>. Although these models have largely been developed in their own domains, some researchers have attempted to apply DDPM to the field of audio denoising. Zhang and Li <cit.> developed a complex image generation SwinTransformer network model to generate high-quality complex images in the Fourier domain by converting audio denoising into an image generation problem. This method has competitive performance and has outperformed previous state-of-the-art methods, such as DCU-Net <cit.> and MANNER <cit.>. However, these methods are all based on classical UNet architectures, and their performance is still limited. Generative models include generative adversarial networks (GANs), variational autoencoders (VAEs), flow-based neural networks, and diffusion models. 
The GANs-based generative methods have been demonstrated to be efficient for speech enhancement <cit.>. The diffusion model has been successful as a typical deep generative model. Ho et al. <cit.> first introduced a class of latent variable models motivated by nonequilibrium thermodynamics. Then, diffusion probabilistic models were used to get high-quality image synthesis results using a relatively simple architecture and training procedure. Recently, diffusion models achieved more competitive results than GANs and have achieved impressive results in various applications such as text-to-image generation <cit.>, audio synthesis <cit.>, and video generation <cit.>. Most deep learning-based models for audio denoising focus on time-frequency domain (TF) methods. Due to the estimation difficulties, the majority of TF-domain approaches only accept magnitude as an input to real-valued parameter models and ignore complex-valued phases, which have an impact on performance. Although WaveNet, U-Net, or other convolutional designs serve as the basis for the majority of diffusion models, they are not scalable enough to model additional visual information. Hence, we explore a diffusion model based on a transformer with multiple inputs to capture more information and gain better complex image generation to estimate clean audio. To overcome the aforementioned challenges, we design a novel framework with attention diffusion on multiple input spectrograms, the complex image-generative diffusion transformer network (CIGDTN) model. We propose a complex image-generative Diffusion Transformer (CIGDT) module based on diffusion transformers (DiTs) with adaptive layer norm zero (adaLN-Zero) and sparse attention diffusion to capture more information. We also design a CIGDT block to inherit the excellent scaling properties of the transformer model and save computation costs. Furthermore, we deploy the CIGDT module to process the real and imaginary spectrograms, respectively, to make full use of the phase information in noisy audio. In addition, we apply FlashAttention-2, a novel attention algorithm with better work partitioning, to address low-occupancy or unnecessary shared memory reads and writes on the GPU <cit.>. Overall, our contributions are threefold: * We propose a complex diffusion transformer with multiple inputs that can generate high-quality denoised audio. * We present a complex image-generative diffusion transformer network (CIGDTN) model that fuses diffusion transformers with adaLN-Zero and sparse attention diffusion with FlashAttention-2 algorithm to gain better complex images. * Experimental results demonstrate state-of-the-art results on two benchmark datasets. § METHODOLOGY In audio denoising, a mixture of audio signal y in the time domain can be typically expressed as a linear sum of the clean speech signal and the additive noise signal: y=x+ε where x and ε denote clean audio and additive noise signal, respectively, a sequence of mixture signal and clean signal are defined as Y={y_i}_i=1^N and X={x_i}_i=1^N, where N is the total number of speech signals. Our goal is to extract a clean audio signal. Typically, each of the corresponding time-frequency (k, f) audio denoising operates in the time-frequency domain:Y_k,f=X_k,f+ϵ_k,f, where Y_k,f, X_k,f, ϵ_k,f is the STFT representation of the time domain signal y(t), x(t), ε(t) and k, f are the time frame index and frequency bins index. 
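For readers who want to reproduce the time-frequency formulation, the following sketch builds Y_{k,f}, X_{k,f}, and ε_{k,f} with a standard STFT and verifies the additive relation numerically. The window, FFT size, and hop length used here are illustrative placeholders and do not correspond to the settings reported in the experimental section.

```python
import torch

n_fft, hop = 512, 128                       # illustrative settings, not the paper's
window = torch.hann_window(n_fft)

def to_tf(x):
    """Complex STFT: waveform -> complex (frequency, frame) representation."""
    return torch.stft(x, n_fft=n_fft, hop_length=hop, window=window, return_complex=True)

clean = torch.randn(16000)                  # stand-in for a 1 s clean utterance x
noise = 0.3 * torch.randn(16000)            # stand-in for the additive noise eps
mixture = clean + noise                     # y = x + eps in the time domain

Y, X, E = to_tf(mixture), to_tf(clean), to_tf(noise)
# Linearity of the STFT gives Y_{k,f} = X_{k,f} + eps_{k,f} up to round-off.
print("max |Y - (X + E)| =", (Y - (X + E)).abs().max().item())
# The real and imaginary parts of Y are the two "complex image" channels.
print("TF shape:", tuple(Y.shape), "real/imag channels:", tuple(torch.view_as_real(Y).shape))
# Round trip back to the waveform domain with the inverse STFT.
recon = torch.istft(Y, n_fft=n_fft, hop_length=hop, window=window, length=mixture.numel())
print("ISTFT reconstruction error =", (recon - mixture).abs().max().item())
```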
§ CIGDTN MODEL To reduce noise and recover the speech signal, this study only concentrates on the denoising tasks in the Fourier domain. Additionally, we employ a complicated feature encoder trained end-to-end to enhance the information of various image bands rather than just using STFT features as input. We also restore the features to the time-frequency domain using a complex feature decoder. To accomplish this task, we developed a complex image-generative diffusion transformer network (CIGDTN) model to handle complex image inputs. The model architecture can be found in Fig. <ref>, which mainly consists of three main parts: a complex encoder, a CIGDT module, and a decoder. To make full use of the phase information in noisy audio, our model takes all the real and imaginary spectrograms as inputs. Given an input batch of tensor T^r, T^i ∈ℂ^N× C× H× W, where N and C is the number of samples in the batch and the channel size respectively; H and W represent the height and width of the image. §.§ Complex Encoder Complex-valued input audio is translated into a complex-valued representation by the complex encoder. The complex neural network has shown promising performance due to its effectiveness in processing complex-valued spectrograms. To start with, we transform the raw real-valued audio (y_1,...,y_n)∈ℝ^d× n into complex-valued tensor 𝐓_1,...,𝐓_n by STFT, an operation which decomposes a finite time sequence into a finite frequency sequence to generate a complex tensor representation for audio images. Given a complex-valued tensor T, we use our model to convert the input into a sequence of patches 𝐓=[T_1...T_N]∈ℝ^N× P^2 × C tokens where (P,P) is the patch size, N=HW/P^2 is the number of patches. The number of tokens T created by patchify is determined by the patch size hyperparameter p. Following patchify, we apply specific position embeddings Pos=[Pos_1,..., Pos_N]∈ℝ^N× D to all input tokens to retain positional information as follows: 𝒵=[T_1E;T_2E,⋯;T_NE]+E_Pos where E∈ℝ^(P^2· C)× D is the patch embedding projection, and E_Pos∈ℝ^N× D denotes the position embedding. §.§ CIGDT Block Following the encoder, the input tokens are processed by a sequence of transformer blocks. The encoder consists of four CIGDT blocks. The key design feature of the CIGD transformer is to design a DiT with adaLN-Zero and sparse attention with Attention Diffusion. We replace standard layer norm layers in transformer blocks with adaptive layer norm (adaLN). Rather than directly learning dimensionwise scale and shift parameters γ and β, we regress them from the sum of the embedding vectors of t and c. We also use a similar initialization strategy as diffusion U-Net models, zero-initializing the final convolutional layer in each block prior to any residual connections. To further broaden the receptive field of sparse attention, attention diffusion is used. This technique computes multi-hop token correlations based on all pathways between corresponding disconnected tokens in addition to the attention between surrounding tokens. An attention layer, a feed-forward layer with a LayerNorm layer, a two-layer MLP, and GELU nonlinearity are components of every CIGDT block. A scale and shift layer is located behind the LayerNorm layer. Additionally, we add regression dimensionwise scaling parameters α, which are applied immediately prior to any residual connections within the CIGDT block. 
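A minimal sketch of the block just described is given below: LayerNorm layers without affine parameters, attention and MLP sub-layers, and shift/scale/gate parameters (β, γ, α) regressed from the conditioning embedding, with the regression layer zero-initialized so that each block starts out as the identity. The module and tensor names, dimensions, and the use of a generic multi-head attention layer are our own assumptions for illustration; this is a sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaLNZeroBlock(nn.Module):
    """Transformer block with adaLN-Zero conditioning (illustrative sketch)."""
    def __init__(self, dim, heads, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))
        # Regress shift/scale/gate (beta, gamma, alpha) for both sub-layers from the
        # conditioning vector (e.g. diffusion-step embedding); zero-init so the block
        # starts as the identity ("-Zero" initialization).
        self.ada = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))
        nn.init.zeros_(self.ada[-1].weight)
        nn.init.zeros_(self.ada[-1].bias)

    def forward(self, x, cond):
        b1, g1, a1, b2, g2, a2 = self.ada(cond).unsqueeze(1).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + g1) + b1                      # scale and shift
        x = x + a1 * self.attn(h, h, h, need_weights=False)[0]  # gated residual
        h = self.norm2(x) * (1 + g2) + b2
        x = x + a2 * self.mlp(h)
        return x

tokens = torch.randn(2, 196, 256)   # (batch, patches, embedding dim) - illustrative
cond = torch.randn(2, 256)          # timestep/conditioning embedding
print(AdaLNZeroBlock(dim=256, heads=8)(tokens, cond).shape)
```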
With such a transformer architecture and adaLN scheme, consecutive CIGDT blocks can be formulated as: ℋ̂^l=ℂMSA(γℂLN(ℋ^l-1)+β)+ℋ^l-1 ℋ^l=ℂMLP(γℂLN(α̂ℋ̂^l)+β)+ℋ̂^l where ℋ̂^l and ℋ^l denote the output features of the MSA module and the MLP module for block L_i, respectively. γ, and β denote demension-wise scale and shift parameters for LayerNorm layer. α denotes dimensionwise scaling parameters that are applied immediately prior to any residual connections. Sparse Attention Diffusion. By combining token correlations that are multiple hops away, sparse attention diffusion was developed. Especially, sparse patterns consider a combination of local window attention, global attention, and random attention to capture token interactions without quadratic complexity dependency on the sequence length. The attention matrix A is first used to characterize the interaction strength between neighboring nodes on the graph G, i.e., A_i,j=exp(Q_iK_j)/√(d)/∑_j∈ Ne(i)exp(Q_iK_j)/√(d). Sparse attention diffusion utilizes the attention diffusion process to calculate the multi-hop token relationships on the attention graph based on attention weights on edges. The entries of the graph diffusion matrix 𝒜 are calculated to get the multi-hop attention scores: 𝒜=∑^∞_s=0δ_kA^k, where A is the calculated sparse attention matrix, and the weighting coefficient δ_k satisfies ∑^∞_s=0δ_k=1, δ_k∈[0,1]. The sparse attention pattern's original receptive fields will gradually enlarge as k increases. All paths between tokens i and j are included in the resulting attention score A_i,j and weighted by the coefficient δ_k. Next, We multiply the diffusion attention matrix A by each value vector V as Eq.(<ref>). Attention(Q,K,V)=softmax(QK^T/√(d_k))V=AV where d_k denotes the dimension of K. Even when the sparsity is taken into account, computing the power of attention matrices can be unavoidably expensive for long sequences. To efficiently combine the diffusion mechanism with transformers, we implement the graph diffusion process as the personalized pagerank (PPR) by specifying δ_k=α(1-α)^k with teleport probability α. The resulting diffusion matrix 𝒜=∑^∞_k=0α(1-α)^kA^k is the power expansion of the solution to the recursive equation 𝒜=α I+(1-α)𝒜A. Each power diffusion step is calculated as Z_0=V=XW_v, Z_k+1=(1-α)AZ_k+α V, for 0≤ k<K. Z_k is the output of the attention diffusion process and will converge to the real output 𝒜V as K→∞. §.§ Decoder After the final CIGDT block, we need to decode our sequence of generated audio image tokens into an output-denoised image prediction. Both of these output images have shapes that are equal to the original spatial input. We apply the final layer norm (adaptive if using adaLN) and linearly decode each token into a p× p× 2C tensor, where C = 1 is the number of channels in the spatial input to CIGDT module. The term “channel“ is often used in the field of image processing, and a channel number of 1 usually indicates a grayscale image. Finally, we rearrange the decoded tokens into their original spatial layout to get the predicted denoised image. After getting the output from the decoder layers in the CIGDTN model, we could apply ISTFT to get the reconstructed audio as Ŷ. The overall training algorithm is shown in Alg. <ref>. §.§ Objective Function In this study, our model processes real- and imaginary-image streams to extract more audio features. Then, the estimated output is reconstructed by ISTFT. 
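Returning to the sparse-attention-diffusion step described above, the following sketch implements the truncated personalized-PageRank recursion Z_{k+1} = (1-α)A Z_k + αV on top of a masked attention matrix. A simple banded mask stands in for the local/global/random sparse pattern, and all shapes as well as the teleport probability are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attention_diffusion(Q, K, V, alpha=0.15, K_steps=4, mask=None):
    """Truncated personalized-PageRank attention diffusion (sketch).

    A is the (optionally masked) attention matrix; the loop approximates the
    diffusion output sum_k alpha*(1-alpha)^k * A^k V without forming A^k.
    """
    scores = Q @ K.transpose(-2, -1) / K.shape[-1] ** 0.5
    if mask is not None:                       # sparse pattern: disallowed entries -> -inf
        scores = scores.masked_fill(~mask, float("-inf"))
    A = F.softmax(scores, dim=-1)
    Z = V
    for _ in range(K_steps):
        Z = (1 - alpha) * A @ Z + alpha * V    # Z_{k+1} = (1-alpha) A Z_k + alpha V
    return Z

# Toy shapes: (batch, tokens, head dim); a banded mask stands in for local attention.
B, T, D = 2, 64, 32
Q, K, V = (torch.randn(B, T, D) for _ in range(3))
band = (torch.arange(T)[:, None] - torch.arange(T)[None, :]).abs() <= 4
out = attention_diffusion(Q, K, V, mask=band.expand(B, T, T))
print(out.shape)
```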
Therefore, our loss function consists of image loss and SDR loss to fully utilize different feature information. The image loss is based on the energy-conserving loss function proposed, which simultaneously considers clean audio complex images and noisy audio complex images. We first apply L_1 loss to minimize the difference between the generated images and the ground truth image Loss_im,L_1=| y-ŷ |_1 Eventually, the image loss consists of three parts and is defined as follows: Loss^total_L_1,im=Loss^real_im,L_1+Loss^imag_im,L_1+|ε-ε̂ |_1, where y and ŷ are the samples of the clean audio complex images and the enhanced audio complex images, respectively. ε represents the additive noise signal given a mixture of the audio signal and ε̂=x-ŷ represents the estimated noise and |·| denotes the L_1 norm. For the reconstructed audio signal, we also first apply L_1 loss to minimize the difference between the reconstructed audio and the ground truth audio as follows: Loss_R,L_1=|Y-Ŷ |_1 To properly balance the contribution of these two loss terms and to address the scale insensitivity problem, we weigh each term proportionally to the energy of each utterance. The final form of the loss function is as follows: Loss^total=α Loss^total_L_1,im+(1-α) Loss_R,L_1 § EXPERIMENT We evaluated the proposed CIGDT with two audio datasets, VoiceBank+DEMAND and Birdnoisesound dataset. The model was trained for 100 iterations on a single NVIDIA 3060 GPU. We train all models with AdamW. We use a constant learning rate of 1× 10^-4, no weight decay, and a batch size of 8. In order to address low-occupancy or unnecessary shared memory reads and writes on the GPU, we employ the FlashAttention-2 algorithm. In order to convert audio signals into audio images, we used the STFT and a 500-point Hamming window function with a Fourier transform of nfft=513. Each audio's length can be different. Therefore, we set the distance between neighboring sliding window frames to be hop_length=int(length(x_t)/256), where length(x_t) is the length of each audio. The input image dimensions are then resized as [256 × 256 × 1]. §.§ Datasets VoiceBank+DEMAND is a synthetic dataset created by mixing clean speech and noise <cit.>. The training set contains 11572 utterances (9.4h), and the test set contains 824 utterances (0.6h). The lengths of utterances range from 1.1s to 15.1s, with an average of 2.9s. BirdSoundsDenoising was randomly split into the training set (10000 samples), validation set (1400 samples), and test set (2720 samples) <cit.>. Unlike many audio-denoising datasets, which have manually added artificial noise, these datasets contain many natural noises, including wind, waterfall, rain, etc. §.§ Evaluation Metrics For evaluation on the Birdsoundsdenoising dataset, we use signal-to-distortion ratio (SDR) to evaluate different models. We assess the proposed audio denoising model on the VoiceBank+DEMAND dataset using a variety of objective metrics: perceptual evaluation of speech quality (PESQ, higher is better) with a score range from -0.5 to 4.5; short-time objective intelligibility (STOI, higher is better) with a score range from 0 to 1. We also adopt subject mean opinion scores (MOSs; higher is better), such as CSIG for evaluating signal distortion, CBAK for evaluating noise distortion, and COVL for evaluating overall quality. §.§ Result Table <ref> shows the comparison results of our proposed model and SOTA baselines on the VoiceBank+DEMAND dataset. 
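To make the objective described above concrete, the sketch below combines the image-domain L_1 terms on the generated real and imaginary spectrogram channels with the L_1 term on the ISTFT-reconstructed waveform, weighted as α·L_im + (1-α)·L_R. The additional noise-consistency term |ε - ε̂|_1 of the full objective is omitted here for brevity, and the value of α is a placeholder rather than the weighting actually used in training.

```python
import torch
import torch.nn.functional as F

def cigdtn_loss(real_hat, real_gt, imag_hat, imag_gt, wave_hat, wave_gt, alpha=0.8):
    """Composite objective (sketch): alpha * L1(images) + (1 - alpha) * L1(waveform).
    The paper's extra noise term |eps - eps_hat|_1 is omitted; alpha is illustrative."""
    loss_im = F.l1_loss(real_hat, real_gt) + F.l1_loss(imag_hat, imag_gt)
    loss_wave = F.l1_loss(wave_hat, wave_gt)
    return alpha * loss_im + (1 - alpha) * loss_wave

# Toy tensors standing in for generated/ground-truth complex-image channels and audio.
shape_img, shape_wav = (2, 1, 256, 256), (2, 16000)
r_hat, r_gt = torch.randn(shape_img), torch.randn(shape_img)
i_hat, i_gt = torch.randn(shape_img), torch.randn(shape_img)
w_hat, w_gt = torch.randn(shape_wav), torch.randn(shape_wav)
print(cigdtn_loss(r_hat, r_gt, i_hat, i_gt, w_hat, w_gt).item())
```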
As we can see, CIGDTN surpasses most waveform-based approaches currently in use in all five metrics and performs as well as other methods with large model configurations while employing fewer parameters. For the BirdSoundsDenoising dataset, we report the performance of our CIGDTN model and ten state-of-the-art baselines. The results are shown in Table <ref>, where the bold text indicates the best outcomes. The results demonstrate that our model outperforms other state-of-the-art methods in terms of SDR. Results of F1, IoU, and Dice are omitted since these metrics are used for the audio image segmentation task <cit.>. As a consequence, these benchmarks confirm that our method for audio denoising is effective, and our model enhances the audio-denoising performance of both VoiceBank+DEMAND and BirdSoundDenoising datasets. § CONCLUSION In this paper, we present a complex image-generative diffusion transformer network (CIGDTN) model for audio denoising. CIGDTN explores a new class of diffusion models based on transformer architecture with multiple inputs to achieve better complex image generation and audio denoising. In a CIGDT block, diffusion transformers were improved with sparse attention. We also modified the transformer model using the FlashAttention-2 algorithm, which can compute attention with a great deal fewer memory accesses. Extensive experiments on two benchmark datasets demonstrate the superiority of the proposed CIGDTN architecture in audio-denoising tasks. IEEEtran
Towards a unified description of isotopic fragment properties in spontaneous and fusion-induced fission within a 4D dynamical Langevin model
[ "K. Pomorski", "B. Nerlo-Pomorska", "J. Bartel", "C. Schmitt", "Z. G. Xiao", "Y. J. Chen", "L. L. Liu" ]
pomorski@kft.umcs.lublin.pl Maria Curie Skłodowska University, Department of Theoretical Physics, 20031 Lublin, Poland Maria Curie Skłodowska University, Department of Theoretical Physics, 20031 Lublin, Poland IPHC/DRS and University of Strasbourg, 67200 Strasbourg, France IPHC/DRS and University of Strasbourg, 67200 Strasbourg, France xiaozg@mail.tsinghua.edu.cn Department of Physics, Tsinghua University, Beijing 100084, China China Institute of Atomic Energy, Beijing 102413, China China Institute of Atomic Energy, Beijing 102413, China 24.75.+i, 25.85.-w,28.41.A § ABSTRACT Spontaneous fission of ^252Cf and fusion-induced fission of ^250Cf are investigated within a multi-dimensional Langevin model. The potential-energy surface is calculated in the macroscopic-microscopic LSD+Yukawa-folded approach using the four-dimensional Fourier-over-Spheroid shape parametrization. The dynamical evolution described by the Langevin equation is coupled to neutron evaporation, thereby allowing for the possibility of multi-chance fission. Charge equilibration and excitation-energy sharing between the fragments emerging at scission are evaluated, and their de-excitation is finally computed. The correlation between various observables, particularly the isotopic properties of the fragments, is discussed and compared with the experiment whenever available. The theoretical predictions are generally in good agreement with the data. KEYWORDS: nuclear fission, macro-micro model, fission fragment isotopic and TKE yields, pre- and post-scission neutron multiplicities Towards a unified description of isotopic fragment properties in spontaneous and fusion-induced fission within a 4D dynamical Langevin model L. L. Liu June 17, 2024 ============================================================================================================================================ § INTRODUCTION The nuclear fission phenomenon, discovered in 1938, continues to be of primary interest in nuclear physics both from the fundamental and applications point of view. In this context, accurately reproducing the mass, charge, isotopic, and total kinetic energy (TKE) yields of fission fragments and the multiplicities of emitted neutrons is a stringent test of any modern theoretical model. A representative selection of contemporary models of various types developed by different groups can be found in Refs. <cit.>. For an overall picture of modern fission theories and perspectives, we refer to recent reviews in Refs. <cit.>. The present investigation is a continuation of our previous studies <cit.> in which fragment mass yields for fission at low excitation energy were investigated in a wide range of fissioning systems from pre-actinides to trans-actinides. For some specific actinides, TKE yields were also studied <cit.>. We recently substantially extended these investigations in Refs. <cit.>. In particular, a model of charge equilibration of the fragments at scission was introduced, allowing us to go beyond the widespread Unchanged-Charge-Density (UCD) assumption. In addition, the Langevin equation was coupled to a Master-type equation for modeling the possible emission of neutrons from the excited fissioning system prior to scission and from the primary fragments after scission. As for the latter, a simple prescription for sharing the excitation energy between the fragments at scission was implemented. 
In our most recent calculations <cit.>, the nuclear shape description is based on the so-called Fourier-over-Spheroid (FoS) parametrization, which is an innovative variant of the original Fourier shape parametrization presented in <cit.>. As discussed in Ref. <cit.>, the FoS parametrization is better adapted to fission calculations on a large grid. It is to be emphasized that the extensions <cit.> of our original model are mandatory for any meaningful calculation of fragment (A, Z) isotopic yields. This new approach offers the possibility to study fission in detail, as illustrated in recent experimental campaigns <cit.>. In the present study, we use the advanced version of our model <cit.> to address the fission of two californium isotopes in two excitation-energy regimes. In particular, we consider spontaneous fission of ^252Cf, and fission of ^250Cf at an excitation energy E^* of 46 MeV induced by the fusion reaction of a ^238U beam on a ^12C target. Experimental isotopic yields for both systems are available from Refs. <cit.> and Refs. <cit.>, respectively. Comparison with these data allows us to evaluate our theoretical model's performance over a wide range of excitation energies (our previous studies have focused on low-energy fission). Such a study will allow for a strict test of the assumed evolution of various quantities with temperature. The main features of the model, which are important for an understanding of the present study, are briefly recalled in Section II, while we refer to Refs. <cit.> for further details and parameters. Sections III and IV present the calculated results for spontaneous fission of ^252Cf and fusion-induced fission of ^250Cf at excitation energy E^* = 46 MeV. Summary and concluding remarks are given in Section V. § MODEL §.§ Shape parametrization and the potential-energy surfaces The model used in our present study is the same as in our previous investigation <cit.> on thermal neutron-induced fission of ^235U. That is why only its main ingredients are briefly listed below. Using what we call the Fourier-over-Spheroid shape parametrization developed in Ref. <cit.>, the surface of a deformed nucleus is described in cylindrical coordinates (ρ,φ,z) by the following formula: ρ_s^2(z,φ) = (R_0^2/c) f((z-z_sh)/z_0) (1-η^2)/(1+η^2+2ηcos 2φ) . Here ρ_s(z,φ) is the distance of a surface point to the z-axis. The function f(u) defines the shape profile of a nucleus of unit half-length: f(u) = 1-u^2-∑_k=1^n {a_2k cos[(k-1/2)π u] + a_2k+1 sin(kπ u)} , with u = (z-z_sh)/z_0 , where z_0=c R_0, with R_0 being the radius of the corresponding sphere, is the half-length of the deformed nucleus, and the shift parameter z_sh = -3/(4π) z_0 (a_3-a_5/2+…) ensures that the origin of the coordinate system is located at the center of mass of the nucleus, so that -1 ≤ u ≤ 1. The expansion coefficients a_i are treated as the deformation parameters. The first two terms in f(u) describe a sphere. The volume conservation condition implies a_2=a_4/3-a_6/5+…. The parameter c determines the elongation of the nucleus, keeping its volume fixed, while a_3 and a_4 are, respectively, the deformation parameters essentially responsible for the reflection asymmetry and the neck formation of the deformed shape. The parameter η in Eq. (<ref>) allows for a possible non-axial deformation of the nucleus. Equation (<ref>) is entirely equivalent to the one based on the original Fourier expansion of Ref. 
<cit.> but is easier to handle in the case of fission because, in the present case, and contrary to the original definition, the range of variability of the a_i coefficients does not depend on the elongation c. In addition, the mass ratio of the fragments, their relative distance, and the radius of the neck between them, measured in z_0 units, do not depend on the elongation of the nucleus. In addition, the heavy fragment mass-number A_ h is nearly a linear function of the a_3 deformation: A_ h≈ (1+a_3)A 2 at the scission configuration (a_4≈ 0.72). One has also to note that for the reflection-symmetric shapes (a_3=0), the geometrical scission point occurs when a_4=a_4^ sc=3 4+6 5a_6… independently of the elongation c. The potential energy surfaces (PES) of fissioning nuclei are then obtained in the 4D space of deformation parameters (c,a_3,a_4,η) using the macroscopic-microscopic (macro-micro) model <cit.>. The macroscopic part of the energy is evaluated according to the Lublin-Strasbourg-Drop (LSD) formula <cit.>, while the microscopic energy corrections are calculated using the Yukawa-folded single-particle potential <cit.> and the Strutinsky shell correction method <cit.>. The pairing correlations are described using the BCS formalism with an approximative projection on good particle number <cit.>. All parameters of the macro-micro model used in the present study are the same as in Ref. <cit.>. Please recall here that due to energy-dissipation effects, even spontaneously fissioning nuclei get excited near the scission configuration. The resulting temperature effect of atomic nuclei is even more crucial in the case of neutron-induced fission or the fission of compound nuclei formed in heavy-ion collisions. It would not be easy to evaluate the PES with changing temperature T on the way to the scission configuration. Therefore, we do it approximately in the following way. In the macro-micro model, one generally assumes that the total potential energy V_ tot=V_ mac+ V_ mic is the sum of the macroscopic V_ mac and microscopic V_ mic parts. The macroscopic part of the potential energy grows parabolically with increasing temperature (refer to, e.g., Ref. <cit.>), while the amplitude of the microscopic energy correction decreases. Following the estimates made in Ref. <cit.> we have assumed that the microscopic part of the potential energy varies with temperature T according to the following phenomenological relation <cit.>: V_ mic(q⃗,T)≈V_ mic(q⃗,T=0) 1+exp((T-1.5)/0.3) , where the temperature T is in MeV units and q⃗ stands for the {c,a_3,a_4,η} deformation. §.§ Dynamical evolution In our approach, the dissipative fission dynamics is described by the Langevin equation. In the generalized coordinates ({q_i},  i=1,2,...,n) it has the following form <cit.>: [ dq_i dt = ∑_j [ M^-1(q⃗ )]_i j p_j; dp_i dt= - 1 2∑_j,k ∂[ M^-1]_jk∂ q_i p_j p_k -∂ V(q⃗)∂ q_i; - ∑_j,kγ_ij(q⃗) [ M^-1]_jk p_k + F_i(t) , ] Here V(q⃗ )=E_ pot(q⃗ )-a(q⃗ )T^2 is the Helmholtz free-energy of the fissioning nucleus with temperature T and a(q⃗ ) is the single-particle level density parameter. The potential energy E_ pot(q⃗ ) at a given deformation q⃗ is obtained by the macro-micro prescription as stated above. The parameter a(q⃗ ) is, according to Ref. <cit.>, a deformation-depending function. The inertia and friction tensors M_jk and γ_ij are respectively evaluated in the irrotational flow and the wall approximation, as described in Refs. <cit.>. 
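To illustrate the FoS parametrization introduced at the beginning of this section, the sketch below evaluates the axially symmetric (η = 0) profile ρ_s(z) for a given set of deformation parameters, with a_2 fixed by the volume-conservation relation and the centre-of-mass shift z_sh neglected for simplicity. The deformation values are illustrative and are not taken from the calculated potential-energy surfaces.

```python
import numpy as np

def fos_profile(c, a3, a4, a6=0.0, n_z=400):
    """Axially symmetric FoS profile rho_s(z) in units of R0 (sketch, eta = 0, z_sh = 0).

    f(u) = 1 - u^2 - sum_k [ a_{2k} cos((k-1/2) pi u) + a_{2k+1} sin(k pi u) ],
    with a2 fixed by volume conservation, a2 = a4/3 - a6/5 (higher terms dropped)."""
    a2 = a4 / 3.0 - a6 / 5.0
    u = np.linspace(-1.0, 1.0, n_z)
    f = (1.0 - u**2
         - a2 * np.cos(0.5 * np.pi * u) - a3 * np.sin(np.pi * u)
         - a4 * np.cos(1.5 * np.pi * u) - a6 * np.cos(2.5 * np.pi * u))
    rho2 = np.clip(f, 0.0, None) / c      # rho_s^2(z) / R0^2
    z = c * u                              # z in units of R0 (half-length z0 = c R0)
    return z, np.sqrt(rho2)

# Illustrative mass-asymmetric, necked-in shape approaching scission.
z, rho = fos_profile(c=2.2, a3=0.15, a4=0.6)
print(f"half-length z0 = {z.max():.2f} R0, neck radius ~ {rho[len(rho)//2]:.2f} R0")
# Heavy-fragment mass fraction is roughly (1 + a3)/2 of the total (cf. text above).
print(f"A_h/A ~ {(1 + 0.15) / 2:.3f}")
```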
The vector F⃗(t) stands for the random Langevin force, which couples the collective dynamics to the intrinsic degrees of freedom and is defined as: F_i(t) =∑_j g_ij(q⃗ ) G_j(t) , where G⃗(t) is a stochastic function whose strength g(q⃗ ) is given by the diffusion tensor D(q⃗ ) defined by the generalized Einstein relation: D_ij=T^*γ_ij=∑_k g_ik g_jk , with T^*=E_0/ tanh(E_0 T) , The vector function G⃗(t) takes into account both statistical and collective fluctuations <cit.>. In the following, we have taken E_0=3× 0.5 MeV, assuming that each collective mode contributes 0.5MeV to the zero-point energy. The temperature T is obtained from the thermal excitation energy E^* defined as the difference between the initial energy E_ init and the final energy, which is the sum of kinetic (E_ kin) and potential (V) energies of the fissioning nucleus at the present deformation (q⃗) and the sum of the binding and the kinetic energies of emitted particles (E_ part) a(q⃗ )T^2=E^*(q⃗ )=E_ init-[E_ kin(q⃗ )+V(q⃗ ) +E_ part] . The initial conditions of the dynamical calculation correspond to the excited compound system in the vicinity of the outer saddle point, e.g., for ^252Cf: c≈ 1.6, a_3≈ 0.15, a_4≈ 0.12, η=0. We assume that scission takes place when the neck parameter a_4 is equal to 0.72 since this value corresponds to a neck radius approximately equal to the nucleon radius r_ neck=r_0=1,217fm. Non-axiality was found to be significant only at small elongations before reaching the outer saddle (c ≈ 1.6 for the systems considered here), consistent with what had been found in the past within various approaches <cit.>. At larger deformations, its influence is negligible. Moreover, the role of higher-order Fourier expansion coefficients a_5 and a_6 in Eq. (<ref>) is small even in the region of well-separated fission fragments, as shown in Ref. <cit.>. Consequently, we restrict the Langevin calculations to the 3D (c, a_3, a_4) deformation space when discussing fission dynamics. Using the above formalism and procedure, we have performed extended dynamical calculations, including around 10^5 fissioning Langevin trajectories, from which we extracted the predictions of the model for various observables such as the fission fragment mass, charge, or kinetic energy distributions. Please note that we have used the same set of parameters as the one employed in our previous study <cit.> in which neutron-induced fission of ^235U and bimodal fission of Fermium isotopes were discussed. The mass of the heavy (A_ h, q⃗_ h) and the light fragments (A_l, q⃗_l) are proportional to the volumes of the daughter nuclei at the scission point, which defines the end of each Langevin trajectory. Knowing the fragment deformations at scission q⃗_l and q⃗_ h, it is possible to find the most probable charge for each isobar by analyzing the energy of the system at scission as a function of the charge number Z_ h of the heavy fragment: [ E(Z_ h; Z,A,A_ h,q⃗_ h,q⃗_l) =E_ LSD(Z-Z_ h,A-A_ h);q⃗_l); + E_ LSD(Z_ h,A_ h;q⃗_ h) +E_ Coul^ rep-E_ LSD(Z,A;0) , ] where A_ h is the heavy fragment mass number and the fragment Coulomb repulsion energy E_ Coul^ rep is given by [ E_ Coul^ rep =3e^2 5r_0[Z^2 A^1/3B_ Coul(q⃗_ sc).; .-Z_ h^2 A_ h^1/3 B_ Coul(q⃗_ h) -Z_l^2 A_l^1/3 B_ Coul(q⃗_l)] . ] Here, r_0=1.217 fm and the Coulomb shape function B_ Coul is the same as in the LSD mass formula <cit.>. The distribution of the heavy-fragment charge number can be estimated using a Wigner function corresponding to the energy E obtained with the help of Eq. 
(<ref>) for different values of Z_h (refer to Ref. <cit.> for more details): W(Z_h) = exp{ -[E(Z_h) - E_min]^2 / E_W^2 } . This function gives the probability distribution of the fragment charge. The energy E_min in Eq. (<ref>) is the lowest discrete energy in (<ref>) as a function of Z_h. A random number <cit.> is then drawn to determine the charge number Z_h of the heavy fragment, while the charge number of the light fragment is Z_l = Z - Z_h. The energy E_W should be chosen comparable with the energy distance ħω_0 between harmonic-oscillator shells, since we are dealing here with a single-particle (proton-neutron) transfer between the touching fragments due to the charge equilibration. In the following we have assumed E_W = 0.5 ħω_0. The above charge-equilibration effect must be considered at the end of each Langevin trajectory, when one fixes the fission fragments' integer mass and charge numbers. The fission fragment TKE is given by the sum of the Coulomb repulsion energy (<ref>) of the fragments and their pre-fission kinetic energy (E_kin^rel) of relative motion: TKE = E_Coul^rep + E_kin^rel . This expression gives, without any doubt, a more accurate estimate of the fission-fragment kinetic energy than the frequently used point-charge approximation: TKE = e^2 Z_h Z_l / R_12, where R_12 is the distance between the fragment mass centers. §.§ Neutron evaporation Thermally excited heavy nuclei de-excite by emitting light particles, like neutrons, protons, or α-particles. At relatively low excitation energies (E^* < 80 MeV), only neutron evaporation takes place, while the emission of a proton or α-particle is unlikely <cit.>. Emission of high-energy γ-rays in competition with neutron evaporation is rare and is therefore neglected in the present study. At the end of the de-excitation chain, below the neutron separation energy, the remaining excitation energy and angular momentum are exhausted by low-energy γ-ray emission. This latter stage of the decay process is not included in the model, since it does not affect the observables of interest in this work. The modeling of neutron emission from the excited compound system on its way to scission is taken from a Weisskopf-like model described in Refs. <cit.>. The prescription for the de-excitation of the excited fragments emerging at scission (hereafter called the primary fragments) has been described in detail in Sec. II-D of Ref. <cit.> and is therefore not repeated here. § SPONTANEOUS FISSION YIELDS OF ^252CF The 4D PES of the spontaneously fissioning nucleus ^252Cf is evaluated within the macro-micro model, as described in the previous section. The (c, a_4) and (c, A_h) cross-sections of the PES of ^252Cf, after suitable minimization, are presented in Fig. <ref>. The top panel shows the PES projection onto the (c, a_4) plane, i.e., each energy point in the (c, a_4) map is obtained by minimization with respect to the non-axiality and reflection-asymmetry deformation parameters η and a_3, respectively. The ground-state minimum (g.s.) is found at an elongation c = 1.14 and a_4 = 0.01, while the exit point (after tunneling through the fission barrier), found at c ≈ 1.6 and a_4 ≈ 0.2, is marked by a red point. The asymmetric fission valley ends at an elongation c ≈ 2.2 and the symmetric one at c ≈ 2.8. The PES projection shown in the bottom panel corresponds roughly to the scission point (r_neck ≃ r_n), as noted above.
From both cross sections, it can be deduced that the close-to-scission configuration of the asymmetric valley corresponds to the minimum at A_h ≈ 150 and c = 2.2. In comparison, the end of the symmetric valley is found at A_h ≈ 126 and c = 2.8. As expected, asymmetric fission of ^252Cf leads to a more compact scission configuration than the more elongated one found for a symmetric splitting. The primary fission fragment mass yield obtained in our model is compared in Fig. <ref> with the experimental data from Ref. <cit.>. The theoretical yields are found to be shifted by a few mass units with respect to the data. Additionally, the probability of symmetric fission is slightly overestimated. The TKE averaged over all trajectories for each specific fragment pair is shown in Fig. <ref> as a function of the primary fragment neutron and proton numbers. It is seen that the neutron-rich isotopes have, in general, larger TKEs, which means that they correspond to smaller elongations of the fissioning system in the scission configuration. A similar map, but for the multiplicity of the neutrons emitted by the fragments, is presented in Fig. <ref>. It is found that the symmetric fragments emit, on average, less than one neutron, while the most probable mass-asymmetric fragments evaporate around three neutrons or more. All fission fragments are predicted in our approach to be located below the β-stability line marked in Fig. <ref> and thus correspond to relatively neutron-rich isotopes, as known for fission. The calculated (black points in Fig. <ref>) secondary (i.e., after neutron evaporation) fragment isotopic yields are compared, from Ga to Dy, with the data (red stars) taken from the ENDF/B-VII.0 and JEFF-3.3 libraries, Refs. <cit.>, and the very recent data (blue crosses) from Ref. <cit.>, obtained via mass measurement at the FRS ion catcher for a large range of neutron numbers. The overall agreement of our estimates with the data is quite satisfactory, especially when one considers that none of the model parameters was fitted to these data. The fission yields predicted for the Sn, Te, and Xe isotopes are slightly underestimated as compared with the experimental data of Ref. <cit.>. § FISSION YIELDS OF ^250CF AT E^*=46 MEV The development of the model of Ref. <cit.> was initially motivated by the wealth of experimental data available for low-energy fission, and the importance of this energy regime in various applications. However, the energy dependence of the model transport parameters was included already in Ref. <cit.>, as well as the possibility of pre-scission evaporation, i.e., multi-chance fission. As noted above, the model was tested only for low-energy fission in our previous work. In the present section, we extend its application to fission at high excitation energy. Such an investigation may serve as a stringent test of the temperature dependence of the microscopic energy correction and of the transport parameters, like the inertia, friction, and diffusion tensors. The fission fragment mass, charge, and isotopic yields for ^250Cf produced at E^* ≈ 46 MeV in ^238U + ^12C collisions were studied experimentally in detail in Refs. <cit.>. The most probable angular momentum of ^250Cf is found to be around L = 20 ħ, as one can see in Fig. <ref>, in which the theoretical estimate of the fusion cross-section obtained within a Langevin-type calculation <cit.> is presented as a function of L. The excitation energy in the experiment above corresponds to a temperature of the ^250Cf nucleus of around T ≈ 1.4 MeV.
Consequently, the amplitude of the microscopic energy corrections becomes much smaller (see Eq. (<ref>)) than in the ground state <cit.>. The two cross sections of the PES of ^250Cf evaluated for T = 1.4 MeV are given in the top and bottom parts of Fig. <ref>. As anticipated, the landscapes are smoother than those at the ground state (compare, e.g., to the close-by ^252Cf of Fig. <ref>), due to the shell corrections being smaller at finite temperature. Interestingly, they are, however, not fully damped. Some asymmetric fission contribution may thus persist, more or less hidden by the dominant symmetric fission component, as one can learn from the cross-section in the bottom part of Fig. <ref>. Due to its relatively high initial excitation energy, the compound nucleus ^250Cf produced in a fusion reaction has a high probability of emitting some neutrons before reaching the scission configuration (emission of light charged particles prior to scission is extremely rare due to the higher energy cost <cit.>). Particle evaporation before scission leads to what is commonly called multi-chance fission. The competition between fission and evaporation is described with a set of coupled Langevin plus master equations, similarly to what has been done in Ref. <cit.>, but now with the new, better adapted FoS parameterization. The yield of the number of pre-scission neutrons is presented in Fig. <ref> (top), as well as the elongation of the nucleus at which this emission takes place (bottom). One notices that most neutrons are emitted even before reaching the saddle point. The temperature of the compound nucleus obviously decreases after each emission, so the temperature dependence of the microscopic energy (<ref>) must be considered in our calculation. The average multiplicity of neutrons emitted before scission is found to be ν_pre = 2.7, while the multiplicity of the neutrons emitted before reaching the saddle point is 2.4. One finds that the emission of 3 neutrons is the most probable (57%), while the probability of events with no neutron emission, i.e., fission of ^250Cf itself, is very small (1.6%). From this result, we conclude that in the case of ^250Cf at a thermal excitation energy of E^* = 46 MeV, one is in fact mostly dealing with the fission of lighter Cf isotopes, which, due to the energy carried away by the emitted neutrons, are less excited, as one can see in Table I. As has been shown in Ref. <cit.>, the fission fragment yields are, to a good approximation, independent of the initial conditions when the Langevin trajectories are started in the region of the scission point or at a smaller elongation of the fissioning nucleus. To allow for multi-chance fission but keep the computing time within reasonable limits, we have therefore performed five independent Langevin calculations for the ^246-250Cf isotopes with the initial thermal excitation energies listed in Table I, starting from such an elongated initial configuration. Qualitatively, the PESs of these less excited ^246-250Cf isotopes are intermediate between Figs. <ref> and <ref>. The theoretical mass (top), charge (middle), and TKE (bottom) yields obtained for the different numbers of pre-fission neutrons are shown in Fig. <ref>. The yields obtained for each pre-scission isotope are then weighted with its probability (2nd row in Table I). The calculated primary (without taking neutron evaporation into account) and secondary (including neutron evaporation) mass yields are compared in Fig. <ref> with the experimental data taken from Ref.
<cit.>. Similar plots for the charge yields are presented in Fig. <ref>. It is seen in Figs. <ref> and <ref> that the estimates obtained by taking into account the pre-fission neutron evaporation, evaluated separately for the different Cf isotopes and then weighted, are much closer to the data. The experimental (top), primary (middle), and final (bottom) estimates of the isotopic yields are shown in Fig. <ref> as functions of N_f and Z_f. The calculations were based on 5 × 100 000 Langevin trajectories, so the range of less-probable nuclides is slightly smaller than the one obtained experimentally in Ref. <cit.>. The final distribution of yields, i.e., after neutron emission from the fragments, is found to be shifted by 2-3 units relative to the measured ones. A similar plot, but for the fragments' total kinetic energy (TKE), is shown in Fig. <ref>. For the lightest and the heaviest fragments, as well as the ones corresponding to symmetric fission, our model predicts a small TKE around 140 MeV, while the fragments with masses around A=140 or A=110 are found to have larger TKEs, around 160 MeV. A more detailed comparison of our model with the data <cit.> is shown in Fig. <ref>, with the secondary isotopic distributions of fragment elements from Ga to Dy plotted as a function of the neutron number. Both theoretical and experimental yields show a kind of inverted parabola on the logarithmic scale. However, the stiffness of all the experimental distributions is significantly smaller than that of our theoretical estimates. In addition, the peak of the experimental distribution is generally shifted by 2-3 units towards larger neutron numbers relative to the theoretical distribution, as already deduced above. It is interesting to note that although the description of the integral mass and charge yields is of similar quality for the spontaneous fission of ^252Cf and the fusion-fission of ^250Cf, the predictions for the isotopic yields are slightly worse in the latter case. This effect may suggest some deficiency in treating multi-chance fission and/or evaporation in general. However, this observation still needs further investigation due to the interplay of various aspects during the fission process and the interdependence of its different stages. The N_f/Z_f ratio as a function of the fragment charge number is shown for ^250Cf at E^* = 46 MeV in Fig. <ref>. Our estimates corresponding to the primary (dashed line) and final (solid line) yields are compared with the data (red diamonds) taken from Ref. <cit.>. The dotted line indicates the neutron-to-proton ratio in the parent nucleus. The experimental data are located in between the pre- and post-emission lines, which raises the suspicion that we overestimate the number of neutrons emitted from the fragments. As one can see in Fig. <ref> (top), the calculated total number of neutrons emitted from both fragments is described in a rather satisfactory way. In contrast, Fig. <ref> (bottom) shows that the model overestimates the number of neutrons emitted from the light fragments and underestimates the ones from the heavy fragments. The deficiency noted above therefore seems to be connected to the neutron-emission balance between the two fragments and may thus be attributed to the description of the sharing of the nucleons and/or excitation energy at scission. However, due to the entangled nature of the process, further investigations are required before a final conclusion can be drawn.
§ SUMMARY AND CONCLUSIONS In a previous communication <cit.> we presented a multi-dimensional Langevin fission model capable of handling the various facets of the process, including i) the dynamical evolution of the fissioning system between the ground state and the scission point, in competition with particle evaporation, ii) the sharing of neutrons, protons, and excitation energy between the two fragments at the moment of scission, iii) their kinetic energy after full acceleration, and finally iv) their decay back to equilibrium through the evaporation of neutrons. The energy dependence of the different ingredients has been included from the beginning. Until now, the model had been tuned and tested for low-energy fission only, particularly for thermal neutron-induced fission of ^235U. It also demonstrated its capacity <cit.> to give a fair description of the evolution of the fragment properties along the Fermium isotopic chain in the low-energy regime, where most experimental information is available. In the present study, the theoretical framework developed in Ref. <cit.> was applied, without any change of parameters, to the spontaneous fission of ^252Cf and the fission of ^250Cf produced at an excitation energy of 46 MeV in a fusion reaction, thus permitting a test of the predictive power of our model over an extended range of temperature, and thereby of the implemented energy dependences. A further extension of the present work compared to Ref. <cit.> is the investigation of more detailed observables, particularly fragment isotopic distributions with unique resolution. The recent availability of such accurate data makes it possible to test fission models less ambiguously, since previous data often lacked sufficient resolution or were restricted to integral distributions. Wherever the corresponding data are available, the model is found to describe reasonably well the integral primary and secondary mass and charge yields, the distribution of the fragment total kinetic energy, as well as the total number of neutrons emitted in coincidence with fission, for both ^252Cf and ^250Cf. The quite accurate reproduction of the isotopic yields for fragment elements from Ga to Dy shows a good description of the spontaneous fission of ^252Cf, but a somewhat poorer performance for the higher-excitation-energy fission of ^250Cf. The simultaneous analysis based on the total and individual (viz. per-fragment) neutron multiplicities suggests a deficiency related to the properties of the fragments emerging at scission, and probably to the calculated excitation energies. Further studies in this direction, and of alternative explanations such as charge equilibration and shell effects, will be the subject of future investigations. The present study demonstrates the importance of accurate and high-fold correlation experimental information for constraining fission models. The availability of more and more data of this kind will be very beneficial for improving the present model, and fission theory in general. Acknowledgments The authors would like to thank D. Ramos, I. Mardor, and Y. Kehat for valuable discussions and for supplying us with experimental data. This work has been supported by the Polish-French agreement COPIN-IN2P3, project No. 08-131, and by the Natural Science Foundation of China (Grants No. 11961131010 and 12275081). 99 RMo13 J. Randrup, P. Møller, Phys. Rev. C 88, 064606 (2013). MSc22 C. Schmitt, P. Møller, Phys. Lett. B 812, 136017 (2021). ACD21 M. Albertsson, B. G. Carlsson, T. Døssing, P. Møller, J. Randrup, S.
Åberg, Phys. Rev. C 103, 014609 (2021). MJV19 M. R. Mumpower, P. Jaffke, M. Verriere, J. Randrup, Phys. Rev C 101, 054607 (2020). USANG2019 M. D. Usang, F. A. Ivanyuk, C. Ishizuka, S. Chiba, Scientific Reports 9, 1525 (2019). SIM2018 G. Simenel, G. Scamps, Nature 564, 382 (2018). IVA2024 F. A. Ivanyuk, C. Ishizuka, S. Chiba, Phys. Rev. C 109, 034602 (2024). VRET2023 B. Li, D. Vretenar, Z. X. Ren, T. Nikšić, P. W. Zhao, J. Meng, Phys. Rev. C 107, 014303 (2023). BUL2016 A. Bulgac, P. Magierski, K. J. Roche, I. Stetcu, Phys. Rev. Lett. 116, 122504 (2016). SADHU22 G. Sadhukhan, S. A. Giuliani, W. Nazarewicz, Phys. Rev. C 105, 014619 (2022). VER2021 M. Verriere, N. Schunck, D. Regnier, Phys. Rev. C 103, 054602 (2021). REG2019 D. Regnier, N. Dubray, N. Schunck, Phys. Rev. C 99, 024611 (2019). ARIT2022 Y. Aritomo, A. Iwamoto, K. Nishio, M. Ohta, Phys. Rev. C 105, 034604 (2022). LM2019 J.-F. Lemaitre, S. Goriely, S. Hilaire, J.-L. Sida, Phys. Rev. C 99, 034612 (2019). PASCA H. Pasca, A. V. Andreev, G. G. Adamian, N. N. Antonenko, Phys. Rev. C 109, 044601 (2024). ROD2014 R. Rodriguez-Guzman, L.M. Robledo, Phys. Rev. C 89, 054310 (2014). PNS23 K. Pomorski, B. Nerlo-Pomorska, C. Schmitt, Z.G. Xiao, Y.J. Chen, L.L. Liu, Phys. Rev. C 107, 054616 (2023). BBB20 M. Bender, R. Bernard, G. Bertsch, S. Chiba, J. Dobaczewski, N. Dubray, S. A. Giuliani, K. Hagino, D. Lacroix, Z. Li, P. Magierski, J. Maruhn, W. Nazarewicz, J. Pei, S. Péru, N. Pillet, J. Randrup, D. Regnier, P.-G. Reinhard, L. M. Robledo, W. Ryssens, J. Sadhukhan, G. Scamps, N. Schunck, C. Simenel, J. Skalski, I. Stetcu, P. Stevenson, S. Umar, M. Verriere, D. Vretenar, M. Warda, S. Åberg, J. Phys. G: Nucl. Part. Phys. 47, 113002 (2020). SJA16 K.-H. Schmidt, B. Jurado, C. Amouroux, C. Schmitt, Nucl.Data Sheets 131, 107 (2016). SCHUN2022 N. Schunck, D. Regnier, Prog. Part. Nucl. Phys. 125, 103963 (2022). PIN17 K. Pomorski, F. A. Ivanyuk, B. Nerlo-Pomorska, Eur. Phys. J. A 53, 59 (2017). PDH20 K. Pomorski, A. Dobrowolski, R. Han, B. Nerlo-Pomorska, M. Warda, Z. G. Xiao, Y. J. Chen, L. L Liu, J. L. Tian, Phys. Rev. C 101, 064602 (2020). PBK21 K. Pomorski, J .M. Blanco, P. V. Kostryukov, A. Dobrowolski, B. Nerlo-Pomorska, M. Warda, Z. G. Xiao, Y. J. Chen, L. L. Liu, J. L. Tian, X. Y. Diao, Q. H. Wu, Chin. Phys. C 45, 054109 (2021). LCW21 L. L. Liu, Y. J. Chen, X. Z. Wu, Z. X. Li, Z. G. Ge, K. Pomorski, Phys. Rev. C 103, 044601 (2021). KDN21 P. V. Kostryukov, A. Dobrowolski, B. Nerlo-Pomorska, M. Warda, Z. G. Xiao, Y. J. Chen, L. L. Liu, J. L. Tian, K. Pomorski, Chin. Phys. C 45, 124108 (2021). PNe23 K. Pomorski, B. Nerlo-Pomorska, Acta Phys. Polon. Conf. Suppl. 16, 4-A21 (2023). PDN23 K. Pomorski, A.Dobrowolski, B. Nerlo-Pomorska, M. Warda, A. Zdeb, J. Bartel, H. Molique, C. Schmitt, Z.G. Xiao, Y.J. Chen, L.L. Liu, Acta Phys. Polon. B 54, 9-A2 (2023). SPN17 C. Schmitt, K. Pomorski, B. Nerlo-Pomorska, J. Bartel, Phys. Rev. C 95, 034612 (2017). camaano2015 M. Camaano et al., Phys. C 92, 034606 (2015). RCF19 D. Ramos, M. Caamano F. Farget, C. Rodriguez-Tajes, L. Audouin, J. Benlliure, E. Casarejos, E. Clement, D. Cortina, O. Delaune, X. Derkx, A. Dijon, D. Dore, B. Fernández-Dominguez, G. de France, A. Heinz, B. Jacquot, C. Paradela, M. Rejmund, T. Roger, M.-D. Salsac, C. Schmitt, Phys. Rev. 99, 024615 (2019). martin2021 J.-F. Martin et al., Phys. C 104, 044602 (2021). ATJ20 A. Al-Adili, D. Tarrío, K. Jansson, V. Rakopoulos, A. Solders, S. Pomp,A. Göök, F.-J. Hambsch, S. Oberstedt, M. Vidali Phys. Rev. C 102, 064610 (2020). COH06 M.B. Chadwick, P. 
Oblozinsky, M. Herman, N. M. Greene, R. D. McKnight, D. L. Smith, P. G. Young, R. E. MacFarlane, G. M. Hale, R. C. Haight, S. Frankle, A. C. Kahler, T. Kawano, R. C. Little, D. G. Madland, P. Mølle, R. Mosteller, P. Page, P. Talou, H. Trellue, M. White, W. B. Wilson, R. Arcilla, C. L. Dunford, S. F. Mughabghab, B. Pritychenko, D. Rochman, A. A. Sonzogni, C. Lubitz, T. H. Trumbull, J. Weinman, D. Brown, D. E. Cullen, D. Heinrichs, D. McNabb, H. Derrien, M. Dunn, N. M. Larson, L. C. Leal, A. D. Carlson, R. Block, B. Briggs, E. Cheng, H. Huria, K. Kozier, A. Courcelle, V. Pronyaev, S. der Marck, Nucl. Data Sheets 107, 2931 (2006). PCD20 A. J. M. Plompen, O. Cabellos, C. De Saint Jean, M. Fleming, A. Algora, M. Angelone, P. Archier, E. Bauge, O. Bersillon, A. Blokhin, F. Cantargi, A. Chebboubi, C. Diez, H. Duarte, E. Dupont, J. Dyrda, B. Erasmus, L. Fiorito, U. Fischer, D. Flammini, D. Foligno, M. R. Gilbert, J. R. Granada, W. Haeck, F.-J. Hambsch, P. Helgesson, S. Hilaire, I. Hill, M. Hursin, R. Ichou, R. Jacqmin, B. Jansky, C. Jouanne, M. A. Kellett, D. H. Kim, H. I. Kim, I. Kodeli, A. J. Koning, A. Yu. Konobeyev, S. Kopecky, B. Kos, A. Krása, L. C. Leal, N. Leclaire, P. Leconte, Y. O. Lee, H. Leeb, O. Litaize, M. Majerle, J. I Márquez Damián, F. Michel-Sendis, R. W. Mills, B. Morillon, G. Noguère, M. Pecchia, S. Pelloni, P. Pereslavtsev, R. J. Perry, D. Rochman, A. Röhrmoser, P. Romain, P. Romojaro, D. Roubtsov, P. Sauvan, P. Schillebeeckx, K. H. Schmidt, O. Serot, S. Simakov, I. Sirakov, H. Sjöstrand, A. Stankovskiy, J. C. Sublet, P. Tamagno, A. Trkov, S. van der Marck, F. Álvarez-Velarde, R. Villari, T. C. Ware, K. Yokoyama, G. Ẑerovnik, Eur. Phys. J. 56, 181 (2020). MDA20 I. Mardor, T. Dickel, D. Amanbayev, S. Ayet San Andrés, S. Beck,D. Benyamin, J. Bergmann, P. Constantin, A. Cléroux Cuillerier, H. Geissel, L. Gröff, C. Hornung, G. Kripko-Koncz, A. Mollaebrahimi, I. Miskun, W. R. Plaß, S. Pomp, A. Rotaru, C. Scheidenberger, G. Stanic, C. Will, Eur. Phys. J. Web of Conf. 239, 0204 (2020). WAS23 Y. Waschitz, D. Amanbayev, A. Spataru, I. Mardor, T. Dickel, E. O. Cohen, O. Aviv, S. Ayet San Andrés, D. L. Balabanski, S. Beck, J. Bergmann, Z. Brencic, P. Constantin, M. Dehghan, H. Geisse, L. Gröf, C. Hornung, N. Kaelantar-Nayestanaki, G. Kripko-Koncz, I. Miskun, A. Mollaebrahimi, D. Nichita, W. R. Plaß, S. Pomp, C. Scheidenberger, A. Solders, G. Stanic, M. Wasserheß, M. Vencelj, J. Zhao, Eur. Phys. J. Web of Conf. 284, 04005 (2023). CDF13 M. Caamaño, O. Delaune, F. Farget, X. Derkx, K.-H. Schmidt, L. Audouin, C.-O. Bacri, G. Barreau, J. Benlliure, E. Casarejos, A. Chbihi, B. Fernández-Domínguez, L. Gaudefroy, C. Golabek, B. Jurado, A. Lemasson, A. Navin, M. Rejmund, T. Roger, A. Shrivastava, C. Schmitt, Phys. Rev. C 88, 024605 (2013), with errata in Phys. Rev. C 89, 069903(E) (2014). NTS69 S.G. Nilsson, C. F. Tsang, A. Sobiczewski, Z. Szymański, S. Wycech, S. Gustafson, I. L. Lamm, P. Mølle, B. Nilsson, Nucl. Phys. A 131, 1 (1969). PDu09 K. Pomorski, J. Dudek, Phys. Rev. C 67, 044316 (2003). DPB16 A. Dobrowolski, K. Pomorski, J. Bartel, Comp. Phys. Comm. 199, 118 (2016). Str66 V.M. Strutinsky, Nucl. Phys. A 95, 420 (1967); Nucl. Phys. A 122, 1 (1968). GPo86 A. Góźdź, K. Pomorski, Nucl. Phys. A 451, 1 (1986). PPS89 S. Piłat, K. Pomorski, A. Staszczak, Zeit. Phys. A332, 259 (1989). PDN22 K. Pomorski, A. Dobrowolski, B. Nerlo-Pomorska, M. Warda, J. Bartel, Z. G. Xiao, Y. J. Chen, L. L. Liu, J. L. Tian, X. Y. Diao, Eur. Phys. J. A 58, 77 (2022). NPB02 B. Nerlo-Pomorska, K. Pomorski, J. 
Bartel, K. Dietrich, Phys. Rev. C 66, 051302(R) (2002). NPB06 B. Nerlo-Pomorska, K. Pomorski, J. Bartel, Phys. Rev. C 74, 034327 (2006). KPo12 H.J. Krappe, K. Pomorski, Nuclear Fission Theory, Lecture Notes in Physics, Vol. 838, Springer Verlag, 2012. BNP19 J. Bartel, B. Nerlo-Pomorska, K. Pomorski, A. Dobrowolski, Comp. Phys. Comm. 241, 139 (2019). PHo81 K. Pomorski, H. Hofmann, J. Physiqie, 42, 381 (1981). PNS00 K. Pomorski, B. Nerlo-Pomorska, A. Surowiec, M. Kowal, J. Bartel, K. Dietrich, J. Richert, C. Schmitt, B. Benoit, E. de Goes Brennand, L. Donadille, C. Badimon, Nucl. Phys. A 679, 25 (2000). SDP91 E. Strumberger, K. Dietrich, K. Pomorski, Nucl. Phys. A 529, 522 (1991). PPo94 W. Przystupa, K. Pomorski, Nucl. Phys. A 572, 153 (1994).
http://arxiv.org/abs/2406.08227v1
20240612135529
Measurement of the Imperceptible Threshold for Color Vibration Pairs Selected by using MacAdam Ellipse
[ "Shingo Hattori", "Yuichi Hiroi", "Takefumi Hiraki" ]
cs.HC
[ "cs.HC" ]
Cluster Metaverse Lab 8-9-5 Nishigotanda, Shinagawa Tokyo Japan University of Tsukuba 1-2 Kasuga, Tsukuba Ibaraki Japan s.hattori@cluster.mu Cluster Metaverse Lab 8-9-5 Nishigotanda, Shinagawa Tokyo Japan y.hiroi@cluster.mu Cluster Metaverse Lab 8-9-5 Nishigotanda, Shinagawa Tokyo Japan t.hiraki@cluster.mu § ABSTRACT We propose an efficient method for searching for color vibration pairs that are imperceptible to the human eye, based on the MacAdam ellipse, an experimentally determined range of color differences that the eye cannot distinguish. We created color pairs by selecting eight colors within the sRGB color space specified by the ellipse, and conducted experiments to confirm the threshold of the color vibration amplitude at which flicker becomes imperceptible to the human eye. The experimental results indicate a general guideline for acceptable amplitudes for pair selection. Teaser figure: (a) MacAdam ellipse <cit.>, the region that encompasses all colors that cannot be distinguished by the average human eye from the color at the center of the ellipse on the xy chromaticity diagram. For illustrative purposes, the ellipses are depicted at 10 times their actual size. (b) The two points of each MacAdam ellipse, whose major diameter a_n is multiplied by r, are selected for color vibration. (c) Users view the color vibration on a color-calibrated display. The two colors selected in (b) are displayed alternately. In every pair, the human eye perceives the intermediate color resulting from the fusion of the two colors. Measurement of the Imperceptible Threshold for Color Vibration Pairs Selected by using MacAdam Ellipse Takefumi Hiraki June 17, 2024 § INTRODUCTION Imperceptible color vibration is a perceptual phenomenon in which the human visual system interprets the rapid alternating presentation of two colors with the same luminance but different chromaticity as a single fused intermediate color <cit.>. This effect occurs when the alternation frequency exceeds the critical color fusion frequency (CCFF), which is typically around 25 Hz, leading to the perception of a blended color without discernible flicker <cit.>. Imperceptible color vibration can be used to embed information, such as a 2D barcode, in LCD images that is not perceived by the human eye but is detectable by cameras or photosensors <cit.>. An increase in the amplitude of the color vibration facilitates the detection of the vibration by the camera, yet simultaneously renders the flicker more perceptible to humans. The conventional approach to selecting color-vibration pairs is based solely on distances in the color space <cit.>. However, the human eye exhibits non-uniformity in color perception; for instance, even if two colors on the chromaticity diagram are separated by the same distance, two colors closer to blue can be distinguished, but two colors closer to green cannot.
A method for efficiently searching for and generating imperceptible color vibrations that takes into account the perceptual characteristics of the human eye has not been explored. This paper proposes a method for selecting color vibration pairs that respects the non-uniformity of color perception, based on the MacAdam ellipse <cit.> (Fig. <ref>). The MacAdam ellipse represents the experimentally determined range of color differences around a particular color that cannot be distinguished by the human eye. We conducted an experiment to create color pairs based on this ellipse and to confirm the threshold of the color vibration amplitude at which flicker cannot be detected by the human eye. § METHODS The MacAdam ellipse ℰ_n={c_n, θ_n, a_n, b_n} (n=1⋯ 25) is defined at 25 points on the xy chromaticity diagram by its center c_n=[c_nx, c_ny], rotation angle θ_n, and the lengths a_n and b_n of the major and minor diameters. We choose color pairs {p^+_n(r), p^-_n(r)}, obtained by scaling the major diameter a_n of these ellipses by a ratio r, as color vibration pairs, denoted as p^±_n(r)=[c_nx ± r· a_n sinθ_n,  c_ny ± r· a_n cosθ_n]. This allows for the selection of color vibration pairs while taking into account the non-uniformity of human color perception. The xy chromaticity diagram is obtained by normalizing the luminance Y of the CIEXYZ color space. Therefore, when displaying colors based on the selected p_n^± in practice, it is necessary to supply the luminance. As Y approaches 0, p^±_n approaches black, and the colors become nearly invisible. In contrast, as Y approaches 1, the brightness of p^±_n exceeds the sRGB range. Therefore, we set Y=0.4 and convert the xy values to XYZ using the colour-science library in Python [Colour 0.4.4 by Colour Developers, https://zenodo.org/records/10396329]. The selected color pairs in CIEXYZ are then converted to the sRGB color system to display the pairs. To convert the colors, we use the CIE 1931 2^∘ observer function under the D65 illuminant. § EXPERIMENT AND RESULTS We conducted a user experiment to confirm the threshold of r at which humans perceive flicker. In this experiment, 8 points within the sRGB color space were selected from the 25 points within the xy color space defined by the MacAdam ellipses to generate color vibrations. Figure <ref> shows the colors of the 8 selected points. We generated color pairs with r values from 1 to around 40, split into 8 varying intervals across colors, and randomly presented these pairs on a screen in front of the participant. Since the color space of sRGB is smaller than the xy color space, some of the p_n^± cannot be reproduced in the sRGB color space. After omitting these pairs, we used 46 pairs in this experiment. To detect random responses, 46 single colors without color vibration were also displayed. As a result, each participant saw a total of 92 color patterns. Ten participants (8 males, 2 females) were asked to respond whether they could perceive flicker in the displayed colors. The participants were seated in front of an sRGB color-calibrated LCD display (ColorEdge CG2420-Z, EIZO Inc.) at a distance of 60 cm, so that their eye level was at the center of the monitor. Each color pair was displayed as a 15 cm square viewed at 60 cm from the monitor. After every five pairs, the participants took a break to look at a black screen. The protocol was approved by the Cluster, Inc. Research Ethics Committee, and informed consent was obtained from all participants.
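As a rough illustration of the pair-selection and color-conversion pipeline described above, the following Python sketch builds one vibration pair from ellipse parameters and converts it to sRGB with the colour-science library mentioned in the text. The ellipse parameters in the example are hypothetical (not MacAdam's measured values), and the conversion assumes the library's xyY_to_XYZ / XYZ_to_sRGB functions with their default D65 / CIE 1931 2° settings.

```python
import numpy as np
import colour  # colour-science, as referenced above (version 0.4.4)

def color_pair(c_xy, theta, a, r):
    """Return the two chromaticities p+/- of a vibration pair for one ellipse."""
    cx, cy = c_xy
    dx, dy = r * a * np.sin(theta), r * a * np.cos(theta)
    return np.array([cx + dx, cy + dy]), np.array([cx - dx, cy - dy])

def xy_to_srgb(xy, Y=0.4):
    """Convert an xy chromaticity at fixed luminance Y to sRGB values in [0, 1]."""
    XYZ = colour.xyY_to_XYZ([xy[0], xy[1], Y])
    rgb = colour.XYZ_to_sRGB(XYZ)
    # components outside [0, 1] indicate the color is not reproducible in sRGB
    return rgb

# Hypothetical ellipse parameters, for illustration only
p_plus, p_minus = color_pair(c_xy=[0.305, 0.323], theta=np.deg2rad(58.0), a=0.0026, r=20)
rgb_plus, rgb_minus = xy_to_srgb(p_plus), xy_to_srgb(p_minus)
```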
Figure <ref> shows the average percentage of responses in which flicker was perceived, as a function of r. From this result, the r value at which color vibration was perceived 50% of the time was r=24.4. The results show that the perceptual tolerance expands by a factor of approximately 24 when the two colors alternate in time, in contrast to the MacAdam ellipses, which characterize discrimination thresholds when the two colors are juxtaposed spatially. § CONCLUSION This paper presented a method for generating color vibration pairs based on a perceptual metric and a general guideline on acceptable amplitudes for pair selection. This method of selecting color pairs can be extended to any color by interpolating the MacAdam ellipses at each point. Future work includes examining the threshold of r as a function of hue and adapting to individual perceptual differences.
http://arxiv.org/abs/2406.08641v1
20240612210126
ML-SUPERB 2.0: Benchmarking Multilingual Speech Models Across Modeling Constraints, Languages, and Datasets
[ "Jiatong Shi", "Shih-Heng Wang", "William Chen", "Martijn Bartelds", "Vanya Bannihatti Kumar", "Jinchuan Tian", "Xuankai Chang", "Dan Jurafsky", "Karen Livescu", "Hung-yi Lee", "Shinji Watanabe" ]
cs.SD
[ "cs.SD", "cs.CL", "eess.AS" ]
§ ABSTRACT ML-SUPERB evaluates self-supervised learning (SSL) models on the tasks of language identification and automatic speech recognition (ASR). This benchmark treats the models as feature extractors and uses a single shallow downstream model, which can be fine-tuned for a downstream task. However, real-world use cases may require different configurations. This paper presents ML-SUPERB 2.0, a new benchmark for evaluating pre-trained SSL and supervised speech models across downstream models, fine-tuning setups, and efficient model adaptation approaches. We find performance improvements over the setup of ML-SUPERB. However, performance depends on the downstream model design. Also, we find large performance differences between languages and datasets, suggesting the need for more targeted approaches to improve multilingual ASR performance. § INTRODUCTION Modern multilingual speech models have the capacity to model hundreds or, in some cases, over a thousand languages <cit.>, enabled by different training objectives, model architectures, and sources of training data. Importantly, the performance of these models is often evaluated using different experimental setups, which limits the extent to which their performance can be reliably compared. Several standardized evaluation setups and benchmarks have been proposed to evaluate the performance of pre-trained multilingual speech models <cit.>. The most comprehensive benchmark in terms of language coverage is the Multilingual Speech Universal PERformance Benchmark (ML-SUPERB) <cit.>, which covers 143 languages and includes multiple downstream tasks: monolingual ASR, multilingual ASR, and language identification (LID). Like the original SUPERB <cit.>, which only considers English speech, ML-SUPERB is set up to evaluate the performance of self-supervised learning (SSL) models. This evaluation is performed by freezing their representations and treating the models as feature extractors. These features are used as input to a lightweight downstream model, which can be fine-tuned for any of the downstream tasks. To minimize the impact of the downstream model on the overall measured performance, a simple two-layer Transformer-based decoder is used. ML-SUPERB was presented as a challenge at ASRU 2023, attracting 12 model submissions and 8 new language submissions <cit.>. Although the design of ML-SUPERB allows for efficient evaluation of multilingual SSL models across a large number of languages, it only considers one fixed downstream model design. This is problematic, as past work has found that the choice of downstream model can affect the rankings of SSL models across downstream tasks <cit.>. Also, the choice of downstream model design can be affected by application requirements and users' budgets, which further motivates benchmarking with more flexible constraints. In this paper, we present ML-SUPERB 2.0, which revisits ML-SUPERB's original design. Specifically, ML-SUPERB 2.0 includes larger-scale downstream models, SSL model fine-tuning (including partial fine-tuning strategies), efficient pre-trained model adaptation techniques (adapters <cit.> and LoRA <cit.>), and supervised pre-trained models (Whisper <cit.> and OWSM 3.1 <cit.>).
Also, we enrich ML-SUPERB's evaluation metrics to place greater focus on robustness across languages and describe variation across datasets. All code and data used to develop ML-SUPERB 2.0 are publicly available.[<https://github.com/espnet/espnet/tree/master/egs2/ml_superb/asr1>] § INVESTIGATION DETAILS ML-SUPERB 2.0 considers a variety of architectural variations, pre-training and fine-tuning approaches, described in the next four sections. We then discuss the changes in the evaluation metrics, which allow us to investigate performance differences across languages and datasets. §.§ Downstream Architectures Past work has found ASR performance differences between downstream architectures when comparing representations from pre-trained SSL models <cit.>. These findings motivate a systematic comparison to better understand their impact on ASR performance. Therefore, ML-SUPERB 2.0 considers both CTC-based (CTC) and hybrid CTC/attention-based (CTC-ATT) frameworks as adopted in <cit.>, and within each framework, compares three architectures, namely the Transformer <cit.>, Conformer <cit.>, and E-Branchformer <cit.>. In preliminary experiments, we compared these architectures to others (e.g., bi-LSTMs, transducers), and these three were chosen for their better performance or faster convergence. §.§ Model Fine-Tuning Fine-tuning is a common practice to adapt pre-trained SSL models to a downstream task. While fine-tuning is effective, it traditionally requires updating all model parameters, which is costly. Partial fine-tuning is an alternative that strikes a balance between training efficiency and performance <cit.>. ML-SUPERB 2.0 includes fine-tuning for the CTC/CTC-ATT frameworks, using either full fine-tuning or partial fine-tuning, which focuses on the bottom, middle, or top layers of the models, while keeping the other layers fixed. §.§ Efficient Model Adaptation Efficient model adaptation approaches offer a parameter-efficient alternative to full fine-tuning <cit.>. In particular, the use of adapter models has been found to be competitive with, and sometimes improve upon, full fine-tuning, especially in low-resource settings <cit.>. These adapter models are small neural modules added between layers of a pre-trained model, which enable efficient fine-tuning by only learning the adapter module parameters. ML-SUPERB 2.0 evaluates the performance of adapters using the CTC/CTC-ATT frameworks. Specifically, we insert two adapter layers into each layer of the pre-trained SSL models, leaving the rest of the model unchanged (i.e., following the setup of <cit.>). ML-SUPERB 2.0 also evaluates Low-Rank Adaptation (LoRA). LoRA freezes the pre-trained SSL models and injects low-dimensional layers to be added to the outputs of the projection matrices within the multi-head attention mechanism. §.§ Supervised Pre-Trained Models Scaling up supervised models has resulted in ASR performance that is competitive with SSL models on several evaluation datasets <cit.>. ML-SUPERB 2.0 evaluates two recent supervised models, namely Whisper and OWSM 3.1, to relax the constraint of evaluating SSL models only. We use the CTC framework to evaluate the encoder and the CTC-ATT framework to evaluate both the encoder and decoder of these models. Also, we evaluate the partial fine-tuning setup described in Section <ref> within the CTC framework and use it exclusively within the CTC-ATT framework to limit the number of tunable parameters on the ML-SUPERB 2.0 dataset. 
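As a concrete illustration of the LoRA approach described above, the following PyTorch sketch wraps a frozen linear projection with a trainable low-rank update. It is a generic sketch rather than the benchmark's actual implementation; the rank and scaling values match the configuration reported in the experimental design below, and the attribute names in the usage comment are hypothetical.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear projection with a trainable low-rank update (illustrative)."""

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # the pre-trained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)     # the update starts as a zero perturbation
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Hypothetical usage: wrap the query and key projections of each attention block, e.g.
# attn.q_proj = LoRALinear(attn.q_proj); attn.k_proj = LoRALinear(attn.k_proj)
```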
§ EXPERIMENTAL DESIGN ML-SUPERB 2.0 evaluates both multilingual ASR and LID. The objective is to concurrently predict a language identifier token and transcribe the spoken content. ML-SUPERB 2.0 does not include ML-SUPERB's monolingual ASR track. §.§ General Setup ML-SUPERB 2.0 updates ML-SUPERB's dataset by correcting annotation mistakes,[We removed Highland Puebla Nahuatl from the Mexican endangered languages corpus and Norwegian from the NST corpus because of their mismatched annotations, and corrected the language label for VoxPopuli Italian.] resulting in ∼300 hours (85 hours for validation and test sets) drawn from 142 languages across 15 datasets. Some languages occur in more than one dataset. A 1-hour subset was drawn for each language-dataset pair, and the 1-hour subsets were combined to obtain the training dataset. Similarly, 10-minute subsets were drawn for each language-dataset pair, and these serve as the development and test datasets. A subset of 20 languages is reserved for few-shot (FS) learning experiments, whereas the normal experiments refer to the other 122 languages. In the FS setting, five randomly selected utterances per language are used for training, while the 10-minute subsets for those languages are used for development and testing. All experiments are performed using ESPnet <cit.> with SSL model support from S3PRL <cit.>. Among the SSL models available, we evaluate XLS-R <cit.> and MMS <cit.> due to their superior performance on ML-SUPERB.[We use model variants with 24 layers and ∼300 million parameters.] As in ML-SUPERB, we compute a weighted sum of the layers of the SSL models and the encoder of the supervised models, and use it as input to the downstream models. This is applied to each of our experiments. In line with the spirit of ML-SUPERB, ML-SUPERB 2.0 limits the number of tunable parameters to 100 million for each evaluated configuration. This constraint ensures that large-scale models can be evaluated across a diverse range of computing environments, improving the accessibility and practicality of ML-SUPERB 2.0. §.§ Downstream Architectures When evaluating the different architectures within the CTC and CTC-ATT frameworks, we base our hyperparameter selection on prior research <cit.>. In particular, we keep the number of parameters of the downstream models below 100 million and tune only the learning rates. For the CTC framework, the layer configurations are as follows: 24 layers for the Transformer-based model, 14 for the Conformer-based model, and 12 for the E-Branchformer-based model. For the CTC-ATT models' encoders, we use 15 layers for the Transformer-based, 8 for the Conformer-based, and 7 for the E-Branchformer-based models. The Conformer-based model has a kernel size of 15, whereas the E-Branchformer's multi-layer perceptron uses a kernel size of 31 and a dimension of 3072. Common configurations across all models include an 8-head multi-head attention module with 512 hidden states and 2048 projection units, a batch size of 8 with gradient accumulation every four steps, a learning rate chosen from the set {10^-3, 10^-4, 10^-5} with 25,000 warm-up steps, and a dropout rate of 0.1. For the decoders, a Transformer decoder with 8 layers is used for all models. For hybrid training, the CTC and attention decoder weights are set to 0.3 and 0.7, respectively. §.§ Model Fine-Tuning ML-SUPERB 2.0 evaluates fine-tuning approaches using XLS-R and MMS, which both have 24 layers.
The partial fine-tuning approach targets layers 1–6 (bottom), 9–14 (middle), or 19–24 (top). This way, the number of updated parameters does not exceed 100 million. Besides partial fine-tuning, we also examine full fine-tuning, which is provided only for comparison. To explore the impact of different downstream training objectives, we evaluate both the CTC and CTC-ATT frameworks. The CTC framework uses a 2-layer Transformer encoder as in ML-SUPERB <cit.>. For the CTC-ATT framework, we adopt a small-scale downstream model from the configuration in <cit.> to ensure that there are fewer than 100 million tunable parameters. Specifically, the model consists of a 2-layer Transformer-based encoder and a 4-layer Transformer-based decoder. Each encoder block has an 8-head multi-head attention module with 256 hidden states and 1024 projection units, and each decoder block contains a 4-head multi-head attention module with 256 hidden states and 2048 linear projection units. The other hyperparameters are similar to those used for the experiments comparing downstream architectures. §.§ Efficient Model Adaptation We evaluate the use of adapters and LoRA within both frameworks and follow the setup described in Section <ref>. The configuration of the adapter models and LoRA follows previous work <cit.>. Specifically, the adapter layers have a dimension of 64, and we set the LoRA rank and its constant scaling factor α to 16. The LoRA module is applied to all query and key vectors within the multi-head attention modules of the pre-trained SSL models. To accommodate the additional parameters introduced by the adaptation layers, we reduce the number of layers in the encoder of the downstream models by one. §.§ Supervised Pre-Trained Models ML-SUPERB 2.0 evaluates the medium-sized variants of Whisper and OWSM 3.1, since these are closest in size to the evaluated XLS-R and MMS models.[The Whisper and OWSM 3.1 model variants have 769 and 1017 million parameters, respectively.] We include two experimental setups using these models, namely one using only their pre-trained encoder within the CTC framework, and another that evaluates both the pre-trained encoder and decoder within the CTC-ATT framework. For the CTC framework, ML-SUPERB 2.0 investigates the performance of both the frozen pre-trained encoder using a Transformer-based downstream model and partial fine-tuning of the pre-trained encoder. The experimental setup is similar to that for the CTC framework described in Sections <ref> and <ref>, with the exception of fine-tuning only the top layers of the encoder (i.e., layers 19–24) to limit the number of updated parameters to 100 million. In the CTC-ATT framework, we do not add additional downstream models. The encoder remains frozen, and we also use the same settings (i.e., the medium-sized model variant) as in the CTC framework. Moreover, fine-tuning only targets the top layers of the decoder (i.e., layers 19–24). §.§ Evaluation For each configuration of the benchmark, ML-SUPERB 2.0 computes the LID accuracy and character error rates (CER) on the test dataset. Specifically, we first compute a per-language CER as the macro-average of CERs across all of the (one or more) datasets for that language. We then compute the macro-average of the per-language CERs and the standard deviation (SD) of the language-specific CERs. We report these for both the normal and few-shot (FS) settings. The LID accuracy scores are only reported for the normal setting.
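A minimal sketch of this aggregation, assuming per-(language, dataset) CERs have already been computed, could look as follows; it is illustrative rather than the benchmark's actual evaluation code.

```python
from collections import defaultdict
import numpy as np

def aggregate_cer(results):
    """Macro-average CER over languages and its standard deviation (illustrative).

    results : iterable of (language, dataset, cer) tuples.
    Each language's CER is first macro-averaged over its datasets, and the reported
    score is the macro-average (and SD) of these per-language CERs.
    """
    per_lang = defaultdict(list)
    for language, _dataset, cer in results:
        per_lang[language].append(cer)
    lang_cer = np.array([np.mean(cers) for cers in per_lang.values()])
    return lang_cer.mean(), lang_cer.std()
```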
Inspired by past work on fairness in machine learning <cit.>, we also report the worst-performing language (WL), i.e., the one with the highest CER in the normal setting, for each configuration, in an attempt to encourage research on methods that leave no language "behind". Lastly, we investigate the CER range between multiple datasets in the same language, when available, to separate the effects of domain or acoustic differences. We perform this analysis using the best-performing model and configuration of the benchmark given the CER in the normal setting. We describe the language that shows the largest range in CER among its datasets. § RESULTS AND DISCUSSION §.§ Comparisons Between Models and Settings Downstream Architectures: The results for different downstream architectures are presented in Table <ref>. The table shows that there is no superior model across all evaluated configurations. However, the E-Branchformer-based models outperform their Transformer-based and Conformer-based counterparts in almost all cases. This result aligns with trends noted in previous work <cit.>, confirming the strong performance of the E-Branchformer model for LID and multilingual ASR. When comparing the CTC and CTC-ATT frameworks, we find that CTC performs slightly better in the few-shot setting, while CTC-ATT (i.e., rows with a plus) is stronger in the normal setting. The findings suggest that the CTC framework might have better generalization capabilities when limited amounts of data are available. Comparing these results to the shallow-downstream baseline from ML-SUPERB (i.e., first two rows), we find an improvement in LID and ASR performance in the normal setting. However, the shallow-downstream baseline, based on MMS, still performs competitively in the few-shot setting. With roughly 6 million tunable parameters, the baseline's performance echoes the insight from the 2023 ML-SUPERB challenge <cit.>: scaling up models does not necessarily translate to improved performance on multilingual speech tasks. In sum, our results reinforce findings in past work <cit.> that pre-trained SSL model rankings for ASR vary with the choice of downstream architecture. Model Fine-tuning: The model fine-tuning results are presented in Table <ref>. These results suggest that fine-tuning of the middle layers (i.e., layers 9–14) is more effective across the evaluated SSL models and training frameworks than fine-tuning the bottom or top layers. While full fine-tuning mostly outperforms partial fine-tuning in the normal setting (it also has the lowest mean CER on the worst-performing language in most cases), this is not the case in the FS setting. For instance, full fine-tuning of MMS leads to a higher mean CER compared to fine-tuning the middle layers in the FS setting. This suggests that the choice of fine-tuning strategy is crucial and warrants further exploration within the context of the benchmark. Efficient Model Adaptation: The efficient model adaptation results, detailed in Table <ref>, also do not reveal a single best model across the evaluated configurations. However, LoRA outperforms adapters across SSL models in the normal setting, indicating it is the preferred option within the setup of the benchmark. When comparing frameworks, the results generally align with those from the downstream analysis (Table <ref>). We find a difference when looking at the LID task, where XLS-R with LoRA adaptation outperforms MMS within the CTC framework, while MMS achieves better performance within the CTC-ATT framework.
This suggests that the choice of framework and adaptation method can impact the performance, depending on the task and the SSL model used. Supervised Pre-Trained Models: The experiments with supervised pre-trained models are shown in Table <ref>. The results indicate that using only the pre-trained encoder from supervised models leads to better ASR performance than using models with the original decoder. The performance differences might stem from challenges in partial fine-tuning of the decoder, or from the potential biases from large-scale supervised training in major languages. Also, we find that supervised pre-trained models do not consistently outperform the SSL-based models across the evaluated configurations, which aligns with results reported in previous work <cit.>. While this work does not conduct a deeper analysis into the optimal utilization of supervised pre-trained models, it highlights this area as a promising direction for future research within the ML-SUPERB 2.0 benchmark. §.§ Variation Across Languages and Datasets To investigate the impact of different languages on the benchmark performance, we report a standard deviation for each reported CER. We find large standard deviations in both the normal and few-shot settings, indicating that there is substantial variation among the language-specific CERs. The CER of the worst-performing language, which we found to be Lao or Min Nan Chinese in most cases, also highlights the large impact of language differences, since it is substantially higher than the mean CER in the normal and few-shot settings. When investigating performance differences between datasets within a single language, we find large differences as well. For the best-performing model and configuration of ML-SUPERB 2.0, which involves fine-tuning the middle layers of MMS within the CTC framework, the largest differences in CER are among the datasets of Urdu. Specifically, we find that the CER of Urdu from Common Voice <cit.> is 21.8%, whereas it is 56.9% on data from Fleurs <cit.>. Note also that Urdu has the largest performance difference between its datasets in many of the other evaluated configurations. These results motivate future work on creating truly multilingual model representations, which can transfer to a broad range of languages and domains. § CONCLUSION We introduced ML-SUPERB 2.0, an updated benchmark for multilingual speech pre-trained models, which builds upon and extends ML-SUPERB. By relaxing many of ML-SUPERB's constraints, ML-SUPERB 2.0 opens up new avenues for research, offering a broader scope for exploration within the benchmark's setup. We investigated four primary extensions to ML-SUPERB, namely the use of larger-scale downstream models, model fine-tuning, efficient model adaptation, and the incorporation of supervised pre-trained models. Furthermore, we enhanced the evaluation metrics of ML-SUPERB to better track robustness across languages, and described dataset variation using the benchmark's best-performing model and configuration. While each of the four extensions has shown improvements over the models in the original ML-SUPERB, model fine-tuning achieves the best performance on both LID and multilingual ASR. However, the large deviations across languages and the substantially higher CER for the worst-performing languages suggest that tailored or language-specific approaches might be essential to reduce performance variability and improve model efficacy in multilingual speech processing. 
§ ACKNOWLEDGEMENTS This work used the Bridges2 system at PSC and Delta system at NCSA through allocations CIS210014 and IRI120008P from the ACCESS program, supported by NSF grants #2138259, #2138286, #2138307, #2137603, and #2138296. § REFERENCES
http://arxiv.org/abs/2406.09317v1
20240613165357
Common and Rare Fundus Diseases Identification Using Vision-Language Foundation Model with Knowledge of Over 400 Diseases
[ "Meng Wang", "Tian Lin", "Kai Yu", "Aidi Lin", "Yuanyuan Peng", "Lianyu Wang", "Cheng Chen", "Ke Zou", "Huiyu Liang", "Man Chen", "Xue Yao", "Meiqin Zhang", "Binwei Huang", "Chaoxin Zheng", "Wei Chen", "Yilong Luo", "Yifan Chen", "Jingcheng Wang", "Yih Chung Tham", "Dianbo Liu", "Wendy Wong", "Sahil Thakur", "Beau Fenner", "Yanda Meng", "Yukun Zhou", "Zehua Jiang", "Minghui Qiu", "Changqing Zhang", "Xinjian Chen", "Sophia Y. Wang", "Cecilia S. Lee", "Lucia Sobrin", "Pearse A. Keane", "Ching-Yu Cheng", "Haoyu Chen", "Huazhu Fu" ]
eess.IV
[ "eess.IV", "cs.CV" ]
RetiZero M. Wang et al. Centre for Innovation & Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117549, Singapore. Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117549, Singapore. Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041 Shantou, Guangdong, China.Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA. School of Biomedical Engineering, Anhui Medical University, 230032 Hefei, Anhui, China.College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100 Nanjing, Jiangsu, China.Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA.National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065 Chengdu, Sichuan, China.Big Vision Medical Technology Ltd., Suzhou, China.Singapore Eye Research Institute, Singapore National Eye Centre, Republic of Singapore.Ophthalmology & Visual Sciences Academic Clinical Program (EYE ACP), Duke-NUS Medical School, Singapore.Department of Computer Science, University of Exeter, Exeter, EX4 4RN, UK.Centre for Medical Image Computing, University College London, London, UK.NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK.Department of Medical Physics and Biomedical Engineering, University College London, London, UK.Tsinghua Medicine of Tsinghua University, 100084, Beijing, China.School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, 102218, Beijing, China.Foshan Aier Eye Hospital, 528000, Foshan, Guangdong, China.College of Intelligence and Computing, Tianjin University, 300350 Tianjin, China.School of Electronics and Information Engineering, Soochow University, Jiangsu 215006, China.Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California, USA.Department of Ophthalmology, University of Washington, Seattle, WA, USA.Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA, USA.Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA.Institute of Ophthalmology, University College London, London, UK.Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore 138632, Republic of Singapore. # M. Wang, and T. Lin are the co-first authors. C.Y. Cheng, H. Chen, and H. Fu are the co-corresponding authors and contributed equally. [figure]labelfont=bf,name=Figure: ,labelsep=period [table]labelfont=bf,name=Table: ,labelsep=period Common and Rare Fundus Diseases Identification Using Vision-Language Foundation Model with Knowledge of Over 400 Diseases Meng Wang 1,2#Tian Lin3 #Kai Yu4Aidi Lin3Yuanyuan Peng5Lianyu Wang6Cheng Chen7Ke Zou8 Huiyu Liang3Man Chen 3Xue Yao3Meiqin Zhang3Binwei Huang3Chaoxin Zheng3Wei Chen3Yilong Luo3 Yifan Chen3Jingcheng Wang9Yih Chung Tham1,2Dianbo Liu1,2Wendy Wong1,2Sahil Thakur10 Beau Fenner10,11Yanda Meng12Yukun Zhou13,14,15Zehua Jiang16,17Minghui Qiu18Changqing Zhang19 Xinjian Chen20Sophia Y. Wang21Cecilia S. Lee22,23Lucia Sobrin24Pearse A. 
Keane14,25 Ching-Yu Cheng1,2,10,11 () Haoyu Chen3 () Huazhu Fu26 () June 2024 § ABSTRACT Current retinal artificial intelligence models were trained using data with a limited range of disease categories and limited knowledge. In this paper, we present a retinal vision-language foundation model (RetiZero) with knowledge of over 400 fundus diseases. Specifically, we collected 341,896 fundus images paired with text descriptions from 29 publicly available datasets, 180 ophthalmic books, and online resources, encompassing over 400 fundus diseases across multiple countries and ethnicities. RetiZero achieved outstanding performance across various downstream tasks, including zero-shot retinal disease recognition, image-to-image retrieval, internal-domain and cross-domain retinal disease classification, and few-shot fine-tuning. Specifically, in the zero-shot scenario, RetiZero achieved Top5 scores of 0.8430 and 0.7561 on 15 and 52 fundus diseases, respectively. In the image-retrieval task, RetiZero achieved Top5 scores of 0.9500 and 0.8860 on 15 and 52 retinal diseases, respectively. Furthermore, clinical evaluations by ophthalmology experts from different countries demonstrate that RetiZero can achieve performance comparable to experienced ophthalmologists using zero-shot and image-retrieval methods without requiring model retraining. These retinal disease identification capabilities strengthen the case for RetiZero's clinical implementation. § INTRODUCTION Blindness and visual impairment represent a significant disease burden globally, impacting millions of individuals. Detection and timely treatment of ocular conditions, such as retinal and optic nerve diseases, are crucial for reducing severe and permanent damage. However, the insufficient availability of ophthalmic medical resources severely limits the prompt screening and treatment of retinal diseases. In recent years, artificial intelligence (AI)-based retinal disease screening systems have alleviated the workload on healthcare professionals to some extent, providing crucial technological support for the timely screening of retinal disease patients and their referral for treatment. Nevertheless, most previous AI-based methods were customized for specific diseases, such as diabetic retinopathy (DR) <cit.>, glaucoma <cit.>, and retinopathy of prematurity (ROP) <cit.>. Although several methods have been proposed for simultaneously screening multiple retinal diseases with promising performance <cit.>, most current AI models for ocular disease screening were trained on task-specific datasets, leading to inevitable prediction errors when facing new data (e.g., images acquired by different cameras) or changes in tasks (e.g., the introduction of new or rare categories). Furthermore, due to limited healthcare resources and the varying prevalence of retinal diseases, collecting comprehensive datasets covering all retinal abnormalities is time-consuming and challenging.
Consequently, most AI models were trained on limited data and disease categories, restricting their feature representation. Applying these models to different real-world settings or tasks requires extensive retraining with large datasets. Moreover, data quality and labeling issues further limit the widespread adoption of AI models in ophthalmic clinical settings. Driven by abundant big data and robust computing hardware, large foundation models (LFMs) have excelled in computer vision tasks <cit.>. Pre-trained on massive datasets, LFMs provide rich feature support for downstream tasks such as object detection <cit.>, few-shot recognition <cit.>, and zero-shot recognition <cit.>. The first ophthalmic LFM, RETFound <cit.>, introduced in 2023, was trained on large collections of unannotated retinal images using the masked autoencoder (MAE) framework <cit.>. It provides rich feature support and improves the performance of downstream tasks, including internal-domain and cross-domain retinal disease classification, few-shot learning, and prediction of systemic diseases. However, such an approach can hinder the model's capacity to align feature information with labels in downstream tasks. In contrast, the Foundation LAnguage-Image model of the Retina (FLAIR), a Contrastive Language-Image Pre-training (CLIP)-based LFM, enhances feature representation by aligning text descriptions with image features, improving feature-label alignment but struggling with complex semantic features in medical imaging <cit.>. Current LFMs for ophthalmic imaging are pre-trained on extensive yet categorically limited datasets. Therefore, developing LFMs with comprehensive ophthalmic disease knowledge is crucial for representing complex retinal features and enhancing downstream task performance. However, collecting massive and diverse ophthalmic data that covers a wide range of retinal diseases for pretraining remains a significant challenge. To address these problems and challenges, we collected 341,896 fundus images paired with text descriptions from 29 publicly available datasets (containing 303,124 fundus images with labels), 180 ophthalmic books (23,228 fundus images with text descriptions), and online resources (15,544 fundus images with text descriptions), encompassing over 400 retinal and optic nerve diseases across multiple countries/regions and ethnicities. As shown in Figure <ref>, RetiZero is based on a contrastive vision-language pretraining framework that integrates MAE-based pretraining knowledge and low-rank training methods. Moreover, we introduced an uncertainty vision-language feature calibration method using Dirichlet reparameterization within the contrastive vision-language pretraining framework, to better align vision-language features in the high-dimensional embedding space. Consequently, RetiZero achieved superior performance in various downstream tasks, including zero-shot fundus disease recognition, image-to-image fundus disease retrieval, internal-domain retinal disease identification, few-shot fine-tuning, and cross-domain fundus disease identification. § RESULTS §.§ Zero-shot fundus disease recognition The biggest advantage of RetiZero is its capability for zero-shot learning, which enables it to recognize fundus diseases using only textual prompts, without needing to retrain or fine-tune the model with labeled fundus images.
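A minimal sketch of how such prompt-based zero-shot recognition can be carried out with a CLIP-style image/text encoder pair is shown below; the encoder stand-ins, prompt template, and feature dimension are illustrative assumptions, not RetiZero's released interface.

```python
import torch
import torch.nn.functional as F

# Stand-ins for RetiZero's encoders (illustrative only): any CLIP-style
# image/text encoder pair producing embeddings of the same dimension would do.
def encode_image(images):
    return torch.randn(images.shape[0], 512)            # placeholder image features

def encode_text(prompts):
    return torch.randn(len(prompts), 512)               # placeholder text features

disease_names = ["glaucoma", "retinal detachment", "retinitis pigmentosa"]
prompts = [f"A fundus photograph of {d}." for d in disease_names]  # assumed template

images = torch.rand(4, 3, 224, 224)                      # a small batch of fundus photos

img_emb = F.normalize(encode_image(images), dim=-1)      # unit-norm image features
txt_emb = F.normalize(encode_text(prompts), dim=-1)      # unit-norm text features

# Cosine similarity between every image and every disease prompt.
logits = img_emb @ txt_emb.t()                            # (num_images, num_diseases)

# Top-k zero-shot predictions per image (k=3 here, matching the Top3 metric).
topk = logits.topk(k=3, dim=-1).indices
for i, idx in enumerate(topk):
    print(f"image {i}:", [disease_names[j] for j in idx.tolist()])
```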
As shown in Figure <ref> (a), RetiZero achieved overall Top1, Top3, and Top5 scores of 0.4421, 0.7024, and 0.8404, respectively, for recognizing 15 common fundus diseases and the normal condition across the 30,089 fundus images of the EYE-15 dataset, improvements of 25.52%, 15.68%, and 15.61% over FLAIR (a recent VLM foundation model) <cit.>, respectively. Furthermore, in the analysis of individual diseases, RetiZero showed outstanding zero-shot capability in identifying most categories, especially glaucoma (Top1: 0.7477, Top3: 0.9292, and Top5: 0.9718), Retinal Detachment (Top1: 0.7624, Top3: 0.9060, and Top5: 0.9628), and Retinitis Pigmentosa (Top1: 0.8190, Top3: 0.9533, and Top5: 0.9737) (Supplementary Figure 1). Additionally, to further validate RetiZero's zero-shot capability in more challenging clinical scenarios, we collected a more demanding dataset named EYE-52. This dataset comprises 7,007 fundus images from various ophthalmology clinics, covering 52 fundus diseases, many of which are extremely rare in clinical practice. The incidence/prevalence of each category in the EYE-52 dataset is shown in Supplementary Table 3. As depicted in Figure <ref> (b), RetiZero achieved overall Top1, Top3, and Top5 scores of 0.3595, 0.6259, and 0.7561, respectively, for recognizing the 52 types of fundus diseases in a zero-shot manner, providing superior performance compared with FLAIR <cit.> (Top1: 0.0915, Top3: 0.2626, and Top5: 0.3398) and random guessing (Top1: 0.0294, Top3: 0.0882, and Top5: 0.1471). Furthermore, RetiZero demonstrated superior zero-shot performance, especially for recognizing fundus diseases that are rare in clinical practice. For instance, RetiZero achieved Top1, Top3, and Top5 scores of 0.6163, 0.7907, and 0.8605, respectively, for identifying Bietti Crystalline dystrophy. For the recognition of chorioretinal coloboma, the Top1, Top3, and Top5 scores were 0.5085, 0.8079, and 0.9153, respectively (Supplementary Figure 2). Figure <ref> (c) shows the Top5 prediction results provided by RetiZero and FLAIR for three rare disease samples; RetiZero's Top5 predictions include the correct disease, further demonstrating its outstanding performance in screening rare diseases. More details on the rest of the 52 disease categories can be found in Supplementary Figure 2. §.§ Fundus disease identification by image-to-image retrieval As shown in task II of Figure <ref>, we took each image in the dataset in turn as the query and used the remaining samples as the candidate pool, and then computed similarity scores between the features extracted by RetiZero's image encoder from the query image and those of all candidate images. Figure <ref> (d) and Supplementary Figure 3 illustrate the excellent performance of RetiZero in identifying 15 fundus diseases through image-to-image retrieval. The overall Top1, Top3, and Top5 scores are 0.8537, 0.9279, and 0.9500, respectively, representing improvements of 9.35%, 4.79%, and 3.22% over RETFound <cit.>, and 300.22%, 121.11%, and 74.00% over FLAIR <cit.>. In addition, RetiZero demonstrated the best performance across all categories compared to RETFound <cit.> and FLAIR <cit.> (Supplementary Figure 3). Moreover, on the more challenging EYE-52 dataset, RetiZero achieved overall Top1, Top3, and Top5 scores of 0.7257, 0.8432, and 0.8860, respectively (Figure <ref> (e)), improvements of 12.35%, 7.85%, and 6.30% over RETFound <cit.>, and 767.75%, 389.88%, and 271.51% over FLAIR <cit.>, respectively.
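The leave-one-out retrieval protocol and the Top-k and Precision@N scores used above (Precision@N is defined formally in the Methods) can be summarized in a few lines; the random stand-in embeddings below are assumptions and would in practice be replaced by features from RetiZero's image encoder.

```python
import torch
import torch.nn.functional as F

def topk_retrieval_accuracy(features, labels, k):
    """Leave-one-out image-to-image retrieval: every image queries all the
    others, and a query counts as correct (Top-k) if any of its k most
    similar candidates carries the same disease label."""
    feats = F.normalize(features, dim=-1)
    sim = feats @ feats.t()                      # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))            # a query may not retrieve itself
    nn_idx = sim.topk(k, dim=-1).indices         # k nearest candidates per query
    hits = (labels[nn_idx] == labels.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()

def mean_precision_at_n(features, labels, n):
    """Precision@N: the fraction of the top-N retrieved candidates that share
    the query's label, averaged over all queries."""
    feats = F.normalize(features, dim=-1)
    sim = feats @ feats.t()
    sim.fill_diagonal_(float("-inf"))
    top_n = sim.topk(n, dim=-1).indices
    relevant = (labels[top_n] == labels.unsqueeze(1)).float()
    return relevant.mean().item()

# Toy stand-ins for the candidate pool (52 classes loosely mirrors EYE-52).
feats = torch.randn(500, 512)
labels = torch.randint(0, 52, (500,))
for k in (1, 3, 5):
    print(f"Top{k}: {topk_retrieval_accuracy(feats, labels, k):.4f}  "
          f"Precision@{k}: {mean_precision_at_n(feats, labels, k):.4f}")
```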
Furthermore, in the analysis of individual diseases, RetiZero demonstrated great potential, particularly in identifying several rare fundus diseases such as Bietti crystalline dystrophy (Top1: 0.8605, Top3: 0.9360, and Top5: 0.9419), chorioretinal coloboma (Top1: 0.8192, Top3: 0.8927, and Top5: 0.9096), and punctate inner choroidopathy multifocal choroiditis (Top1: 0.9022, Top3: 0.9464, and Top5: 0.9621) (Supplementary Figure 4). More details on the 52 disease categories can be found in Supplementary Figure 4. In addition, we also calculated Precision@1, Precision@3, and Precision@5 to comprehensively evaluate RetiZero's performance in the task of fundus disease identification through image-to-image retrieval. Figures <ref> (d) and (e) demonstrate that RetiZero achieved the highest Precision@1, Precision@3, and Precision@5 on both testing datasets, EYE-15 and EYE-52. Meanwhile, RetiZero demonstrated the best performance across most categories compared to RETFound <cit.> and FLAIR <cit.> (Supplementary Figure 5 and Supplementary Figure 6). Moreover, Figure <ref> (f) shows an example of the Top5 prediction results from RetiZero, RETFound, and FLAIR; RetiZero achieved superior retrieval performance compared to RETFound and FLAIR, further demonstrating its excellent feature representation capabilities. Furthermore, Supplementary Figure 7 presents heatmaps illustrating the attention weights of different foundation models for various fundus diseases. RetiZero's weights were more precisely concentrated on the regions affected by different fundus diseases. This precise focus underscores RetiZero's capability to accurately identify a range of fundus diseases, including rare ones, providing significant evidence of its diagnostic proficiency. §.§ Clinical evaluation by ophthalmology experts from different countries To comprehensively evaluate RetiZero's capability in fundus disease recognition without retraining the model, we randomly selected two samples from each category of the EYE-52 dataset, creating a new subset called EYE52-sub with a total of 104 instances. Fourteen ophthalmic experts from Singapore, the United States, and China were invited to diagnose the 104 samples in the EYE52-sub dataset. Among them, four ophthalmic experts had 3 to 5 years of clinical experience, six had 5 to 10 years of clinical experience, and four had more than 10 years. Specifically, we developed an online fundus image reading system and uploaded the 104 samples to the server, as shown in Figure <ref>. Mimicking the zero-shot setup, we provided 52 disease options on the webpage as prompts. During the image reading process, the clinicians selected diagnostic results from the 52 disease categories based on the image content. Finally, each expert was asked to assess their confidence in their diagnostic results. As shown in Figure <ref>, the diagnostic accuracy of the 14 ophthalmologists ranged from 0.365 to 0.788, with a median of 0.582 across ophthalmologists, while RetiZero's zero-shot Top1, Top3, and Top5 accuracies were 0.308, 0.635, and 0.798, respectively. Therefore, RetiZero's zero-shot Top3 performance is comparable to that of most ophthalmologists, and its Top5 performance surpasses that of all ophthalmologists. Furthermore, RetiZero's fundus disease identification by image-to-image retrieval achieved Top1, Top3, and Top5 accuracies of 0.6837, 0.7449, and 0.7959, respectively.
Therefore, RetiZero's Top1 accuracy in identifying fundus diseases through image-to-image retrieval surpasses that of most ophthalmologists. These experimental results further demonstrate that RetiZero can achieve performance comparable to experienced ophthalmologists through zero-shot and image retrieval methods without retraining the model. §.§ Internal domain retinal disease identification We collected three independent datasets from five ophthalmic clinics, named H1, H2, and H3, to validate the performance of RetiZero in internal-domain retinal disease identification tasks. Supplementary Figure 11 provides the data collection process and annotation details of the three datasets. "Internal domain" means that we fine-tuned and validated the model separately within each of the three datasets. Details of the three datasets are shown in Supplementary Tables 4 to 6. As shown in Figure <ref> (a), RetiZero achieved average AUCs of 0.9972, 0.9796, and 0.9930 on the three datasets, which encompass 15, 13, and 12 categories of retinal diseases/normal condition, respectively. These results represent improvements of 1.90%, 6.94%, and 2.85% compared to RETFound <cit.>, and 1.06%, 7.97%, and 3.02% compared to FLAIR <cit.>. This is particularly evident for certain retinal diseases with ambiguous features, such as macular hole, epiretinal membrane, and retinal artery occlusion. RetiZero exhibited a statistically significant improvement in identifying retinal diseases, demonstrating an obvious superiority over RETFound <cit.> and FLAIR <cit.> (p<0.05 in all three datasets, Figure <ref> (a)). §.§ Few-shot fine-tuning Limited annotated data has consistently hindered the advancement of AI algorithms for medical image recognition. To address this challenge, we fine-tuned the model using only five samples from each fundus disease to evaluate RetiZero's performance in identifying fundus diseases with very limited training data. Data details are provided in Supplementary Tables 7 to 9. As shown in Figure <ref> (b), RetiZero achieved the highest AUROC scores across the three datasets compared to RETFound <cit.> and FLAIR <cit.>. In the task of identifying 15, 13, and 12 types of fundus diseases on the H1, H2, and H3 datasets, RetiZero achieved AUROC values of 0.9668, 0.8585, and 0.9422, respectively, representing improvements of 7.21% to 35.10% over RETFound <cit.> and FLAIR <cit.> (all p-values < 0.01). These experimental results indicate that RetiZero possesses superior fundus feature representation capabilities. Even with limited annotated data samples, it can effectively learn the characteristic information of different fundus diseases in fundus images. §.§ Cross domain fundus disease identification To validate the robustness of RetiZero in the task of cross-domain fundus disease identification, we further reorganized the three datasets of H1, H2, and H3 and only used the data with shared categories across the three datasets. Then, we sequentially used the reorganized datasets of rH1, rH2, and rH3 as internal datasets and utilized the remaining two datasets as external testing sets to verify the robustness of different foundation models. The data information for the different experimental strategies is presented in Supplementary Tables 10 to 12. As shown in Figure <ref>, RetiZero achieved promising performance in all validation settings.
Specifically, in the internal test set of the three datasets, RetiZero achieved AUROC values of 0.9984, 0.9857, and 0.9901, respectively, representing improvements of 3.12%, 8.60%, and 6.32% over RETFound (all p-values < 0.01) <cit.>; 0.73%, 4.38%, and 4.97% over FLAIR (all P-values < 0.05, Figure <ref> (a)) <cit.>, with significant performance improvements observed in all testing set. In external tests, the performance of RetiZero remained similar to the internal test, with all AUROC >= 0.9124 and significantly outperformed RETFound <cit.> and FLAIR <cit.> in all tasks (all P-value <= 0.02, Figure <ref> (b) and Figure <ref> (c)). Additionally, as shown in Supplementary Figure 8 to Supplementary Figure 10, RetiZero exhibits different AUC scores in identifying various categories of retinal diseases on distinct datasets. Notably, RetiZero achieved outstanding performance in the identification of retinal diseases across most of the categories, especially in diseases with ambiguous pathologic features such as epiretinal membrane (ERM), retinal artery occlusion (RAO), and central serous chorioretinopathy (CSCR). § DISCUSSION In this study, we trained a vision-language-foundation model RetiZero for retinal imaging using vast fundus images paired with text description. It has demonstrated its strong capability in representing retinal disease features across a wide range of downstream tasks of retinal disease identification, including internal domain and cross-domain classification, few-shot fine-tuning, zero-shot recognition, and image-to-image retrieval. The performance of RetiZero is superior to two state-of-the-art LFMs, RETFound [15] and FLAIR [16]. These results collectively demonstrated the superior generalizable and robust performance of RetiZero in both common and rare retinal disease identification. The superiority of RetiZero over RETFound <cit.> and FLAIR <cit.> can be attributed to its unique design and diverse data used for pre-training. Although the RETFound model <cit.>, pre-trained on a large number of fundus images using the MAE architecture, can enhance the performance of various downstream tasks, it included a limited number of fundus disease categories, particularly rare fundus diseases. In addition, it lacks the incorporation of textual information, resulting in insufficient characterization of image feature attributes, making it unsuitable for text prompt-based zero-shot fundus disease screening tasks and limiting its application in clinical practice scenarios, especially for the identification of rare fundus diseases. In contrast, FLAIR <cit.>, based on the CLIP architecture, incorporates textual description information during network training to enhance the representation of image feature attributes. However, it is pre-trained on a very limited dataset of fundus disease knowledge, leading to poor performance in zero-shot recognition tasks for rare fundus diseases. Furthermore, FLAIR lacks guidance for learning information such as lesion contours and topological structures in images, resulting in low performance in fundus disease identification through image-to-image retrieval. To address these limitations, we developed this fundus contrastive language-image foundation model, RetiZero, which integrates the strengths of MAE self-supervised learning and CLIP contrastive learning architectures. 
To further enhance the model's understanding of fundus diseases, we curated a dataset of image-text pairs covering over 400 fundus diseases, sourced from publicly available datasets, ophthalmic textbooks, and online resources, for pretraining RetiZero. As a result, RetiZero is a foundation model with extensive and comprehensive knowledge in ophthalmology. Classification of retinal photographs for fundus disease identification is a well-studied task. Driven by comprehensive ophthalmic knowledge and contextual information from fundus images, RetiZero provides strong feature representation capability and robustness, enabling its superior performance for fundus disease identification across internal-domain, cross-domain, and few-shot learning settings. Although a foundation model can reduce the sample size needed for training, classification tasks still require a certain number of images for fine-tuning and testing, and it would be very challenging to collect sufficient samples for rare diseases. To address this issue, we introduced image-to-image retrieval and zero-shot recognition tasks. Neither task requires data for fine-tuning, making them particularly useful for rare diseases. Image-to-image retrieval involves determining the category of a query fundus image based on feature similarity scores between the query and candidate images. RetiZero leverages the excellent image-content representation capabilities of the MAE architecture and the textual feature alignment characteristics of the CLIP architecture; therefore, it achieved superior performance in fundus disease retrieval tasks based on retinal image content. Meanwhile, the zero-shot learning setting allows for the recognition of rare diseases with as few as one sample: RetiZero learned textual knowledge of over 400 types of fundus diseases, enabling it to perform promisingly in zero-shot fundus disease recognition tasks, a feat not achievable by RETFound. We also recognize limitations and the need for improvements in the current work. Although our collected dataset includes knowledge about over 400 types of fundus diseases, the imbalance across different categories may limit RetiZero's performance in downstream tasks. Therefore, further enriching the dataset with additional examples of under-represented categories of fundus diseases, especially rare fundus diseases, will be part of our future work. In addition, while RetiZero has shown promising performance across multiple tasks and datasets, specialized models optimized for specific tasks may outperform generic models. Therefore, we will further explore improvements of RetiZero for specific tasks. In conclusion, the proposed feature-calibrated retinal vision-language foundation model (RetiZero) with knowledge of over 400 retinal diseases can effectively represent the rich contextual feature information in fundus images, as well as effectively learn the alignment between retinal image features and textual descriptions. RetiZero achieved superior feature representation and generalizability across different retinal disease recognition tasks at various ophthalmic centers, under different degrees of domain drift, and with very limited training samples. In particular, the outstanding performance of RetiZero in zero-shot fundus disease identification and image-to-image retrieval-based fundus disease recognition holds significant importance for screening fundus diseases in clinical practice, especially rare fundus diseases.
§ METHODS §.§ Dataset §.§.§ Data for pretraining: We utilized RETFound <cit.>, pre-trained on over 900,000 fundus images using the MAE architecture, as the pre-trained backbone for the Image Encoder of RetiZero. Meanwhile, we introduced low-rank learnable factors into the pre-trained RETFound and leveraged the CLIP architecture to learn image-text knowledge, aiming to enhance the model's understanding of image-text correlations and improve its feature representation capabilities. We pre-trained RetiZero using our collected dataset comprising 341,896 image-text pairs, covering over 400 fundus diseases. Since the dataset used for pre-training with the MAE architecture has been previously described in RETFound <cit.>, this paper focuses on detailing the 341,896 image-text pairs we collected. As shown in Supplementary Table 13, the image-text pretraining data mainly consist of three parts: publicly available datasets with category information, data from ophthalmic books with descriptive text, and data from online resources with descriptions. Specifically, we collected a total of 303,129 fundus images from 29 publicly available datasets, covering over 100 different categories of retinal diseases. We used these category labels as the textual descriptions corresponding to the fundus images input into RetiZero. To enable RetiZero to acquire a more comprehensive knowledge of ophthalmology, we invited 10 ophthalmologists to further collect 23,228 fundus images with corresponding textual descriptions from 180 ophthalmic books. As shown in Supplementary Table 14, these images cover 414 ophthalmic descriptive labels, encompassing nearly all known fundus diseases to date. Furthermore, we also collected 28,800 fundus images with relevant descriptions from online resources. We assembled a team of 12 ophthalmologists to manually clean and organize 15,544 of these images along with their corresponding textual descriptions. In summary, the dataset for pretraining RetiZero covers almost all currently known fundus diseases, integrating very comprehensive ophthalmic knowledge. We pre-trained RetiZero on the public PyTorch platform and an Nvidia Geforce DGX A100 GPU (80G). The batch size was set to 128. Adam was adopted as the optimizer to optimize RetiZero. The procedure of data collection for RetiZero pretraining is provided in Figure <ref>. §.§.§ Data for internal domain retinal diseases identification: To verify the performance of the proposed RetiZero in the task of retinal disease identification, we built three datasets across multiple ophthalmic centers: health dataset 1 (H1), health dataset 2 (H2), and health dataset 3 (H3). This study was approved by the Joint Shantou International Eye Center Institutional Review Board and adhered to the principles of the Declaration of Helsinki. The data have been de-identified. In accordance with IRB regulations, if the data do not contain any identifiable patient information, informed consent is not required. As a result, this study has been granted approval to waive the need for informed consent. The clinical assessment and labeling procedure are shown in Supplementary Figure 11. The H1 dataset consists of 11,414 fundus images covering 14 categories of retinal diseases and a normal condition, collected from multiple eye clinics using different fundus cameras. We further divided H1 into training (6,942), validation (2,284), and testing (2,288) sets for model fine-tuning, model selection, and performance verification, respectively.
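As a rough illustration of how a split like H1's feeds the fine-tuning experiments in this section, the sketch below trains a simple classification head with the Adam optimizer and cross-entropy loss; the learning rate, feature dimension, linear-head design, and use of pre-extracted features are assumptions, while the 100-epoch, batch-size-64 schedule follows the configuration reported later in this section.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in tensors for the training and validation splits (real experiments
# would load fundus images and pass them through the RetiZero image encoder).
num_classes, feat_dim = 15, 512
train_set = TensorDataset(torch.randn(6942, feat_dim), torch.randint(0, num_classes, (6942,)))
val_set = TensorDataset(torch.randn(2284, feat_dim), torch.randint(0, num_classes, (2284,)))

head = nn.Linear(feat_dim, num_classes)                     # classification head (assumption)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)    # learning rate is an assumption
criterion = nn.CrossEntropyLoss()                           # loss stated in this section

best_acc = 0.0
for epoch in range(100):                                    # 100 epochs, batch size 64, as stated
    head.train()
    for x, y in DataLoader(train_set, batch_size=64, shuffle=True):
        optimizer.zero_grad()
        criterion(head(x), y).backward()
        optimizer.step()
    # Model selection on the validation split.
    head.eval()
    with torch.no_grad():
        correct = sum((head(x).argmax(-1) == y).sum().item()
                      for x, y in DataLoader(val_set, batch_size=64))
    best_acc = max(best_acc, correct / len(val_set))
print(f"best validation accuracy: {best_acc:.4f}")
```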
More details are given in Supplementary Table 4. The H2 dataset consists of 7,812 fundus images, including 12 types of retinal diseases and 1 normal condition. The category and data information are given in Supplementary Table 5. To validate the performance of fine-tuning RetiZero for retinal disease identification on the H2 dataset, we partitioned the H2 dataset into training (4,682), validation (1,561), and testing (1,569) sets for model fine-tuning, model selection, and performance evaluation, respectively. Supplementary Table 6 provides the category and data distribution information for the H3 dataset, which comprises 10,863 fundus images across 12 categories. We divided the H3 dataset into training (6,511), validation (2,174), and testing (2,178) sets for model fine-tuning, selection, and performance evaluation. In this paper, we fine-tuned RetiZero for the task of internal-domain retinal disease identification on the public PyTorch platform and an Nvidia Geforce 3090 GPU (24G). The Adam optimizer and cross-entropy loss function were adopted to guide the model fine-tuning. The total number of training epochs and the batch size were set to 100 and 64, respectively. §.§.§ Data for few-shot fine-tuning: To evaluate the performance of RetiZero in the few-shot fine-tuning downstream task, we further reorganized the H1, H2, and H3 datasets. Specifically, we randomly selected 5 samples from each category of the H1, H2, and H3 training sets for few-shot fine-tuning, while retaining the validation and testing datasets for model selection and performance evaluation. More details about the category and data distribution information are given in Supplementary Tables 7 to 9. In this experiment, RetiZero was fine-tuned on the public PyTorch platform and an Nvidia Geforce 3090 GPU (24G). The Adam optimizer and cross-entropy loss function were adopted to guide the model optimization. The total number of training epochs and the batch size were set to 1000 and 32, respectively. §.§.§ Data for cross-domain fundus disease identification: To verify the generality and robustness of RetiZero in the task of cross-domain fundus disease identification, we invited professional doctors to re-organize the H1, H2, and H3 datasets. Ultimately, 11 overlapping categories were identified across the three datasets, which were then renamed as rH1 (10,304 fundus images), rH2 (6,829 fundus images), and rH3 (10,485 fundus images). As shown in Supplementary Tables 10 to 12, we conducted three experimental settings to validate the generality and robustness of RetiZero. Specifically, we sequentially adopted rH1, rH2, and rH3 as internal datasets for model fine-tuning, selection, and internal testing, while utilizing the remaining two datasets as external testing sets to assess the generality and robustness of RetiZero. To deploy these experimental settings, we fine-tuned RetiZero on the public PyTorch platform and Nvidia Geforce 3090 GPUs (24G). We used the Adam optimizer and cross-entropy loss function to guide the model fine-tuning. The total number of training epochs and the batch size were set to 100 and 64, respectively. §.§.§ Data for the tasks of zero-shot fundus disease recognition and fundus disease identification by image-to-image retrieval: We combined the three datasets from different hospitals, H1, H2, and H3, into a dataset named EYE-15, containing 30,089 fundus images that include 14 common fundus diseases and 1 normal category.
This dataset was used to validate RetiZero's performance in screening common fundus diseases using zero-shot and image-to-image retrieval approaches. The data distribution of each category in EYE-15 is provided in Supplementary Table 1. Moreover, we further collaborated with several ophthalmologists from multiple eye clinics to collect 7,007 fundus images acquired by different fundus cameras (EYE-52 dataset), comprising 51 fundus diseases and 1 normal condition, to validate the performance of zero-shot fundus disease recognition and fundus disease identification by image-to-image retrieval in a more challenging setting. As shown in Supplementary Table 2, EYE-52 comprises many fundus diseases that are particularly rare in clinical practice, such as albinism, Bietti crystalline dystrophy, choroidal coloboma, and choroidal neoplasm. We adopted Top1, Top3, and Top5 accuracy to evaluate the performance of RetiZero in both the zero-shot fundus disease recognition and the image-to-image retrieval tasks. Supplementary Figure 11 illustrates the collection process for the EYE-15 and EYE-52 datasets. In this paper, we also adopted Precision@1, Precision@3, and Precision@5 as metrics to evaluate the performance of different foundation models in the task of fundus disease retrieval. Precision@N is a metric used to evaluate the performance of information retrieval systems and ranking algorithms; it measures the precision of the top N results returned by a system. Here is the formula and its explanation: Precision@N=| R_all∩ R_Re@N| /N, where R_all is the set of all relevant samples for the given query, and R_Re@N represents the set of the top N samples retrieved by the system in response to the query. | R_all∩ R_Re@N| denotes the number of relevant samples among the top N retrieved results, that is, the count of samples that are both relevant and retrieved within the top N results, while N is the number of top samples considered for the calculation. §.§ Framework of RetiZero Figure <ref> provides an overview of the RetiZero framework. RetiZero integrates the advantages of MAE self-supervised learning and CLIP contrastive learning architectures. Specifically, the model is built upon the MAE-based pre-trained backbone network RETFound <cit.>, whose weights are frozen to preserve the model's representation capability for complex semantic information such as lesion contours and topological structures in retinal images. Meanwhile, we introduced low-rank learnable factors into the pre-trained RETFound and leveraged the CLIP architecture to learn image-text knowledge, aiming to enhance the model's understanding of image-text correlations and improve its feature representation capabilities. Furthermore, we incorporated an uncertainty vision-language feature calibration method based on Dirichlet reparameterization into the contrastive vision-language pretraining framework to further refine vision-language features in the high-dimensional embedding space, thereby enhancing the model's ability to represent complex features in fundus images. Ultimately, RetiZero integrates the advantages of both the MAE and CLIP architectures, providing feature support for subsequent downstream tasks. We introduce the components of RetiZero in detail in the following sections. §.§.§ Image Encoder: As shown in Figure <ref>, the image encoder consists of an MAE-based SSL pre-trained backbone and low-rank learnable factors.
MAE is a widely used self-supervised learning approach that employs a simple autoencoder to reconstruct the original signal based on partial observations. MAE-based SSL pretraining can guide the network to focus on the rich structural information and contextual features in the images. Therefore, RETFound <cit.>, pre-trained on over 900,000 fundus images, is adopted as our MAE-based pre-trained backbone. Low-rank learnable factors (LoRA) are a parameter-efficient transfer learning method based on reparameterization <cit.>, which utilizes low-rank representations to minimize the number of trainable parameters. It enables a pre-trained large foundation model to incorporate new knowledge for new target tasks, demonstrating robust and state-of-the-art (SOTA) performance in various parameter-efficient transfer learning tasks. Therefore, we utilize low-rank learnable factors to introduce retinal feature description information into the image encoder of RetiZero, enhancing its capacity to represent feature attributes of retinal images. Specifically, given the input token sequence F_in∈ R^B× N× C_in and the output token sequence F_out∈ R^B× N× C_out obtained by the projection layer W∈ R^C_out× C_in, LoRA assumes that updates to W should be gradual and stable. Therefore, we apply low-rank approximations to delineate this gradual update. First, we freeze the transformer layer to keep W fixed while adding a bypass to complete the low-rank approximation. The bypass consists of two linear mapping layers, A∈ R^r× C_in and B∈ R^C_out× r, where r≪{ C_in,C_out}. Thus, the processing of the updated layer Ŵ can be described as: F_out=Ŵ F_in, Ŵ =W+Δ W=W+BA. Since the multi-head self-attention mechanism determines which regions to attend to based on cosine similarity, LoRA can be applied to the query, key, or value projection layers to influence the attention scores. We apply LoRA to the query and value projection layers for low-rank approximation optimization, so the processing strategy for multi-head self-attention becomes: Att( Q,K,V) =Softmax( QK^T/√(C_out) +B) V, Q=Ŵ_q F=W_qF+B_qA_qF, K=W_kF, V=Ŵ_v F=W_vF+B_vA_vF, where W_q, W_k, and W_v are frozen projection layers of RETFound, while A_q, B_q, A_v, and B_v are trainable LoRA factors. §.§.§ Text Encoder: Descriptions of fundus images are typically more challenging than those of natural images, as they often contain numerous specialized clinical medical terms, sometimes even comprising multiple lesion signs or sentences. Therefore, in this paper, we utilize the BioClinicalBERT <cit.> model pre-trained on medical texts from the MIMIC III dataset as the text encoder to obtain clinically-aware textual embeddings. §.§ Uncertainty-based feature calibration for guiding RetiZero pretraining In this paper, we further introduced an uncertainty vision-language feature calibration method based on Dirichlet reparameterization <cit.> into the contrastive vision-language pretraining framework, to further calibrate vision-language features in the high-dimensional embedding space and enhance the robustness of the model in representing complex features in fundus images. Specifically, as shown in Figure 1, RetiZero's pretraining consists of a fundus image encoder and a text encoder. A linear layer serves as a projection head for both the image encoder and the text encoder, mapping the acquired features to a 512-dimensional embedding feature space. Let ϕ ={ϕ_E ,ϕ_H} denote the image encoder (ϕ_E) and the corresponding projection head (ϕ_H).
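A minimal sketch of the LoRA-augmented attention described above is shown below: the pre-trained query and value projections are frozen and each receives a trainable low-rank bypass B·A, while the key projection is left untouched; the embedding dimension, head count, and rank r are illustrative assumptions rather than RetiZero's actual configuration.

```python
import torch
import torch.nn as nn

class LoRAQKVAttention(nn.Module):
    """Multi-head self-attention whose frozen query/value projections are
    augmented with trainable low-rank factors (W + B A), following the update
    rule described above. Dimensions and rank r are illustrative."""
    def __init__(self, dim=768, num_heads=12, r=4):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.w_q = nn.Linear(dim, dim, bias=False)
        self.w_k = nn.Linear(dim, dim, bias=False)
        self.w_v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        for p in (*self.w_q.parameters(), *self.w_k.parameters(), *self.w_v.parameters()):
            p.requires_grad = False                      # frozen pre-trained weights
        # Trainable low-rank bypass for query and value only (r << dim).
        self.a_q, self.b_q = nn.Linear(dim, r, bias=False), nn.Linear(r, dim, bias=False)
        self.a_v, self.b_v = nn.Linear(dim, r, bias=False), nn.Linear(r, dim, bias=False)
        nn.init.zeros_(self.b_q.weight)                  # start with delta-W = 0
        nn.init.zeros_(self.b_v.weight)

    def forward(self, x):                                # x: (batch, tokens, dim)
        B, N, C = x.shape
        q = self.w_q(x) + self.b_q(self.a_q(x))          # Q = W_q F + B_q A_q F
        k = self.w_k(x)                                  # K = W_k F (no LoRA)
        v = self.w_v(x) + self.b_v(self.a_v(x))          # V = W_v F + B_v A_v F
        split = lambda t: t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = map(split, (q, k, v))
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        out = attn.softmax(dim=-1) @ v
        return self.proj(out.transpose(1, 2).reshape(B, N, C))

tokens = torch.randn(2, 197, 768)                        # ViT-style token sequence
print(LoRAQKVAttention()(tokens).shape)                  # torch.Size([2, 197, 768])
```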
Given a fundus image X_i, the image encoder is adopted to obtain the feature representation F_Img=ϕ_E( X_i). Meanwhile, ψ ={ψ_E,ψ_H} is used to represent the text encoder (ψ_E) and the corresponding projection head (ψ_H). The text encoder (ψ_E) is adopted to extract the feature embedding F_T=ψ_E( X_T) from the text input X_T. Then, the image projection head (ϕ_H) and text projection head (ψ_H) are utilized to map the independent modality representations into a joint unit hyper-sphere space: I=ϕ_H( F_Img) /∥ϕ_H( F_Img) ∥ and T=ψ_H( F_T) /∥ψ_H( F_T) ∥, respectively. The similarity between the input image (X_i) and input text (X_T) is evaluated by the cosine similarity of the normalized features: I^TrT, where Tr represents the transpose operator. With the obtained similarity metrics, the optimization goal of the contrastive-based pretraining approach is to minimize the distance between features of paired images and text descriptions while maximizing the distance between features of unpaired samples. Specifically, assuming that a batch contains N samples, I_i∈{ I_1,I_2,...,I_N} and T_i∈{ T_1,T_2,...,T_N} represent the image and text feature vectors of each sample, while G={ 0,1,...,N-1} are the corresponding category labels. To guide model optimization, we use the following loss function: L_Con=L_Em+L_Dl, L_Em=1/2( ∑^N_i=1 -log( exp( I^Tr_iT_i) /∑^N_k=1 exp( I^Tr_iT_k) ) +∑^N_i=1 -log( exp( T^Tr_iI_i) /∑^N_k=1 exp( T^Tr_iI_k) ) ), where L_Dl is a loss function based on feature vectors that are reparameterized from the similarity measures using the Dirichlet distribution. The specific implementation is as follows. Step (1): Obtain the evidence features E_I2T and E_T2I by applying the Softplus activation function to the similarity metrics between image and text feature embeddings to ensure that the feature values are larger than 0: E_I2T=Softplus(I^TrT) and E_T2I=Softplus(T^TrI), where I2T and T2I indicate the image-to-text and text-to-image contrastive directions. Step (2): Parameterize E_I2T and E_T2I to a Dirichlet distribution, as: α_I2T,k =E_I2T,k+1, i.e., α_I2T,k =e_I2T,k+1, e_I2T,k={ Softmax( I^Tr_kT_1) ,...,Softmax( I^Tr_kT_N) }, α_T2I,k =E_T2I,k+1, i.e., α_T2I,k =e_T2I,k+1, e_T2I,k={ Softmax( T^Tr_kI_1) ,...,Softmax( T^Tr_kI_N) }, where α_I2T,k, α_T2I,k, e_I2T,k, and e_T2I,k are the k-th contrastive-similarity Dirichlet distribution parameters and the evidence for the image-text contrastive similarity of the k-th sample in a batch of N samples. Step (3): Calculate the belief masses and the corresponding uncertainty score as: b_I2T,k=e_I2T,k/S_I2T =( α_I2T,k -1) /S_I2T, u_I2T=N/S_I2T, b_T2I,k=e_T2I,k/S_T2I =( α_T2I,k -1) /S_T2I, u_T2I=N/S_T2I, where S_I2T=∑^N_k=1( e_I2T,k+1) =∑^N_k=1α_I2T,k and S_T2I=∑^N_k=1( e_T2I,k+1) =∑^N_k=1α_T2I,k are the Dirichlet strengths of image-to-text and text-to-image, respectively, used to enforce 1=∑^N_k=1 b_I2T,k+u_I2T and 1=∑^N_k=1 b_T2I,k+u_T2I. It can be seen from Eq. <ref> and Eq. <ref> that the probability assigned to the k-th sample is proportional to the observed similarity evidence for sample k. Conversely, the less total similarity evidence is obtained, the greater the total uncertainty. In this study, we associate the Dirichlet distribution with the distribution of feature similarity between images and text descriptions, thereby obtaining belief masses and a corresponding overall uncertainty score for the image-text similarity of each sample in a batch, based on the evidence collected from the feature similarity matrix.
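Read literally, steps (1)-(3) can be sketched as follows; the stand-in embeddings, the use of Softplus evidence throughout, and the cosine-normalized similarity are assumptions consistent with the equations above rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(image_emb, text_emb):
    """Steps (1)-(3) above: turn the image-to-text similarity matrix into
    non-negative evidence, parameterise a Dirichlet per sample, and derive
    belief masses plus an overall uncertainty score."""
    n = image_emb.shape[0]
    sim = F.normalize(image_emb, dim=-1) @ F.normalize(text_emb, dim=-1).t()
    evidence = F.softplus(sim)                  # E_I2T = Softplus(I^Tr T), step (1)
    alpha = evidence + 1.0                      # alpha = e + 1, step (2)
    strength = alpha.sum(dim=-1, keepdim=True)  # Dirichlet strength S = sum_k alpha_k
    belief = evidence / strength                # b_k = e_k / S, step (3)
    uncertainty = n / strength.squeeze(-1)      # u = N / S, step (3)
    return belief, uncertainty

# Toy batch of N paired image/text embeddings (stand-ins for the encoders' outputs).
img, txt = torch.randn(8, 512), torch.randn(8, 512)
belief, u = dirichlet_uncertainty(img, txt)
print(belief.shape, u.shape)   # (8, 8) belief masses, (8,) uncertainty per sample
# Sanity check: belief masses and uncertainty sum to one for each sample.
print(torch.allclose(belief.sum(-1) + u, torch.ones(8)))
```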
Therefore, we can work out the Dirichlet distribution parameters α_I2T =[ α_I2T,1 ,...,α_I2T,N] and α_T2I =[ α_T2I,1 ,...,α_T2I,N] for image-to-text and text-to-image, respectively, while obtaining the multinomial opinions D( p_I2T,i|α_I2T,i) and D( p_T2I,i|α_T2I,i), where p_I2T,i and p_T2I,i are the sample assignment probabilities on a simplex. The loss function for the reparameterized similarity matrix is defined as follows: L_Dl=L^I2T_Dl+L^T2I_Dl, where L^I2T_Dl=L^I2T_Dl-CE+λ∗ L_KL and L^T2I_Dl=L^T2I_Dl-CE+λ∗ L_KL. Here, L_Dl-CE (L^I2T_Dl-CE and L^T2I_Dl-CE) is used to ensure that the correct prediction for the sample with the highest similarity between image and text yields more evidence than other samples, while L_KL is used to ensure that incorrect predictions yield less evidence, and λ is a balance factor that is gradually increased so as to prevent the model from paying too much attention to the KL divergence in the initial stage of training, which might result in a lack of good exploration of the parameter space and cause the network to output a flat uniform distribution. L_Dl-CE=∫[ ∑^N_k=1 -y_klog( p_k) ] 1/β( α) ∏^N_k=1 p^α_k-1_k dp_k=∑^N_k=1 y_k( ψ( S_k) -ψ( α_k) ), where ψ(·) is the digamma function and β(·) is the multinomial beta function for the concentration parameter α. L_KL=log( Γ( ∑^N_k=1α̂_k) /( Γ( N) ∏^N_k=1Γ( α̂_k) ) ) +∑^N_k=1( α̂_k -1) [ ψ( α̂_k) -ψ( ∑^N_j=1α̂_j) ], where α̂ =y+( 1-y) ⊙α is the adjusted parameter of the Dirichlet distribution, which avoids penalizing the evidence of the ground-truth class to 0, and Γ(·) is the gamma function. In general, as shown in Eq. <ref> and Eq. <ref> to Eq. <ref>, the loss function we designed can guide the network to focus on the feature differences in image-text similarity while further improving its robustness by mapping the features of the image-text similarity matrix to the Dirichlet distribution space to guide model optimization. §.§ Definition of Dirichlet distribution The Dirichlet distribution is parameterized by its K concentration parameters α =[ α_1 ,...,α_K] <cit.>. The probability density function of the Dirichlet distribution is computed as: D( P|α) =1/β( α) ∏^K_k=1 p^α_k-1_k for P∈ S_K, and 0 otherwise, where S_K is the K-dimensional unit simplex: S_K={ P|∑^K_k=1 p_k=1} , 0≤ p_k≤ 1, and β( α) represents the K-dimensional multinomial beta function. § CODE AVAILABILITY The code is available at <https://github.com/LooKing9218/RetiZero>. § DATA AVAILABILITY The publicly available datasets used for pre-training are available at the following links and references: APTOS: <https://www.kaggle.com/c/aptos2019-blindness-detection>. Cataract: <https://www.kaggle.com/datasets/jr2ngb/cataractdataset>. DDR: <https://github.com/nkicsl/DDR-dataset>. Diabetic Retinopathy Level Detection: <https://www.kaggle.com/datasets/arbethi/diabetic-retinopathy-level-detection>. Diabetic Retinopathy Organized: <https://www.kaggle.com/datasets/dola1507108/diabetic-retinopathy-organized>. DR15: <https://www.kaggle.com/datasets/nawa393/dr15_test>. Messidor: <https://paperswithcode.com/dataset/messidor-1>. MURED: <https://www.kaggle.com/datasets/abhirampolisetti/multi-label-retinal-disease-mured-dataset>. Retina Dataset: <https://www.kaggle.com/datasets/jr2ngb/cataractdataset>. Kaggle DR: <https://www.kaggle.com/c/diabetic-retinopathy-detection/data>. ODIR5K: <https://www.kaggle.com/datasets/andrewmvd/ocular-disease-recognition-odir5k>.
ACRIMA <cit.>, BEH <cit.>, DeepDRiD <cit.>, DR1-2 <cit.>, E-ophta <cit.>, AIROGS <cit.>, DeepEyeNet <cit.>, FIVES <cit.>, G1020 <cit.>, Glaucoma dataset <cit.>, IDRiD <cit.>, JICHI <cit.>, REFUGE <cit.>, ORIGA <cit.>, PARAGUAY <cit.>, EyePACS AirDoc <cit.>, JSIEC <cit.>, RFMid <cit.>. Additional data sets supporting the findings of this study are not publicly available due to the confidentiality policy of the Chinese National Health Council and institutional patient privacy regulations. However, they are available from the corresponding authors upon request. For replication of the findings and/or further academic and AI-related research activities, data may be requested from the corresponding author H.C. (drchenhaoyu@gmail.com), and any requests will be responded to within 10 working days. Source data are provided in this paper. IEEEtran [figure]labelfont=bf,name=Supplementary Figure: ,labelsep=period [table]labelfont=bf,name=Supplementary Table: ,labelsep=period L>p0.4 C>p0.4 § SUPPLEMENTARY FILES L>p0.6 C>p0.2 p0.2|p0.1|p0.2|p0.5 Incidence/Prevalence of each category in EYE-52 dataset. Category Number Incidence /Prevalence Reference Acute Posterior Multifocal Placoid Pigment Epitheliopathy 24 I: 0.15 /100,000 <https://eyewiki.aao.org/Acute_Posterior_Multifocal_Placoid_Pigment_Epitheliopathy> Acute Retinal Necrosis 89 I: 0.063 /100,000 <https://www.aao.org/eyenet/article/diagnosis-and-treatment-of-acute-retinal-necrosis> Albinism 95 P: 0.667∼2 /100,000 <https://eyewiki.aao.org/Albinism> Angioid Streaks 83 P: 6.5 /100,000 <https://europepmc.org/article/med/37868801> Asteroid Hyalosis 150 P: 8 /1000 Behcet disease 17 P: 0.12 /100,000 <https://www.uptodate.com/contents/clinical-manifestations-and-diagnosis-of-behcet-syndrome> BEST disease 23 P: 0.787 /100,000 <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10910552> Bietti Crystalline Dystrophy 172 P: 1.493 /100,000 <https://medlineplus.gov/genetics/condition/bietti-crystalline-dystrophy/> Branch Retinal Vein Occlusion 200 P: 4.42 /1000 <https://emedicine.medscape.com/article/1223498-overview> Central Retinal Vein Occlusion 200 P: 1∼2 /1000 <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3178209/> Central Serous Chorioretinopathy 200 P: 9.9 /100,000 in males and 1.7/100,000 in females <https://pubmed.ncbi.nlm.nih.gov/22788735> Chorioretinal Coloboma 177 P: 5∼22 /100,000 <https://eyewiki.aao.org/Coloboma> Choroidal Metastasis 5 P: 2.3% to 9.2% in patients with cancer <https://www.e-retina.or.kr/journal/view.html?doi=10.21561/jor.2020.5.1.52> Choroidal Rupture 118 P: ∼10 /100,000 <https://www.opticianonline.net/content/features/choroidal-rupture> Commotio Retinae 80 I: 2.6% in orbital trauma <https://link.springer.com/referenceworkentry/10.1007/978-3-642-35951-4_979-1> Moderate Non-proliferative Diabetic Retinopathy 200 P: 10.6% in DR <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9319242> Severe Non-proliferative Diabetic Retinopathy 200 P: 1.2% in DR <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9319242> Proliferative Diabetic Retinopathy 200 P: 9.9% in DR <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9319242> Epiretinal Membrane 200 P: 2∼20% in various groups <https://emedicine.medscape.com/article/1223882-overview> Geographic Atrophy 300 P: 0.81% <https://www.reviewofoptometry.com/article/sizing-up-geographic-atrophy> Glaucoma 300 P: 3.54% <https://www.aaojournal.org/article/S0161-6420(14)00433-3/pdf> Hypertensive Retinopathy 200 P: 2∼17% in various groups <https://eyewiki.aao.org/Hypertensive_Retinopathy> Intraocular Foreign Body 7 I: 0.16 
/100,000 <https://www.ncbi.nlm.nih.gov/books/NBK576415> Leber Congenital Amaurosis 31 P: 2∼3 /100,000 <https://eyewiki.aao.org/Leber_Congenital_Amaurosis> Leukemic Retinopathy 12 P: 9∼90% in leukemia <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4687193> Lipemia Retinalis 22 P: 23% in chylomicronaemia <https://en.wikipedia.org/wiki/Lipaemia_retinalis> Macular Hole 200 I: 0.33% <https://emedicine.medscape.com/article/1224320-overview> Morning Glory Anomaly 131 P: 2.6 /100,000 <https://onlinelibrary.wiley.com/doi/full/10.1111/aos.12778> Multiple Evanescent White-Dot Syndromes 53 I: 0.45 /100,000 <https://www.reviewofophthalmology.com/article/an-update-on-white-dot-syndromes> Myelinated Nerve Fiber 515 P: 0.57% <https://pubmed.ncbi.nlm.nih.gov/2338989> Neovascular Age-related Macular Degeneration 200 P: 3% in the oldest <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10262804> Normal 200 unknown Optic Atrophy 140 I: 2.86 /100,000 <https://www.ncbi.nlm.nih.gov/books/NBK559130> Optic Disc Coloboma 65 P: 8.9 /100,000 <https://pubmed.ncbi.nlm.nih.gov/30549247> Pathologic Myopia 300 P: 0.2-3.8% <https://eyewiki.aao.org/Pathologic_Myopia_(Myopic_Degeneration)> Pigmented paravenous retinochoroidal atrophy 8 P: 0.1 /100,000 <https://www.orpha.net/en/disease/detail/251295> Polypoidal Choroidal Vasculopathy 300 P: 4∼9.8% in presumed AMD <https://eyewiki.aao.org/Polypoidal_Choroidal_Vasculopathy#Prevalence_and_Incidence:> Presumed Ocular Histoplasmosis Syndrome 6 I: 1.35 /100,000 in U.S. <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9361689> Punctate Inner Choroidopathy_Multifocal Choroiditis 317 I: 0.04 /100,000 <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8675391> Purtscher Retinopathy 13 P: 0.024 /100,000 <https://journals.lww.com/ajg/fulltext/2022/10002/s1815_purtscher_retinopathy__a_rare_clinical.1815.aspx> Retinal Arterial Macroaneurysm 96 P: 0.22 /1000 <https://www.aao.org/eyenet/article/diagnosis-of-retinal-arterial-macroaneurysm> Retinal Detachment 300 I: 7.79 /100,000 <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9922621> Retinal Racemose Hemangioma 6 unknown <https://www.aao.org/education/disease-review/retinal-hemangiomas> Retinitis Pigmentosa 300 P: 0.2∼0.33 /1000 <https://www.orpha.net/en/disease/detail/791> Retinoblastoma 22 P: 5∼6.67 /100,000 <https://www.ncbi.nlm.nih.gov/books/NBK1452> Roth Spots 59 P: 5% in infective endocarditis <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7417078/> Serpiginous Choroidopathy 35 P: 0.2∼5% in uveitis <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6407399> Stargardt disease 95 P: 10∼12.5 /100,000 <https://www.orpha.net/en/disease/detail/827> Syphilitic Uveitis 4 I: 1.25 /1000 in syphilis cohort <https://www.tandfonline.com/doi/full/10.1080/22221751.2023.2290836 > Valsalva Retinopathy 92 unkown <https://emedicine.medscape.com/article/1228106-overview?form=fpf> Vogt-Koyanagi-Harada disease 200 P: 1%-4% in U.S. <https://eyewiki.aao.org/Vogt-Koyanagi-Harada_Disease> X-linked Retinoschisis 45 P: 4∼20 /100,000 in males <https://www.orpha.net/en/disease/detail/792> Total 7,007 |p0.1|>p0.15|>p0.1|p0.2|>p0.45 Data details for RetiZero Pretraining. 1c|No. Dataset 1c|Images 1c|Diseases number 1cSource 1c|1 ACRIMA 1c|705 1c|2 Diaz-Pinto A, Morales,S, Naranjo V, et al. CNNs for automatic glaucoma assessment using fundus images: an extensive validation. Biomedical engineering online, 2019, 18:1-19. 
1c|2 APTOS 1c|3,662 1c|5 <https://www.kaggle.com/c/aptos2019-blindness-detection> 1c|3 BEH 1c|634 1c|2 Islam M T, Mashfu S T, Faisal A, et al.Deep learning-based glaucoma detection with cropped optic cup and disc and blood vessel segmentation. Ieee Access, 2021, 10: 2828-2841. 1c|4 Cataract 1c|601 1c|4 <https://www.kaggle.com/datasets/jr2ngb/cataractdataset > 1c|5 DDR 1c|13,673 1c|6 <https://github.com/nkicsl/DDR-dataset> 1c|6 DeepDRiD 1c|2,256 1c|5 Liu R, Wang X, Wu Q, et al. Deepdrid:Diabetic retinopathy—grading and image quality estimation challenge. Patterns, 2022, 3(6). 1c|7 Diabetic Retinopathy Level Detection 1c|4,396 1c|5 <https://www.kaggle.com/datasets/arbethi/diabetic-retinopathy-level-detection> 1c|8 Diabetic Retinopathy Organized 1c|35,128 1c|5 <https://www.kaggle.com/datasets/dola1507108/diabetic-retinopathy-organized> 1c|9 DR1-2 1c|1,597 1c|7 Pires R, Jelinek H F,Wainer J, et al. Advancing bag-of-visual-words representations for lesion classification in retinal images. PloS one, 2014, 9(6): e96814. 1c|10 DR15 1c|34,043 1c|5 <https://www.kaggle.com/datasets/nawa393/dr15_test> 1c|11 E-ophta 1c|463 1c|2 Decenciere E, Cazuguel G, Zhang X, et al.TeleOphta: Machine learning and image processing methods for teleophthalmology. Irbm, 2013, 34(2): 196-203. 1c|12 AIROGS 1c|101,433 1c|2 De Vente C, Vermeer K A, Jaccard N, et al. AIROGS: artificial intelligence for robust glaucoma screening challenge.IEEE transactions on medical imaging, 2023. 1c|13 DeepEyeNet 1c|6,048 1c|=13 Huang J H, Yang C H H, Liu F, et al. Deepopht: medical report generation for retinal images via deep models and visual explanation. Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2021: 2442-2452. 1c|14 FIVES 1c|800 1c|6 Jin K, Huang X, Zhou J, et al. Fives: A fundus image dataset for artificial Intelligence based vessel segmentation. Scientific Data, 2022, 9(1): 475. 1c|15 G1020 1c|1020 1c|2 Bajwa M N, Singh G A P, Neumeier W, et al. G1020: A benchmark retinal fundus image dataset for computer-aided glaucoma detection. 2020 International Joint Conference on Neural Networks&nbsp (IJCNN). IEEE, 2020: 1-7. 1c|16 Glaucoma dataset 1c|364 1c|2 Anushikha Singh, Malay Kishore Dutta, M.ParthaSarathi, Vaclav Uher and Radim Burget, “Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image” Computer&amp;amp;methods and programs in biomedicine, Vol. 124, pp. 108–120, Feb. 2016, doi:10.1016/j.cmpb.2015.10.010. Ashish Issac, M.Partha Sarathi and Malay Kishore Dutta, “An adaptive threshold based image processing technique for improved glaucoma detection and classification” Computer methods and programs in biomedicine, Vol. 122 No. 2, pp. 229-244, Nov. 2015, doi: 10.1016/j.cmpb.2015.08.002. 1c|17 IDRiD 1c|516 1c|5 Porwal P, Pachade S, Kokare M, et al. Idrid: Diabetic retinopathy–segmentation and grading challenge. Medical image analysis, 2020, 59: 101561. 1c|18 JICHI 1c|9,939 1c|5 Takahashi H, Tampo H, Arai Y, et al. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy. PloS one, 2017, 12(6): e0179790. 1c|19 Messidor 1c|1,200 1c|4 <https://paperswithcode.com/dataset/messidor-1> 1c|20 MURED 1c|2,208 1c|9 <https://www.kaggle.com/datasets/abhirampolisetti/multi-label-retinal-disease-mured-dataset> 1c|21 REFUGE 1c|800 1c|2 Orlando J I, Fu H, Breda J B, et al. Refuge challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. 
Medical image analysis, 2020, 59: 101570. 1c|22 ORIGA 1c|650 1c|2 Zhang Z, Yin F S, Liu J, et al. Origa-light: An online retinal fundus image database for glaucoma analysis and research. 2010 Annual international conference of the IEEE engineering in medicine and biology. IEEE, 2010: 3065-3068. 1c|23 PARAGUAY 1c|757 1c|7 Benítez V E C, Matto I C, Román J C M, et al. Dataset from fundus images for the study of diabetic retinopathy. Data in brief, 2021, 36: 107068. 1c|24 Retina Dataset 1c|601 1c|4 <https://www.kaggle.com/datasets/jr2ngb/cataractdataset> 1c|25 Kaggle 1c|35,126 1c|5 <https://www.kaggle.com/c/diabetic-retinopathy-detection/data> 1c|26 EyePACS AirDoc 1c|33,978 1c|53 Ju L, Wang X, Wang L, et al. Improving medical images classification with label noise using dual-uncertainty estimation. IEEE transactions on medical imaging, 2022, 41(6): 1533-1546. 1c|27 JSIEC 1c|1,000 1c|39 Cen L P, Ji J, Lin J W, et al. Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nature communications, 2021, 12(1): 4828. 1c|28 RFMid 1c|2,531 1c|46 Pachade S, Porwal P, Thulkar D, et al. Retinal fundus multi-disease image dataset (rfmid): A dataset for multi-disease detection research. Data, 2021, 6(2): 14. 1c|29 ODIR5K 1c|7,000 1c|=7 <https://www.kaggle.com/datasets/andrewmvd/ocular-disease-recognition-odir5k> 1c|30 Ophthalmic Book 1c|23,228 1c|400 180 Ophthalmic books 1c|31 Online Resources 1c|15,544 2cOnline Resources 2c|Total 3c341,896 p0.15|>p0.7|>p0.15 Diseases category information of the data from 180 ophthalmic books. 1c|No. 1|c|Ophthalmic labels 1|cNumber 1c|1 Achromatopsia 1 1c|2 Acute exudative polymorphous vitelliform maculopathy 32 1c|3 Acute idiopathic blind spot enlargement 2 1c|4 Acute Idiopathic Maculopathy (AIM) 23 1c|5 Acute Macular Neuroretinopathy (AMN) 25 1c|6 Acute Posterior Multifocal Placoid Pigment Epitheliopathy (APMPPE) 128 1c|7 Acute Retinal Necrosis (ARN) 280 1c|8 Acute Zonal Occult Outer Retinopathy (AZOOR) 37 1c|9 Adult-Onset Foveomacular Vitelliform Dystrophy (AFVD) 30 1c|10 Albinism 100 1c|11 Alport retinopathy 4 1c|12 Amblyopia 5 1c|13 Amyloidosis 23 1c|14 Anemic chorioretinopathy 34 1c|15 Angioid streaks 153 1c|16 Annular choroidal dystrophy 13 1c|17 Anterior Ischemic Optic Neuropathy (AION) 51 1c|18 Arc Welder's Maculopathy 2 1c|19 Arteriosclerotic changes 1 1c|20 Arteritic Anterior Ischemic Optic Neuropathy (AAION) 49 1c|21 Asteroid hyalosis 23 1c|22 Asteroid macular dystrophy 2 1c|23 Autosomal Dominant Vitreoretinochoroidopathy 14 1c|24 Behcet disease 137 1c|25 Bergmeister papilla 18 1c|26 Best disease 301 1c|27 Bietti Crystalline Dystrophy (BCD) 47 1c|28 Bilateral Diffuse Uveal Melanocytic Proliferation (BDUMP) 25 1c|29 Birdshot chorioretinopathy 90 1c|30 Blood–brain barrier disruption maculopathy 5 1c|31 Blue cone monochromatism 1 1c|32 Bothnia retinal dystrophy 8 1c|33 Branch Retinal Atery Occlusion (BRAO) 190 1c|34 Branch Retinal Vein Occlusion (BRVO) 326 1c|35 Bull eye maculopathy 41 1c|36 Cancer-associated retinopathy 2 1c|37 Carotid-cavernous fistula 1 1c|38 Central Areolar Choroidal Dystrophy (CACD) 57 1c|39 Central Retinal Artery Occlusion (CRAO) 192 1c|40 Central Retinal Vein Occlusion (CRVO) 337 1c|41 Central Serous Chorioretinopathy (CSCR) 230 1c|42 Cherry-red spot 34 1c|43 Chikungunya retinitis 8 1c|44 Chorioretinal atrophy 37 1c|45 Chorioretinitis 60 1c|46 Chorioretinitis sclopetaria 18 1c|47 Choroidal coloboma 126 1c|48 Choroidal degeneration 9 1c|49 Choroidal detachment 60 1c|50 Choroidal folds 81 
1c|51 Choroidal granuloma 33 1c|52 Choroidal hemorrhage 21 1c|53 Choroidal infarction 8 1c|54 Choroidal infiltration 1 1c|55 Choroidal lesion 24 1c|56 Choroidal Neovasculization (CNV) 203 1c|57 Choroidal nevus 165 1c|58 Choroidal rupture 95 1c|59 Choroidal scar 4 1c|60 Choroidal sclerosis 3 1c|61 Choroidal tubercle 36 1c|62 Choroideremia 114 1c|63 Choroiditis 96 1c|64 Chronic granulomatous disease 3 1c|65 Cilioretinal artery occlusion 33 1c|66 Coats disease 208 1c|67 Cobblestone degeneration 14 1c|68 Combined retinal artery and vein occlusion (RAVO) 17 1c|69 Commotio Retinae 60 1c|70 Compressive optic neuropathy 10 1c|71 Cone-rod dystrophy 139 1c|72 Congenital achromatopsia 2 1c|73 Congenital grouped retinal pigment epithelium albinotic nevi 7 1c|74 Congenital Hypertrophy of the Retinal Pigment Epithelium (CHRPE) 208 1c|75 Congenital optic disc pigmentation 2 1c|76 Congenital retinal macrovessels 8 1c|77 Contusion injury 18 1c|78 Cotton Wool Spots (CWS) 41 1c|79 Crystalline retinopathy 33 1c|80 Cuticular drusen 5 1c|81 Cystoid degeneration 3 1c|82 Dalen-Fuchs nodule 2 1c|83 Dark without pressure 1 1c|84 Decompression retinopathy 7 1c|85 Diffuse Unilateral Subacute Neuroretinitis (DUSN) 91 1c|86 Disciform lesions 3 1c|87 Dislocated lens 25 1c|88 Doyne Honeycomb Retinal Dystrophy (DHRD) 84 1c|89 Dragged vessels 5 1c|90 Drusen 223 1c|91 Drusenoid PED 2 1c|92 Dry Age-related Macular Degeneration (dry AMD) 160 1c|93 Eales disease 84 1c|94 Endophthalmitis 52 1c|95 Enhanced S Cone Syndrome (ESCS) 17 1c|96 Enlarged optic cup 2 1c|97 Epiretinal Membrane (ERM) 243 1c|98 Erosive vitreoretinopathy 5 1c|99 Exudative retinal detachment 35 1c|100 Familial dominant drusen 3 1c|101 Familial Exudative Vitreoretinopathy (FEVR) 148 1c|102 Familial Flecked Retina syndrome 33 1c|103 Familial internal limiting membrane dystrophy 8 1c|104 Familial optic neuropathy 11 1c|105 Foveal hypoplasia 10 1c|106 Frosted-branch angiitis 51 1c|107 Fundus Albipunctatus 42 1c|108 Fundus Flavimaculatus 30 1c|109 fundus heterochromia 1 1c|110 Gas bubble 22 1c|111 Geographic Helicoid Peripapillary Choroidopathy (GHPC) 5 1c|112 Giant Retinal Tear (GRT) 19 1c|113 Glaucoma 154 1c|114 Glob perforation 1 1c|115 Goldmann-Favre disease 9 1c|116 Granuloma 19 1c|117 Gyrate Atrophy 72 1c|118 Hamartoma 19 1c|119 Hard exudate 22 1c|120 Heavy liquid droplet 5 1c|121 Hemangioblastoma 34 1c|122 High-altitude retinopathy 3 1c|123 Hyperlipidemia 7 1c|124 Hyperoxaluria 7 1c|125 Hypertensive retinopathy 229 1c|126 Hyperviscosity syndrome 3 1c|127 Hypotony retinopathy 25 1c|128 Idiopathic obliterative arteritis 4 1c|129 Idiopathic sclerochoroidal calcification 7 1c|130 Idiopathic vasculitis Aneurysms and Neuroretinitis syndrome (IRVAN) 24 1c|131 Incontinentia pigmenti 60 1c|132 Infection 226 1c|133 Infection Angiostrongyliasis 2 1c|134 Infection Calliphoridae 4 1c|135 Infection Candidiasis 76 1c|136 Infection Cat-Scratch Disease 78 1c|137 Infection CMV retinitis 236 1c|138 Infection Coccidiomycosis 9 1c|139 Infection Cryptococcosis 6 1c|140 Infection Cysticercosis 74 1c|141 Infection Dengue retinopathy 21 1c|142 Infection Filariasis 4 1c|143 Infection Gnathostomiasis 5 1c|144 Infection HIV retinopathy 57 1c|145 Infection Lyme disease 3 1c|146 Infection Malaria retinopathy 3 1c|147 Infection Nematode 4 1c|148 Infection Onchocerciasis 2 1c|149 Infection Ophthalmomyiasis interna 16 1c|150 Infection Presumed Ocular Histoplasmosis syndrome (POHS) 184 1c|151 Infection Rift Valley fever retinitis 5 1c|152 Infection Rubella retinopathy 17 1c|153 Infection 
Syphilis 15 1c|154 Infection syphilitic chorioretinopathy 112 1c|155 Infection Toxocariasis 116 1c|156 Infection Toxoplasmosis 342 1c|157 Infection Trematode 8 1c|158 Infection Tuberculosis 24 1c|159 Intermediate uveitis 106 1c|160 Intraocular Foreign Body (IOFB) 68 1c|161 Intraocular Garamycin 9 1c|162 Laser spots 45 1c|163 Late-onset retinal macular degeneration (LORMD) 5 1c|164 Lattice degeneration 113 1c|165 Leber Congenital Amaurosis (LCA) 65 1c|166 Leber Hereditary Optic Neuropathy (LHON) 46 1c|167 Leber idiopathic stellate neuroretinitis 25 1c|168 Leber miliary aneurysm 6 1c|169 Leukemic retinopathy 157 1c|170 Lipemia retinitis 14 1c|171 Lipid deposition 1 1c|172 Luetic chorioretinitis 11 1c|173 Lupus retinopathy 76 1c|174 Lymphoma 206 1c|175 Macropapilla 1 1c|176 Macular atrophy 8 1c|177 Macular dysplasia 1 1c|178 Macular dystrophy 44 1c|179 Macular edema 82 1c|180 Macular Hole (MH) 247 1c|181 Macular infarction syndrome 1 1c|182 Macular scar 1 1c|183 Macular telangiectasia 60 1c|184 Maternally inherited diabetes and deafness (MIDD) 6 1c|185 Megalopapilla 10 1c|186 Microcystoid degeneration 2 1c|187 micropapilla 1 1c|188 Microphthalmos 1 1c|189 Mitochondrial retinal Dystrophy 7 1c|190 Morning Glory syndrome (MGS) 83 1c|191 Multifocal Choroiditis (MFC) 115 1c|192 Multiple Evanescent White Dot Syndrome (MEWDS) 67 1c|193 Myelinated Nerve Fiber (MNF) 90 1c|194 Myopia 46 1c|195 Nanophthalmos 1 1c|196 Neovascular Age-related Macular Degeneration (nAMD) 320 1c|197 Neovascularization 59 1c|198 Neuroretinitis 72 1c|199 Newfoundland rod–cone degeneration (NFRCD) 3 1c|200 Nicotinic acid maculopathy 2 1c|201 Nonarteritic Anterior Ischemic Optic Neuropathy (NAION) 72 1c|202 Non-Proliferative Diabetic Retinopathy (NPDR) 363 1c|203 Normal 186 1c|204 Norrie Disease 4 1c|205 North Carolina Macular Dystrophy (NCMD) 108 1c|206 Ocular Ischemic Syndrome (OIS) 68 1c|207 Ocular melanocytosis 5 1c|208 Ophthalmic artery occlusion 8 1c|209 Optic disc anomaly 30 1c|210 Optic disc aplasia 3 1c|211 Optic disc atrophy 172 1c|212 Optic disc avulsion 29 1c|213 Optic disc coloboma 72 1c|214 Optic disc drusen 82 1c|215 Optic disc dysplasia 3 1c|216 Optic disc granuloma 11 1c|217 Optic disc hemorrhage 11 1c|218 Optic disc hyaline body 15 1c|219 Optic disc hypoplasia 65 1c|220 Optic disc metastasis 21 1c|221 Optic disc pallor 29 1c|222 Optic disc pit 130 1c|223 Optic neuritis 89 1c|224 Optic neuropathy 12 1c|225 Outer retinal corrugation 4 1c|226 Overlapping WDS 10 1c|227 Panuveitis 3 1c|228 Papilloedema 332 1c|229 Papillomegaly 8 1c|230 Papillophlebitis 6 1c|231 Papillorenal syndrome 13 1c|232 Paraneoplastic vitelliform dystrophy 2 1c|233 Paraneoplastic-Related Retinopathy 11 1c|234 Pathological Myopia (PM) 692 1c|235 Pattern Dystrophy 152 1c|236 Pearl degeneration 1 1c|237 Peripapillary atrophy 2 1c|238 Peripheral exudative hemorrhagic chorioretinopathy 9 1c|239 Perivasculitis 51 1c|240 Persistent Hyperplastic Primary Vitreous (PHPV) 60 1c|241 Persistent Placoid Maculopathy (PPM) 13 1c|242 Pigment Epithelial Detachment (PED) 97 1c|243 Pigmentary retinopathy 117 1c|244 Pigmented Paravenous Chorioretinal Atrophy (PPCRA) 9 1c|245 Polypoidal Choroidal Vasculopathy (PCV) 76 1c|246 Posterior microphthalmos 2 1c|247 Posterior placoid chorioretinitis 4 1c|248 Posterior scleritis 60 1c|249 Posterior staphyloma 18 1c|250 Posterior uveitis 16 1c|251 Posterior Vitreous Detachment (PVD) 34 1c|252 Pregnancy-associated retinopathy 53 1c|253 Pre-retinal hemorrhage 47 1c|254 Progressive dominantly inherited dystrophy 21 1c|255 
Progressive Outer Retinal Necrosis (PORN) 39 1c|256 Progressive subretinal fibrosis and uveitis syndrome 8 1c|257 Progressive systemic sclerodermic retinopathy 1 1c|258 Proliferative Diabetic Retinopathy (PDR) 546 1c|259 Proliferative Vitreoretinopathy (PVR) 201 1c|260 Pseudopapilloedema 9 1c|261 Pseudovitelliform detachment 23 1c|262 Punctate Inner Choroidopathy (PIC) 54 1c|263 Purtscher retinopathy 68 1c|264 Radiation retinopathy 65 1c|265 Reactive lymphoid hyperplasia 6 1c|266 Relentless Placoid Chorioretinitis (RPC) 9 1c|267 Reticular pseudodrusen 15 1c|268 Retinal Angiomatous Proliferation (RAP) 42 1c|269 Retinal arteriovenous malformation 11 1c|270 Retinal artery macroaneurysm 151 1c|271 Retinal break 15 1c|272 Retinal cyst 2 1c|273 Retinal Detachment (RD) 463 1c|274 Retinal dystrophy 7 1c|275 Retinal folds 19 1c|276 Retinal hemorrhage 50 1c|277 Retinal infarction 8 1c|278 Retinal infiltrates 2 1c|279 Retinal ischemia 9 1c|280 Retinal pigment epitheliitis 13 1c|281 Retinal sheathing 8 1c|282 Retinal tear 208 1c|283 Retinal telangiectasia 70 1c|284 Retinal tuft 19 1c|285 Retinitis 54 1c|286 Retinitis Pigmentosa (RP) 302 1c|287 Retinitis punctata albescens 13 1c|288 Retinopathy of Prematurity (ROP) 570 1c|289 Retinoschisis 122 1c|290 Retrobulbar neuritis 1 1c|291 Roth spots 15 1c|292 RPE atrophy 13 1c|293 RPE epithelioma 3 1c|294 RPE hyperplasia 14 1c|295 RPE nevus 7 1c|296 RPE tear 8 1c|297 Sarcoidosis 11 1c|298 Scleritis 1 1c|299 Sclerochoroidal calcification 7 1c|300 Serous retinal detachment 7 1c|301 Serpiginous choroidopathy 143 1c|302 Shaken baby syndrome 63 1c|303 Sickle cell retinopathy 172 1c|304 Silicone oil 46 1c|305 Sjögren Reticular Dystrophy 11 1c|306 Sjögren–Larssen syndrome 2 1c|307 Snailtrack degeneration 18 1c|308 Snowflake degeneration 6 1c|309 Solar/Laser Maculopathy 78 1c|310 Sorsby Pseudoinflammatory Fundus Dystrophy (SPFD) 49 1c|311 Staphyloma 4 1c|312 Stargardt disease 267 1c|313 Stationary night blindness 11 1c|314 Submacular abscesses 1 1c|315 Subretinal fibrosis 14 1c|316 Subretinal fibrosis and uveitis syndrome 28 1c|317 Subretinal hemorrhage 32 1c|318 Sub-RPE hemorrhage 1 1c|319 Suprachoroidal hemorrhage 2 1c|320 Susac syndrome 11 1c|321 Sympathetic ophthalmia 56 1c|322 Synchysis scintillans 2 1c|323 Systemic diseases 70 1c|324 Takayasu retinopathy 4 1c|325 Talc retinopathy 18 1c|326 Terson syndrome 28 1c|327 Tilted disc 44 1c|328 Torpedo maculopathy 8 1c|329 Toxicity 423 1c|330 Toxicity Chalcosis 3 1c|331 Toxicity Siderosis 3 1c|332 Toxicity Tacrolimus microangiopathy 10 1c|333 Tractional retinal detachment 1 1c|334 Transplant-associated retinopathy 7 1c|335 Trauma 81 1c|336 Trauma electrocution retinopathy 3 1c|337 Trauma gunshot 2 1c|338 Trauma optic neuropathy 3 1c|339 Trauma retinal pigment epitheliopathy 6 1c|340 Trauma retinopathy 7 1c|341 Tumor 15 1c|342 Tumor adenocarcinoma of the RPE 3 1c|343 Tumor adenoma of the ciliary body pigment epithelium 2 1c|344 Tumor adenoma of the RPE 19 1c|345 Tumor astrocytic hamartoma 4 1c|346 Tumor cavernous hemangioma 44 1c|347 Tumor choroidal hemangioma 182 1c|348 Tumor choroidal melanoma 512 1c|349 Tumor choroidal metastasis 213 1c|350 Tumor choroidal osteoma 114 1c|351 Tumor choroidal plasmacytoma 2 1c|352 Tumor ciliochoroidal melanoma 29 1c|353 Tumor combined hamartoma of the retina and RPE 117 1c|354 Tumor congenital simple hamartoma of the RPE 2 1c|355 Tumor leiomyoma 1 1c|356 Tumor medulloepithelioma 6 1c|357 Tumor melanocytoma 6 1c|358 Tumor metastasis 4 1c|359 Tumor myeloma 2 1c|360 Tumor optic disc 
astrocytic hamartoma 5 1c|361 Tumor optic disc astrocytoma 6 1c|362 Tumor optic disc capillary hemangioma 15 1c|363 Tumor optic disc glioblastoma 2 1c|364 Tumor optic disc glioma 4 1c|365 Tumor optic disc hemangioblastoma 5 1c|366 Tumor optic disc hemangioma 2 1c|367 Tumor optic disc melanocytoma 104 1c|368 Tumor optic disc melanoma 3 1c|369 Tumor optic disc meningioma 5 1c|370 Tumor retinal angioma 1 1c|371 Tumor retinal astrocytic hamartoma 111 1c|372 Tumor retinal astrocytoma 59 1c|373 Tumor retinal capillary hemangioma 220 1c|374 Tumor retinal cavernous hemangioma 39 1c|375 Tumor retinal hamartoma 2 1c|376 Tumor retinal melanocytoma 1 1c|377 Tumor retinal metastasis 22 1c|378 Tumor retinal racemose hemangioma 25 1c|379 Tumor Retinoblastoma (RB) 513 1c|380 Tumor retinocytoma 7 1c|381 Tumor RPE hamartomas 3 1c|382 Tumor teratoma 1 1c|383 Tumor uveal hemangiopericytoma 1 1c|384 Tumor uveal melanoma 2 1c|385 Tumor uveal metastasis 1 1c|386 Tumor uveal schwannoma 3 1c|387 Tumor Vasoproliferative Tumor (VPT) 48 1c|388 Type 1 aneurysmal telangiectasis 1 1c|389 Type 2 aneurysmal telangiectasis 3 1c|390 Unifocal helioid choroiditis 1 1c|391 Uveal benign reactive lymphoid hyperplasia 14 1c|392 Uveal Effusion Syndrome (UES) 46 1c|393 Uveitis 46 1c|394 Valsalva retinopathy 59 1c|395 Vascular anomaly 42 1c|396 Vascular sheathing 4 1c|397 Vascular tortuosity 40 1c|398 Vasculitis 99 1c|399 Venous stasis retinopathy 25 1c|400 Vessel shunt 31 1c|401 Vitreomacular Traction syndrome (VMT) 51 1c|402 Vitreous base avulsion 3 1c|403 Vitreous cyst 9 1c|404 Vitreous Hemorrhage (VH) 71 1c|405 Vitreous liquefaction 4 1c|406 Vitreous opacity 15 1c|407 Vitritis 25 1c|408 Vogt-Koyanagi-Harada disease (VKH) 283 1c|409 West African crystalline maculopathy 9 1c|410 West Indies crinkled retinal pigment epitheliopathy 3 1c|411 White with pressure 4 1c|412 White without pressure 10 1c|413 Xerophthalmia 5 1c|414 X-linked juvenile retinoschisis (XLRS) 153 2l|Total 23,228
http://arxiv.org/abs/2406.08702v1
20240613000020
VLind-Bench: Measuring Language Priors in Large Vision-Language Models
[ "Kang-il Lee", "Minbeom Kim", "Seunghyun Yoon", "Minsung Kim", "Dongryeol Lee", "Hyukhun Koh", "Kyomin Jung" ]
cs.AI
[ "cs.AI", "cs.CL", "cs.CV" ]
VLind-Bench: Measuring Language Priors in Large Vision-Language Models
=======================================================================
§ ABSTRACT Large Vision-Language Models (LVLMs) have demonstrated outstanding performance across various multimodal tasks. However, they suffer from a problem known as language prior, where responses are generated based solely on textual patterns while disregarding image information. Addressing the issue of language prior is crucial, as it can lead to undesirable biases or hallucinations when dealing with images that are out of training distribution. Despite its importance, current methods for accurately measuring language priors in LVLMs are poorly studied. Although existing benchmarks based on counterfactual or out-of-distribution images can partially be used to measure language priors, they fail to disentangle language priors from other confounding factors. To this end, we propose a new benchmark called VLind-Bench, which is the first benchmark specifically designed to measure the language priors, or blindness, of LVLMs. It not only includes tests on counterfactual images to assess language priors but also involves a series of tests to evaluate more basic capabilities such as commonsense knowledge, visual perception, and commonsense biases. For each instance in our benchmark, we ensure that all these basic tests are passed before evaluating the language priors, thereby minimizing the influence of other factors on the assessment. The evaluation and analysis of recent LVLMs in our benchmark reveal that almost all models exhibit a significant reliance on language priors, presenting a strong challenge in the field.
§ INTRODUCTION Recent Large Vision-Language Models (LVLMs) have demonstrated remarkable performance across various tasks through pre-training on massive multimodal datasets and visual instruction tuning <cit.>. However, these models tend to generate responses based solely on spurious text patterns, leaving the given image unconsidered. We refer to this problem as language prior, borrowing the term from the Visual Question Answering (VQA) community <cit.>. Such language priors can lead to undesirable biases <cit.> and hallucinations <cit.>. For example, when a model is presented with an image of a red banana and a yellow apple along with the question, “Is the banana yellow?,” it has been observed that the model frequently responds with “Yes,” ignoring the image content <cit.>. To develop a trustworthy LVLM, resolving the language prior issue is crucial; however, the issue has not been explored much, nor are there benchmarks that can accurately measure it. One approach to measure language priors is assessing performance on VQA benchmarks consisting of counterfactual images (e.g., Whoops! <cit.> and ROME <cit.>). If a model bears language priors, it will answer the question based on learned facts or common sense from its parametric knowledge without incorporating information from the given context (i.e., the image), and will therefore easily fail on counterfactual VQA tasks. However, it is challenging to distinguish the models' misbehaviors solely caused by language priors from those caused by other deficiencies in LVLMs. For example, there could be multiple factors affecting performance in counterfactual-content VQA tasks – not only language priors but also commonsense knowledge, visual perception capabilities, and the model's reluctance to give counterfactual responses. 
Such confounding factors make it difficult to evaluate methodologies for improving language prior problems and to assess progress in the research field. In this paper, we propose VLind-Bench, the first benchmark that can accurately measure the language priors, or blindness, of various LVLMs and disentangle the root causes of their failures. To precisely measure language priors, it is necessary to create test instances that models fail if and only if they rely on language priors. For this purpose, we meticulously design a sequence of tests and measure the accuracy on each of them (Figure <ref> (a)). Specifically, each instance in VLind-Bench involves four tests that can check whether a model possesses (1) commonsense knowledge, (2) visual perception, (3) commonsense bias, and (4) language prior. The first three serve as a sanity check performed before the test of language prior, which is the ultimate goal of our benchmark (Figure <ref> (b)). To the best of our knowledge, existing benchmarks can only show the individual task-level performance of LVLMs. With VLind-Bench, we evaluate recent open-source and proprietary LVLMs' language priors. The results show that all of the models except GPT-4o <cit.> suffer from excessive reliance on language priors, demonstrating the challenging nature of our benchmark and the need for further improvements. Furthermore, our experiment and analysis on existing LVLMs show that the influence of language priors is inversely proportional to the scale of the backbone LLM. We also reveal that Reinforcement Learning from Human Feedback (RLHF) techniques <cit.>, which are designed to mitigate hallucinations, can help reduce the reliance on language priors. § RELATED WORK §.§ Large Vision-Language Models Recently, there has been a lot of effort in extending Large Language Models (LLMs) to include visual inputs, forming a new class of models known as Large Vision-Language Models (LVLMs) <cit.>. These LVLMs are gaining attention as a new paradigm in vision-language learning by transferring the exceptional properties of LLMs, such as multi-step reasoning ability and in-context learning, to the multimodal domain. However, these LVLMs are not free from the bias and hallucination issues inherent in LLMs <cit.>. Despite this, creating benchmarks to diagnose these problems is more challenging with the image modality, leading to slower progress in benchmark development compared to LLMs. §.§ Benchmarks with Counterfactual Context Since counterfactual contexts can assess the robustness and generalization capabilities of LLMs or LVLMs, several benchmarks utilizing this approach have been proposed. These benchmarks assume that if a model responds based on memorized facts without properly understanding the context of text or images, it would fail to correctly solve tasks conditioned on counterfactual contexts. Benchmarks such as IfQA <cit.> and DisentQA <cit.> counterfactually augment textual contexts to determine whether the language model accurately incorporates augmented information when answering questions. <cit.> evaluate LLMs on reasoning tasks based on counterfactual contexts. Benchmarks like Whoops! <cit.> and ROME <cit.> evaluate the counterfactual reasoning abilities of multimodal models by conducting VQA tasks conditioned on counterfactual images. However, these benchmarks cannot disentangle the reliance on language priors and commonsense biases of a model, as described in section <ref>. 
§ BENCHMARK STRUCTURE VLind-Bench conducts four types of assessments, each designed to test different capabilities, as illustrated in Figure <ref> (a). By providing multiple tests concerning the exact same image or text that are used in the language prior test, it is possible to check if the model has the essential abilities to make the language prior test meaningful. Depending on the problem's characteristics, each test utilizes one of two images, either factual or counterfactual, as input. First, we provide a counterfactual image along with two statements and evaluate whether the model can correctly classify these statements as true or false based on the image (Figure <ref> (a) - iv: Language Prior). If the model relies on language priors, it will not incorporate the counterfactual circumstances presented in the image into its reasoning, achieving low performance on this test. However, merely answering questions about counterfactual images is insufficient to accurately measure the language priors due to several confounding factors. Firstly, when a model fails a task involving a counterfactual image, it is unclear whether this failure is due to the model's reliance on language priors or because the model possesses commonsense bias. Here, commonsense bias refers to the tendency of models, including unimodal language models, to avoid responding in ways that contradict common sense. Therefore, we evaluate whether the model can overcome such commonsense bias regardless of modality, by providing the model with the image and a text description of the image as input (Figure <ref> (a) - iii: Commonsense Bias). Additionally, the failure in the counterfactual task might stem from an inability to recognize the objects in the counterfactual image. Conversely, the model may simply lack common sense and pass the test merely by chance. To this end, we provide two tests to check commonsense knowledge and visual perception abilities. The statements used for checking commonsense knowledge are identical to those for language priors, but factual images are given instead of counterfactual images, and the models are instructed to evaluate the truth values based on common sense (Figure <ref> (a) - i: Commonsense Knowledge). In the case of visual perception, counterfactual images are still used; however, the statements are designed to assess the model's ability to recognize objects (Figure <ref> (a) - ii: Visual Perception). If a model fails any test assessing its basic ability, evaluating it on more complex tests that rely on that basic ability would be meaningless. Therefore, the evaluation of our benchmark proceeds sequentially, starting with easier problems that assess fundamental abilities and gradually advancing to more difficult problems that are counterfactual and multimodal in nature (Figure <ref> (b)). This pipelined evaluation paradigm could be more universally applied, not only for measuring language priors but also for more accurately assessing the varying capabilities of AI systems. §.§ Commonsense Knowledge (CK) First, it is essential to verify whether the model possesses commonsense knowledge about the instances of the benchmark. This step allows us to determine whether the model's success at counterfactual tests is genuine or due to a lack of common sense. Therefore, we introduce a Commonsense Knowledge test (CK) to assess the model's commonsense knowledge about the given instances. Specifically, the CK comprises one image I_fact and two statements s_fact and s_cf. 
The image I_fact depicts a factual circumstance that aligns with common sense (e.g., an image of the Statue of Liberty). Among the two statements, s_fact is a factual statement that is true based on real-world common sense (e.g., “The Statue of Liberty is holding a torch.”), while s_cf is a counterfactual statement that is false (e.g., “The Statue of Liberty is holding a sword.”). Also, we use the prompt template, pr_CK, to instruct the LVLM to evaluate the truth value of the input text based on common sense. To pass the CK, the model must accurately predict the truth value of both statements: P_CK = 1(LVLM(I_fact, pr_CK(s_fact))=“True”LVLM(I_fact, pr_CK(s_cf))=“False”), where P_CK indicates whether the model passed CK or not. LVLM(i, t) is a composition of two functions: one that maps the image input i and text input t to the LVLM's response, and another that maps the LVLM's response to “True” or “False” using a string match. §.§ Visual Perception (VP) The fundamental ability underpinning all multimodal tasks is visual perception, particularly the ability to recognize objects <cit.>. Similar to the CK, evaluating a model on more complex tasks would be meaningless when it fails in object recognition. Therefore, we introduce the Visual Perception test (VP) to assess whether LVLMs can recognize objects in a given counterfactual image. VP consists of one counterfactual image I_cf and two statements s_exist and s_nil. Contrary to the CK, the image I_cf shows a counterfactual scene, which contradicts the world knowledge or common sense (e.g., an image of the Statue of Liberty holding a sword). The reason for using counterfactual images is that the VP needs to evaluate visual perception capabilities on the same images that are used for language prior assessments, where the use of counterfactual images is essential. In VP, both the two statements say that “There is object in the image.”, while the objects are set such that s_exist is true and s_nil is false under the given image (e.g., “There is the Statue of Liberty.” and “There is umbrella.”). To this end, we define P_VP to indicate whether the model passed VP, with a prompt template pr_VP to instruct the models to evaluate the truth value of input text based on the given image. The indicator for passing the VP, P_VP, is defined similarly: P_VP = 1(LVLM(I_cf, pr_VP(s_exist))=“True”LVLM(I_cf, pr_VP(s_nil))=“False”) §.§ Commonsense Bias (CB) It has been observed that LVLMs, including LLMs, exhibit a reluctance to provide responses that contradict common sense or learned world knowledge, even when they are explicitly instructed to respond based on counterfactual contexts <cit.>. We propose a Commonsense Bias test (CB) to disentangle this bias from language priors, which is the goal of this benchmark. To eliminate the influence of modality in the evaluation of commonsense bias, we provide LVLMs with a counterfactual textual context T_cf and a counterfactual image I_cf as input. Also, we provide the models with two statements, s_cf and s_fact, which are true and false respectively under the given context. We wrap the context and statement with a prompt template pr_CB, which instructs the model to explicitly follow the information provided in the context, rather than common sense. The indicator for CB is as follows: P_CB = 1( LVLM(I_cf, pr_CB(T_cf, s_cf))=“True” LVLM(I_cf, pr_CB(T_cf, s_fact))=“False” P_CK=1) Note that P_CB=1 only if P_CK=1, according to the proposed evaluation pipeline (Figure <ref> (b)). 
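To make the pass criteria above concrete, the following sketch shows how the per-instance indicators P_CK, P_VP, and P_CB could be computed in code. It is a minimal illustration only: the `ask` wrapper, the dictionary field names, and the prompt-template callables are hypothetical stand-ins for the authors' actual evaluation harness.

```python
def ask(lvlm, image, prompt):
    """Hypothetical wrapper: run the LVLM on an (image, prompt) pair and map its
    free-form reply to the string "True" or "False" via string matching."""
    raise NotImplementedError  # depends on the particular LVLM's API

def pass_ck(lvlm, inst, pr_ck):
    # Commonsense Knowledge: both statements judged correctly on the factual image.
    return (ask(lvlm, inst["I_fact"], pr_ck(inst["s_fact"])) == "True"
            and ask(lvlm, inst["I_fact"], pr_ck(inst["s_cf"])) == "False")

def pass_vp(lvlm, inst, pr_vp):
    # Visual Perception: presence/absence statements judged correctly on the counterfactual image.
    return (ask(lvlm, inst["I_cf"], pr_vp(inst["s_exist"])) == "True"
            and ask(lvlm, inst["I_cf"], pr_vp(inst["s_nil"])) == "False")

def pass_cb(lvlm, inst, pr_cb, passed_ck):
    # Commonsense Bias: counterfactual context and image; only creditable if CK was passed.
    return (passed_ck
            and ask(lvlm, inst["I_cf"], pr_cb(inst["T_cf"], inst["s_cf"])) == "True"
            and ask(lvlm, inst["I_cf"], pr_cb(inst["T_cf"], inst["s_fact"])) == "False")
```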
§.§ Language Prior (LP) The evaluation of the language prior, which is the final and most crucial issue, is conducted through the Language Prior test (LP) involving a counterfactual image I_cf and two statements s_cf and s_fact. Basically, the LP is nearly identical to the CB in all aspects except for the absence of text context T_cf and a slight difference in prompt template pr_LP. The indicator for LP is as follows: P_LP = 1( LVLM(I_cf, pr_LP(s_cf))=“True” LVLM(I_cf, pr_LP(s_fact))=“False” P_CB=1 P_VP=1) § DATA GENERATION Here, we explain the data generation process of VLind-Bench. As described in the previous section, the benchmark consists of four types of tests, incorporating various forms of images and texts. First, at the core of the benchmark data, there are counterfactual textual context T_cf and image I_cf, accompanied by two statements s_cf and s_fact, for CB and LP. To evaluate CK and VP, there are also a factual image I_fact and two statements s_exist and s_nil regarding object recognition. To ensure the high quality of the data samples, we proceed with the following procedure. Counterfactual Textual Contexts and Statements First, we generate counterfactual textual context T_cf and corresponding statements s_cf and s_fact, which are true and false, respectively, based on the context. The contexts must describe a wide range of real-world topics and be suitable for visual depiction. To achieve this goal, we selected 11 concepts that span various aspects of commonsense knowledge, ranging from natural sciences such as and , to humanities such as and . For each selected concept, we employed GPT-4 () <cit.> to create 50 instance triples, each consisting of a context, a true statement, and a false statement. We provided a detailed instruction with 3-shot prompt as input, using hand-crafted concept-specific examples to reflect the characteristics of each concept. The examples are designed to be easy in terms of reasoning, to minimize the influence of the models' reasoning ability and focus solely on measuring language priors. To ensure the quality of the generated data, three graduate students manually checked the correctness of the triples. We then conducted a majority vote among the three annotations to determine whether each triple should remain in our benchmark. As a result, the initial set of 550 instance triples was reduced to 421. Counterfactual Images Next, we proceed with the generation of counterfactual image I_cf from the filtered textual contexts. Given the significance of LP in our benchmark, we generate multiple images per test for LP, unlike factual images where we generate only one image per test. We take the average performance on these images, enabling a more accurate evaluation. The images are generated using DALL-E 3 <cit.>, where the textual context T_cf is provided as input, and 12 images are sampled. To provide diversity of image style, we produce four images each in photorealistic, illustration, and cartoon styles per one textual context. Consequently, for the 421 contexts, a total of 5,052 images are generated. The generated images must provide sufficient context to accurately classify the statements as true or false and be free of artifacts. Similar to the previous stage, each image is verified by three graduate student reviewers and filtered using a majority vote. Contexts with no accepted images are also filtered at this stage. After this filtering process, 302 contexts and 2,274 images remained in the benchmark dataset. 
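The accept/reject filtering described above amounts to a majority vote over three independent reviewer decisions. A minimal sketch, with hypothetical item names, is given below.

```python
def majority_accept(votes):
    """Keep an item (instance triple or generated image) if a majority of
    reviewers accepted it, i.e., at least 2 of 3 here."""
    return sum(votes) > len(votes) / 2

# Illustrative reviewer decisions for two generated images.
reviews = {"img_001.png": [True, True, False],
           "img_002.png": [True, False, False]}
kept = [name for name, votes in reviews.items() if majority_accept(votes)]
print(kept)  # -> ['img_001.png']
```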
Commonsense Knowledge and Visual Perception Tests In the final stage of data generation, we produce factual images I_fact for CKs and statements s_exist and s_nil for VPs. For the factual image, since it needs to describe a circumstance where s_fact is true, we input s_fact directly into DALL-E 3 to generate the image. However, some s_fact's are very difficult to translate into images using this method. In such cases, we convert T_cf into factual textual context using GPT-4, or alternatively, we use existing images from the web. Statements for visual perception tests are simply sentences about the presence of objects and thus can be generated using a template. We first prompt GPT-4 to extract one key noun from T_cf and generate one arbitrary noun not present in T_cf. Then, we construct s_exist and s_nil using the template “There is [noun] in this image.”. To verify the quality of the generated I_fact, s_exist, and s_nil, we evaluate whether OpenAI GPT-4o <cit.>, which is the most advanced available LVLM, passes the CK and VP. For instances where GPT-4o fails, human verification was conducted. If the failure was due to an error in the data generation process, we addressed the cause of the error by either regenerating the factual image or manually correcting the nouns in statements. Details for human verification and input prompts are provided in Appendix A. Statistics The statistics of the benchmark data generated through the process are presented in Table <ref>. The difficulty of data generation varies for each concept, resulting in different proportions of samples being filtered out during the human review process. Ultimately, a total of 302 instance triples and 2,576 images, encompassing both counterfactual and factual images, were included in the benchmark. Data samples for each concept can be found in Appendix B. § EXPERIMENTS §.§ Metrics In section <ref>, all indicator values for the four tests have been defined for a single instance. For some test 𝒯∈{CK, VP, CB, LP}, the final VLind-Bench score S_𝒯 is represented as the average of the indicator values P_𝒯^i's across all instances that have passed previous tests. S_𝒯 = (1/M_𝒯) ∑_i=1^N P_𝒯^i Here, i is the data index, N is the number of total instances in our benchmark, and M_𝒯 is the number of instances that have passed all the previous tests before 𝒯 (which is essentially the number of instances considered by 𝒯). To be more concise, M_CK = M_VP = N, M_CB = |{i | P_CK^i=1}| and M_LP = |{i | P_CB^i=1 ∧ P_VP^i=1}|. We refer to these four scores as pipeline scores, as they reflect the pipelined evaluation structure of VLind-Bench (columns under “Pipeline Score” in Table <ref>). Alternatively, following the commonly accepted definition of accuracy, the performance can be expressed as the ratio of correct instances to the total number of instances (columns under “Accuracy” in Table <ref>). §.§ Models We have selected and evaluated recent proprietary and open-source LVLMs on VLind-Bench. The open-source LVLMs were chosen to represent a diverse range of scales and training methodologies. Unfortunately, the performance of the InstructBLIP models could not be evaluated using the prompt template from section <ref>, as they completely failed to generate responses. Therefore, we utilized a modified prompt, in which the question sentence was placed at the end. Additionally, we assessed the performance of some backbone LLMs on CK and CB tasks without the image input. 
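Before turning to the results, the metrics defined above can be made concrete with a short sketch. Assuming boolean arrays of per-instance indicators (illustrative names, not the authors' code), the pipeline scores S_𝒯 and the plain accuracies are computed as follows.

```python
import numpy as np

def vlind_scores(p_ck, p_vp, p_cb, p_lp):
    """Pipeline scores S_T and plain accuracies from per-instance indicators.
    Each input is a boolean array of length N (one entry per benchmark instance)."""
    p_ck, p_vp, p_cb, p_lp = (np.asarray(x, dtype=bool) for x in (p_ck, p_vp, p_cb, p_lp))
    eligible_cb = p_ck            # M_CB: instances that passed CK
    eligible_lp = p_cb & p_vp     # M_LP: instances that passed CB and VP
    pipeline = {
        "S_CK": p_ck.mean(),
        "S_VP": p_vp.mean(),
        "S_CB": p_cb[eligible_cb].mean() if eligible_cb.any() else 0.0,
        "S_LP": p_lp[eligible_lp].mean() if eligible_lp.any() else 0.0,
    }
    accuracy = {"CK": p_ck.mean(), "VP": p_vp.mean(),
                "CB": p_cb.mean(), "LP": p_lp.mean()}
    return pipeline, accuracy
```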
To ensure the reproducibility of the experiments, all inferences were conducted under a zero temperature setting. All the experiments are conducted using 4 NVIDIA RTX A6000 GPUs. §.§ Main Results The overall model performance is shown in Table <ref>. Surprisingly, numerous models demonstrated somewhat low scores in S_CK, implying a deficiency of commonsense knowledge in LVLMs. Conversely, S_VP scores concerning object recognition ability exhibited relatively high scores. This pattern of low commonsense knowledge scores and high visual perception scores aligns with observations from previous work <cit.>. Additionally, the lower S_CB and CB scores compared to S_CK indicate that LVLMs are reluctant to respond contrary to commonsense knowledge. When comparing LP and S_LP scores, it is evident that some models with similar LP scores exhibit differing S_LP scores. For instance, while the LLaVA 1.5 13B model and the InstructBLIP 7B model have similar LP scores, the LLaVA model achieves nearly three times higher S_LP score. This clear lack of correlation between LP and S_LP scores indicates that our pipelined evaluation provides additional information beyond what can be obtained by conducting task-level evaluation alone. Finally, the generally low S_LP score suggests that all models, except for GPT-4o, exhibit a reliance on language priors. This reliance was more pronounced in open-source models compared to proprietary ones. Furthermore, the reliance on language priors appeared inversely proportional to the scale of the backbone LLM. This trend can be observed by comparing the S_LP scores across various sizes of models within the same LLaVA and InstructBLIP series. RLHF-V An exception to such trend between model scale and language prior is the superior performance of models that applied the RLHF-V <cit.> methodologies. Models such as OmniLMM and MiniCPM trained using RLHF-V, demonstrated superior performance compared to models of similar or greater scale. Specifically, RLHF-V employs a method called Dense Direct Preference Optimization (DDPO) to mitigate multimodal hallucination. DDPO constructs win-lose pairs by having humans modify only the hallucinatory spans in the model responses to align with image information, thereby forcing the use of visual modality to increase the reward. Such construction of training data might be the reason for the reduced reliance on language prior. Additionally, the high performance of these methods on counterfactual images suggests that the ability to utilize image information generalizes to out-of-distribution samples. Applying RLAIF-V <cit.>, an AI-feedback variant of RLHF-V, to LLaVA 1.5 7B also results in significant performance improvement. LLM performance Some might question whether the performance of LVLM is significantly influenced by the performance of its backbone LLM. To answer this question, we conducted an evaluation of several backbone LLMs on CK and CB tasks. The results, as illustrated in columns S_CK and S_CB, indicate that the performance of the LLMs is not highly correlated to the performance of the LVLMs. Consequently, we can conclude that the absolute scale of the backbone LLMs and the training methodology have a more substantial impact on the final performance of LVLMs than the performance of the backbone LLMs themselves. Another finding is that the LVLMs are sometimes superior to their original backbone LLMs on S_CB. 
Given that S_CB encompasses the same content in both image and text formats, this suggests that, in certain scenarios, learning from the visual modality may enhance robustness in the text modality. Performance by Concept One particularly interesting finding is that the model performance varies significantly depending on the concept. For instance, high-performing open-source models such as OmniLMM scored zero in S_LP for the concept of “weight,” and even GPT-4o only managed to achieve a score of 61.0% (Table <ref>). This suggests that although LVLMs might possess real-world knowledge about physical properties like weight, they lack robust concepts of these properties that can be generalized under counterfactual situations. § DISCUSSION AND CONCLUSION In this work, we proposed VLind-Bench, a benchmark designed to precisely measure language priors in LVLMs. We evaluated several LVLMs using this benchmark and analyzed the results, finding that the reliance on language priors is inversely proportional to the model scale. Additionally, the RLHF-V technique turned out to significantly aid in reducing such reliance. As demonstrated with VLind-Bench, we endorse a pipelined evaluation paradigm for the general construction of benchmarks to disentangle the specific abilities intended for measurement. Language Priors and Model Scale The tendency for the reliance on language prior to be inversely proportional to the scale of backbone LLMs may appear counterintuitive (i.e., LLaVA in Table <ref>). We have not identified the precise cause of this trend. One possible explanation is that larger pre-trained models are less prone to overfitting to the dataset during the visual instruction tuning process, thereby better maintaining their ability to attend to image information. In the experiments, we employ models with various scales of image encoders (ranging from approximately 300M to 5B), however, no clear correlation was observed between the language prior and the size of the image encoder. Diagnosing LVLMs VLind-Bench can diagnose a model's capabilities in multiple aspects and components, providing clues on where to focus for comprehensive improvements. For instance, a low S_LP score suggests that enhancements should be in the vision-language training aspect, while a low S_CK score indicates that improvements should focus on the knowledge aspect of the backbone LLM. In the case of the former, utilizing the RLHF-V techniques can significantly reduce the model's reliance on language priors, as observed in Section <ref>. Limitations and Future Work Although VLind-Bench minimized potential confounding factors in assessing language priors, there may still be unconsidered factors. The text and image data are sampled from generative models, which may result in discrepancies from real-world distributions. When evaluating LVLMs, we prompted the models to respond exclusively with either “True” or “False.” However, some models could perform better by generating a rationale before responding <cit.>. This aspect was not explored in our study but may be considered in future research. Additionally, the CBs in our benchmark does not necessarily need to receive both text and image as input to check the commonsense bias. Such design choice is mostly due to a lack of established practices for feeding text-only inputs to LVLMs. 
As alternatives to I_cf, we conducted experiments using a plain single-color image or rendered text prompts as visual input (refer to Appendix C); however, none of these approaches works – these kinds of images can be considered out-of-distribution samples, and some proprietary models output error messages for these inputs. Exploring more established methods for text-only inputs in LVLMs falls outside the scope of our paper, but further research in this area is necessary both from a practical perspective and for a deeper understanding of how individual components of LVLMs operate. Finally, while our primary goal in Section <ref> was to generate data for a benchmarking purpose, we can also use this process to generate training data automatically. Training LVLMs with such dataset could help mitigate reliance on language priors, but we leave this as future work. § HUMAN VERIFICATION AND MODEL PROMPT DETAILS Criteria for Instance Triple Verification The reviewers are provided with the context, the true statement, and the false statement (which was defined as instance triple in the Section 4). For each instance triple, the reviewers are given two options: Accept and Reject. The appropriateness is verified based on the following criteria. * Decisions are made based solely on the text without considering image generation. * If a true (false) statement is not clearly true (false), it should be rejected. * If the context is not counterfactual, it should be rejected. * Even if a true (false) statement is indeed true (false), it should be rejected if it does not address the counterfactual aspect of the context. * If the truth values of statements cannot be inferred from the context, it should be rejected. * Annotators may use internet searches to determine the appropriateness of the context and statement. Criteria for Image Verification The reviewers are provided with the context, the true statement, the false statement, and the generated image. For each image, the reviewers are given two options: Accept and Reject. The appropriateness is verified based on the following criteria. * If a true (false) statement is not clearly true (false), it should be rejected. * Accept the image if it is sufficient to determine the truth values of the statements, even if the image does not precisely depict the context. * Reject if the generated image is of significantly poor quality. * Annotators may use internet searches to determine the appropriateness of the image. Each instance triple or image was reviewed by a total of three reviewers. Only those instance triples or images that were accepted by at least two reviewers were included in our benchmark. Prompt Template for Instance Triple Generation We used the following prompt template for instance triple generation. To facilitate understanding of the reader, the template is filled with examples of the concept “location,” with the filled-in sections indicated in italics. Given a concept, create related counterfactual situation (context) which can be described with an image. Also generate two statements with different truth values for each situation. Make only clear statements so that there is no room for vague or different truth value of the statement depending on the point of view. For example, through the concept of "location", we can create a counterfactual situation such as "A variety of marine life lives in the city built underwater." and describe it with an image of a underwater city. 
And then we can make two statements, "The city's buildings are surrounded by marine life." and "The city has human residents.", which is true and false under given counterfactual situation, respectively. List 50 context and statement pairs for the concept of "location." Output the results using the following json template. [{"id": 1, "context": "A ship is located in the middle of a large city.", "true_statement": "The ship is surrounded by buildings.", "false_statement": "The ship is in the ocean."}, {"id": 2, "context": "A glacier is found in a tropical jungle.", "true_statement": "The glacier coexists with tropical trees.", "false_statement": "The glacier is in the polar region."}, ...] Prompt Template for Generating Nouns for VPs As described in Section 4, we employed GPT-4 to extract one key noun from T_cf and generate one arbitrary noun not present in T_cf, to construct s_exist and s_nil. To ensure appropriateness, two instances of each noun were initially generated, after which a manual selection process was conducted to choose the better option between the two. We used the following prompt template for generating nouns for the VPs. Extract nouns from the following context. If there are more than two nouns, pick the two most important nouns. Also generate two random nouns that are not included in the context. Here are some examples. Context: Wombats burrow in the frozen tundra, their tunnels creating intricate networks under the snow. {"nouns": ["wombat", "tunnel"], "non-existent_nouns": ["zebra", "closet"]} Context: The jellybean is heavier than the digital piano. {"nouns": ["jellybean", "piano"], "non-existent_nouns": ["car", "oven"]} Context: Context § DATA SAMPLES § EXPERIMENTS USING A PLAIN WHITE IMAGE AND RENDERED TEXT PROMPTS As discussed in Section 6, we conducted experiments using a plain white image and rendered text prompts as visual inputs instead of I_fact and I_cf in CK and CB. When employing the plain white image, we replaced all images in the CK and CB inputs with a plain white image. In the case of using rendered text prompts, we substituted all CK and CB input images with images that had the content of the textual prompts rendered in black text on a white background. Table 4 presents the results of this experiment, showing a notable performance decline, particularly in the CK. This performance decline can be attributed to the absence of information that was present in the original images. Additionally, both plain white image and rendered text prompts can be considered out-of-distribution inputs (OOD), leading to unstable performance. § MODEL PERFORMANCE BY IMAGE STYLE Here, we observed how performance varies across different image styles. As mentioned in Section 4, we generated images in photorealistic, illustration, and cartoon styles. Table 5 shows that the performance across these styles in the CK, VP, and CB did not vary significantly. A notable variation in performance was observed only in LP, where the photorealistic style yielded better results compared to the other two styles. This could be due to the model's assessment that images in the illustration or cartoon styles lack realism compared to photorealistic images, leading it to generate responses that align more closely with common sense. 
§ DATA ACCESS AND LICENSE * VLind-Bench dataset URL: https://huggingface.co/datasets/klee972/VLind-Benchhttps://huggingface.co/datasets/klee972/VLind-Bench * Code for evaluation: https://github.com/klee972/VLind-Benchhttps://github.com/klee972/VLind-Bench * Metadata URL: https://huggingface.co/api/datasets/klee972/VLind-Bench/croissanthttps://huggingface.co/api/datasets/klee972/VLind-Bench/croissant * Dataset DOI: 10.57967/hf/2475 VLind-Bench is distributed under https://creativecommons.org/licenses/by-sa/4.0/CC BY-SA 4.0. We, the authors, bear all responsibility in case of violation of rights and confirmation of the data license.
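For reference, the released benchmark can in principle be loaded directly from the Hugging Face Hub with the `datasets` library, as sketched below; the available configurations, splits, and column names are not specified in the paper and should be checked against the dataset card and the evaluation code in the repository.

```python
from datasets import load_dataset

# Dataset id taken from the URL above; splits and features should be
# verified against the dataset card before building an evaluation loop.
ds = load_dataset("klee972/VLind-Bench")
print(ds)
```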
http://arxiv.org/abs/2406.08965v1
20240613095244
The Szász inequality for matrix polynomials and functional calculus
[ "Piotr Pikul", "Oskar Jakub Szymański", "Michał Wojtylak" ]
math.FA
[ "math.FA", "math.CV" ]
http://arxiv.org/abs/2406.08645v1
20240612210900
ODIN: Identifying Protoclusters and Cosmic Filaments Traced by Ly$α$-emitting Galaxies
[ "Vandana Ramakrishnan", "Kyoung-Soo Lee", "Maria Celeste Artale", "Eric Gawiser. Yujin Yang", "Changbom Park", "Robin Ciardullo", "Lucia Guaita", "Sang Hyeok Im", "Seongjae Kim", "Ankit Kumar", "Jaehyun Lee", "Seong-Kook Lee", "Byeongha Moon", "Nelson Padilla", "Alexandra Pope", "Roxana Popescu", "Hyunmi Song", "Paulina Troncoso", "Francisco Valdes", "Ann Zabludoff" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO" ]
Vandana Ramakrishnan (Department of Physics and Astronomy, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907, USA)
Kyoung-Soo Lee (Department of Physics and Astronomy, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907, USA)
Maria Celeste Artale (Departamento de Ciencias Fisicas, Universidad Andres Bello, Fernandez Concha 700, Las Condes, Santiago, Chile)
Eric Gawiser (Physics and Astronomy Department, Rutgers, The State University, Piscataway, NJ 08854)
Yujin Yang (Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea)
Changbom Park (Korea Institute for Advanced Study, 85 Hoegi-ro, Dongdaemun-gu, Seoul 02455, Republic of Korea)
Robin Ciardullo (Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA; Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA)
Lucia Guaita (Departamento de Ciencias Fisicas, Universidad Andres Bello, Fernandez Concha 700, Las Condes, Santiago, Chile)
Sang Hyeok Im (Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea)
Seongjae Kim (Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea)
Ankit Kumar (Departamento de Ciencias Fisicas, Universidad Andres Bello, Fernandez Concha 700, Las Condes, Santiago, Chile)
Jaehyun Lee (Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea; Korea Institute for Advanced Study, 85 Hoegi-ro, Dongdaemun-gu, Seoul 02455, Republic of Korea)
Seong-Kook Lee (SNU Astronomy Research Center, Department of Physics and Astronomy, Seoul National University, Seoul, Korea; Astronomy Program, Department of Physics and Astronomy, Seoul National University, Seoul, Korea)
Byeongha Moon (Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea)
Nelson Padilla (Instituto de Astronomía Teórica y Experimental (IATE), CONICET-UNC, Laprida 854, X500BGR, Córdoba, Argentina)
Alexandra Pope (Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA)
Roxana Popescu (Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA)
Hyunmi Song (Department of Astronomy and Space Science, Chungnam National University, 99 Daehak-ro, Yuseong-gu, Daejeon, 34134, Republic of Korea)
Paulina Troncoso (Escuela de Ingeniería, Universidad Central de Chile, Avenida Francisco de Aguirre 0405, 171-0614 La Serena, Coquimbo, Chile)
Francisco Valdes (NSF's National Optical-Infrared Astronomy Research Laboratory, 950 N. Cherry Avenue, Tucson, AZ 85719, USA)
Ann Zabludoff (Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA)
§ ABSTRACT To understand the formation and evolution of massive cosmic structures, it is essential to study them at high redshift, in the epoch when they formed the majority of their mass. 
The One-hundred-deg^2 DECam Imaging in Narrowbands (ODIN) survey is undertaking the widest-area narrowband program to date, to use Lyα-emitting galaxies (LAEs) to trace the large-scale structure (LSS) of the Universe at three cosmic epochs. In this work, we present results at z = 3.1 based on early ODIN data in the COSMOS field. We identify and characterize protoclusters and cosmic filaments using multiple methods and discuss their strengths and weaknesses. We then compare our observations against the IllustrisTNG suite of cosmological hydrodynamical simulations. The two are in excellent agreement, with a similar number and angular size of structures identified above a specified density threshold. We are able to recover the simulated protoclusters with log(M_z=0/M_⊙) ≳ 14.4 in ∼ 60% of the cases. With these objects we show that the descendant masses of the protoclusters in our sample can be estimated purely based on our 2D measurements, finding a median z = 0 mass of ∼10^14.5M_⊙. The lack of information on the radial extent of each protocluster introduces a ∼0.4 dex uncertainty in its descendant mass. Finally, we show that the recovery of the cosmic web in the vicinity of protoclusters is both efficient and accurate. The similarity of our observations and the simulations imply that our structure selection is likewise robust and efficient, demonstrating that LAEs are reliable tracers of the LSS. § INTRODUCTION According to the hierarchical theory of structure formation, matter is organized into a cosmic web, comprised of linear filaments intersecting at nodes of high density surrounded by vast voids <cit.>. This large-scale structure (LSS) determines how much cold gas is available to a galaxy and the likelihood of a merger or interaction with another galaxy, thereby acting as one of the fundamental drivers of galaxy evolution. Out to z ≈ 1.5, redshift surveys and other observational techniques have enabled the selection of samples of galaxies inhabiting clusters, groups, and filaments <cit.>. These studies show that galaxies in cluster or group environments tend to be older and more massive, and are more likely to have ceased star formation than those in the field <cit.>. Filaments may have a weaker but similar effect and may be responsible for pre-processing galaxies that are falling into cluster- or group environments <cit.>. At Cosmic Noon (z ≳ 2), when the global star formation rate reached its peak <cit.>, these environmental effects are predicted to be even more dramatic. In the high-density regions within the LSS, the accretion rates of infalling gas and the incidence of galaxy interactions are expected to be greatest, fostering both enhanced in-situ star formation and black hole activity. A popular hypothesis is that highly dissipative gas-rich mergers help the efficient feeding of gas into the central black hole and trigger an active galactic nuclei (AGNs), which may ultimately quench the star formation activity <cit.>. These expectations are indeed in line with the heightened SF and AGN activities found in a handful of protocluster systems <cit.> as well as the emergence of quenched galaxies in such environments <cit.>. Yet, the role that LSS environment plays in galaxy formation at Cosmic Noon remains under-explored. 
Our limited knowledge is due to a combination of factors, including the observational limitations of measuring precise redshifts of faint, high-redshift galaxies, the inherent scarcity of massive cosmic structures, and our incomplete grasp of the indicators for the locations of these structures. The lack of readily identifiable signatures—such as a hot intracluster medium and/or a concentration of quiescent galaxies—in young, yet-to-be-virialized structures of mostly star-forming galaxies leads to a strong reliance on spectroscopy for finding protoclusters. While the lack of large, statistical samples prevent us from disentangling the effects of cosmic variance from general properties of protocluster galaxies, studies from heterogeneously selected samples can lead to seemingly conflicting conclusions. Although cosmic filaments connected to these protoclusters likely play a vital role in replenishing fresh gas for sustained star formation, such medium-density features are even more difficult to discern than dense protocluster cores. The One-hundred-deg^2 DECam Imaging in Narrowbands <cit.> survey is designed to obtain large and uniformly selected samples of protoclusters and filaments at three cosmic epochs (z = 2.4, 3.1 and 4.5) using Lyα-emitting galaxies (LAEs) as tracers of underlying matter distribution. As the most common electron transition in the universe, Lyα emission traces ionized and/or excited gas from star formation, black hole activity, and the gravitational collapse of dark matter halos. A large fraction of low-luminosity star-forming galaxies <cit.> show Lyα emission <cit.>. These LAEs tend to have lower stellar masses, younger population ages, less internal extinction than systems selected via their broadband colors, and low galaxy bias <cit.>. These traits make LAEs a most efficient tracer of the underlying dark matter distribution, one which can be used to constrain cosmology <cit.> and most massive cosmic structures <cit.>. Upon completion, ≈600 protoclusters are expected to be discovered by ODIN, which will facilitate robust statistical investigations of the cosmic evolution of protoclusters and their galaxy inhabitants. In this paper, we use the early ODIN data in the COSMOS field to present our selection of protoclusters and cosmic filaments. Building on the results presented in <cit.>, we calibrate and fine-tune our LSS detection methods by carrying out careful comparisons with cosmological hydrodynamical simulations. The outline of this paper is as follows. In Section <ref>, we give details of the observational and simulation data. In Sections <ref> and <ref>, we describe how we construct LAE surface density maps and how we use them to detect protoclusters and filaments. We validate our procedures and interpret the results using the mock data created from the simulations in Section <ref>. Finally, the properties of our observationally selected structures are discussed in Section <ref> followed by a summary of our findings in Section <ref>. Throughout this paper, we assume a Planck cosmology <cit.>: Ω_Λ = 0.6911, Ω_ b = 0.0486, Ω_ m = 0.3089, H_0 = 100 h km s^-1 Mpc^-1 and h=0.6774. Distances are given in units of comoving Mpc (cMpc) unless noted otherwise. § OBSERVATIONAL AND SIMULATION DATA §.§ The ODIN survey The ODIN survey is conducting the widest-area deep narrowband imaging program to date, using three custom narrowband filters (N419, N501 and N673) to identify redshifted emission. 
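As an illustration of the adopted cosmology and of how the narrowband filters map onto the three Lyα epochs, the sketch below instantiates the quoted parameters with astropy and converts approximate filter central wavelengths, inferred here from the filter names rather than the exact ODIN bandpasses, into Lyα redshifts and comoving distances.

```python
from astropy.cosmology import FlatLambdaCDM

# Planck cosmology quoted in the text (Omega_m + Omega_Lambda = 1, i.e., flat).
cosmo = FlatLambdaCDM(H0=67.74, Om0=0.3089, Ob0=0.0486)

LYA_REST = 1215.67  # Lyman-alpha rest wavelength in Angstrom
# Approximate central wavelengths implied by the filter names (assumption,
# not the exact ODIN bandpass parameters).
for name, lam_c in [("N419", 4190.0), ("N501", 5010.0), ("N673", 6730.0)]:
    z = lam_c / LYA_REST - 1.0
    d_c = cosmo.comoving_distance(z)
    print(f"{name}: Lya at z ~ {z:.2f}, comoving distance ~ {d_c:.0f}")
```

Running this recovers redshifts near 2.4, 3.1, and 4.5, consistent with the three survey epochs quoted above.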
In this work, we make use of the Year 1 ODIN data taken with the N501 filter (λ_C/Δλ = 5014/75 Å; z̅/Δ z = 3.12/0.06) in the extended COSMOS field. For more details about the survey fields and observing strategy, we refer interested readers to <cit.>. The method for selecting ODIN LAEs is detailed in <cit.>. Briefly, we identify LAEs as N501-detected sources exhibiting a narrowband excess over the continuum, corresponding to a rest-frame equivalent width of 20 Å. The continuum magnitude is calculated as a weighted combination of the magnitude in two broadband filters (g and r for N501 LAEs). We exclude sources flagged for saturated pixels or other image defects and those close to bright stars. The continuum is estimated using broadband data from the Hyper Suprime-Cam Subaru Strategic Program <cit.> second data release <cit.>. Our resulting LAE sample comprises 6,056 sources over ≈7.5 deg^2. While dedicated spectroscopic follow-up programs with Keck, Gemini, and DESI are ongoing, more than 25% of our N501-selected LAEs have been targeted to date. Of the sources that yielded a redshift, ≈97% are confirmed as LAEs (Ramakrishnan et al. in prep.). §.§ Building Mock ODIN Observations with TNG To build a concrete framework in which we can interpret our observations, we use the IllustrisTNG300-1 simulation <cit.> and define our mock LAE samples. TNG300 provides the largest simulation volume (302.6 cMpc on a side) of all the IllustrisTNG simulations. Given the rarity of massive galaxy (proto)clusters, this is especially crucial for our work. All TNG simulations assume the Planck cosmology <cit.>. The surface area viewed along the X, Y, or Z direction is ≈90,000 cMpc^2, well matched to the angular extent of the ODIN data in the COSMOS field, ≈95,500 cMpc^2, and the simulation box is several times larger than the ODIN radial extent of ∼60 cMpc at z=3.1. TNG300 has a baryon mass resolution of 1.1 × 10^7 M_⊙ and a dark matter mass resolution of 5.9 × 10^7 M_⊙. In our analysis we employ galaxies with stellar mass greater than 10^7 M_⊙, corresponding to a halo mass of ≳ 10^9 M_⊙; as we do not employ any stellar or gas physics in our analysis, the resolution is sufficient for our purposes. In constructing mock LAE samples, our primary goal is to mimic the spatial distribution of the ODIN LAEs as closely as possible so that we can evaluate their utility as tracers of the LSS. Understanding the complex behavior of Lyα radiative transfer requires vastly higher resolution simulations and thus is outside the scope of this work. In what follows, we describe how we select mock LAEs from the TNG galaxies. First, we match the 75 Å (60 cMpc at z = 3.1) full-width-at-half-maximum of N501 by creating cosmic `slices' from the TNG z=3 snapshot with ≈80 cMpc in line-of-sight thickness. The filter and thus the window function are not a perfect top hat in shape. To emulate this effect, we assign the LAE selection probability to match the shape of the filter transmission function. All things being equal, the probability of being selected as a mock LAE is 1, 0.5, and 0 at distances of 0, 30, and 40 cMpc from the center of the slice, respectively. In practice, the probability of an LAE being detected at a given position along the redshift direction also depends on its line luminosity, as bright LAEs are more likely to be detected when they fall on the wings of the filter than fainter ones, and hence are detected over a larger volume. 
However, since we are interested only in the average number density of all LAEs, irrespective of line luminosity, this does not significantly affect our analysis. Second, we aim to reproduce the small- and large-scale clustering of LAEs. The galaxy bias for z∼ 3 LAEs is relatively low at b≲ 2 <cit.>, suggesting that LAEs have low stellar mass content and are hosted by low-mass halos <cit.>. To match the small-scale clustering, we must simultaneously match the LAE overdensity distribution across the field; the details of this measurement are presented in Section <ref>. Motivated by the findings of <cit.>, we model the stellar masses of the mock LAEs as a lognormal distribution: i.e., log M_∗/M_⊙ is a Gaussian function, with mean and standard deviation (μ, σ). Lowering μ values would shift the host halos to lower masses (thus lower bias). A larger scatter σ (while fixing μ) would permit both higher- and lower-mass halos to host LAEs, thereby changing how LAEs trace these halos. We find that μ = 8.75 and σ=0.75 yield the best fit to our surface density measurements. The left panel of Figure <ref> shows the resultant LAE overdensity map relative to the data. Our best-fit parameters are fully consistent with the stellar mass distribution of LAEs based on SED fitting reported by <cit.>, log(M_*/M_⊙) = 8.97^+0.60_-0.71. They are also similar to the result of <cit.>, who find log(M_*/M_⊙) = 8.45^+0.72_-0.67 for LAEs at z = 2.1. Finally, we match the measured sky density of LAEs. The N501 LAE surface density is 0.22 arcmin^-2. In comparison, the number of TNG galaxies selected based on the above criteria is typically ten times greater. Additionally, we assume 10% of our LAEs may be contaminants. This number is based on the fraction of our spectroscopic targets in the two faintest bins (N501=24.5–25.0 and 25.0–25.5 AB) that yielded a redshift, which is 94.3% and 85.9%, respectively (Ramakrishnan et al. in prep.). By doing so, we are assuming that all sources that did not result in a redshift are interlopers. Our spectroscopic success rate (i.e., the fraction of DESI-confirmed sources that yield a redshift for which it is within the expected range) is 97%, so our estimate is conservative. Changing it to 5% does not change our results. From the selected TNG galaxies, we randomly choose a subset whose number is equal to 90% of the LAE density; assuming that LAE sample contaminants are unclustered, the remaining 10% is drawn as a purely random distribution within the same region. To compare directly with our observations, we utilize the progenitors of 30 of the most massive galaxy clusters in the final simulation box of TNG300, ranked based on their total stellar mass. The stellar mass, halo mass and Group ID of these clusters at z = 0 are found in Andrews et al. (2024). We identify the protoclusters by tracing the main progenitor branch of the merger trees of the z = 0 clusters to their z = 3 predecessors. The descendant masses of the protoclusters ranges from 1.5 × 10^15M_⊙ for the most massive system to 2 × 10^14M_⊙ for the least massive. We generate cosmic slices, aligning the midpoint of each with that of the protocluster using the periodic boundary conditions of the simulation. Slicing the TNG volume along the X, Y and Z axes produces 90 distinct slices for comparison. In a given slice, typically a few other protoclusters from the top 30 most massive sample are included although, depending on their positions in the redshift direction, they may be only partially represented. 
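As an illustration, the mock-LAE selection just described can be sketched in a few lines of Python. The function and variable names, the toy inputs, and the linear ramp between 20 and 40 cMpc (one possible interpolation of the three selection probabilities quoted above) are ours and not part of the ODIN pipeline; positions are assumed to be in cMpc with the third coordinate along the slice axis.

import numpy as np

rng = np.random.default_rng(42)

def window_probability(dz):
    """Selection probability vs. line-of-sight offset from the slice centre (cMpc).
    Assumed linear ramp: 1 for |dz| <= 20, 0.5 at 30, 0 at >= 40 cMpc."""
    dz = np.abs(dz)
    return np.clip((40.0 - dz) / 20.0, 0.0, 1.0)

def lognormal_mass_weight(log_mstar, mu=8.75, sigma=0.75):
    """Relative probability that a galaxy hosts an LAE, Gaussian in log10(M*/Msun)."""
    return np.exp(-0.5 * ((log_mstar - mu) / sigma) ** 2)

def build_mock_laes(xyz, log_mstar, z_centre, n_target, box=302.6, f_contam=0.10):
    """Draw a mock LAE sample: weighted draw of real galaxies plus unclustered contaminants."""
    p = window_probability(xyz[:, 2] - z_centre) * lognormal_mass_weight(log_mstar)
    p /= p.sum()
    n_real = int(round((1.0 - f_contam) * n_target))
    idx = rng.choice(len(xyz), size=n_real, replace=False, p=p)
    laes = xyz[idx, :2]                                   # keep projected (x, y) positions
    fakes = rng.uniform(0.0, box, size=(n_target - n_real, 2))   # 10% random interlopers
    return np.vstack([laes, fakes])

# toy usage with fabricated galaxy positions and stellar masses
xyz = rng.uniform(0.0, 302.6, size=(200000, 3))
log_mstar = rng.normal(9.0, 0.8, size=200000)
mock = build_mock_laes(xyz, log_mstar, z_centre=151.3, n_target=6000)
print(mock.shape)

In practice the weighting would be applied to the actual TNG300 subhalo catalogue and the draw repeated for each of the 90 slices; the sketch only shows how the window function, the lognormal mass weighting, and the contaminant fraction combine.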
§ TRACING LARGE-SCALE STRUCTURE This section aims to delineate the large-scale structure in our observations by constructing the LAE surface density maps. We employ two methods: first, smoothing over the LAE positions with a fixed Gaussian kernel; second, constructing the Voronoi diagram of the LAEs. These two methods are summarized below but are discussed in more detail in <cit.>. §.§ Gaussian smoothing The simplest approach to measure the surface density across a field involves smoothing over the positions of the galaxies within it using a fixed-size kernel. We employ a two-dimensional Gaussian kernel with FWHM 10 cMpc. This FWHM is decided following the methodology of <cit.>, also utilized in <cit.>. The kernel size is chosen such that the resultant surface density map maximizes the total probability at the positions of the LAEs in the real data. This is achieved through a leave-one-out cross-validation, where the likelihood of finding a point at the location r_j of the jth data point is estimated as: p(r_j) = ∑_i≠j 1/(√(2π) σ) exp[-(r_i - r_j)^2/(2σ^2)] The optimum σ value is the one which maximizes ∏_j p(r_j). Before creating the LAE surface density map, we fill in the holes left by removing the sources near bright stars and image defects with uniformly distributed random points with surface density matched to that of the LAEs. After convolving with the kernel, the overdensity map is computed by dividing the Σ_LAE map by the mean surface density, Σ̅_LAE: (1 + δ_LAE) = Σ_LAE/Σ̅_LAE The mean and standard deviation are determined by fitting the LAE surface density distribution with a Gaussian function, exp[-(Σ_LAE - Σ̅_LAE)^2/(2σ^2)]. The fit is restricted to within ± 1.5σ (after iterative sigma-clipping) to ensure that our estimate is not biased by the presence of multiple high LAE overdensities, which show up at the high end of the distribution. The left panel of Figure <ref> shows the resultant map, which we will refer to as the GS map, hereafter. It shows overdense regions that are both strongly clustered and highly irregular in shape, consistent with expectations from the hierarchical theory of structure formation, in which smaller structures continuously merge to form larger ones, a phenomenon supported by hydrodynamical simulations <cit.>. The three most prominent overdensity complexes, highlighted by red boxes and labeled as complexes A, B, and C, were discussed in detail in <cit.>. Smoothing with a fixed kernel is most effective at identifying structures with a size and shape similar to that of the kernel itself. Thus, the Gaussian smoothing method may not adequately capture non-isotropic features, potentially resulting in an underestimation of the significance of many observed structures. To address this, we explore tessellation-based methods in Section <ref>. §.§ Voronoi tessellation Tessellation-based density estimates offer the advantage of being scale-independent and do not assume any specific shape or size for the underlying structures. The two algorithms most commonly used in the literature are the Voronoi tessellation <cit.> and the Delaunay tessellation <cit.>. <cit.> find, through analysis of a simulated dataset, that the Delaunay tessellation fares more poorly at estimating the `true' surface density value at a given point as compared to the Voronoi tessellation. Additionally, the method tends to overestimate the surface density in overdense regions. Considering these results, we opt for the Voronoi tessellation algorithm as outlined below. 
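Before turning to the tessellation, we note that the leave-one-out bandwidth selection used for the GS map above can be sketched in a few lines of Python (a minimal sketch with our own function names and toy inputs; the kernel is written here with a 2D normalisation, and FWHM = 2.355σ relates the fitted width to the quoted 10 cMpc).

import numpy as np

def loo_log_likelihood(xy, sigma):
    """Leave-one-out log-likelihood of a Gaussian kernel of width sigma.
    xy : (N, 2) array of LAE positions (cMpc)."""
    d2 = np.sum((xy[:, None, :] - xy[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                                   # exclude the i = j term
    k = np.exp(-0.5 * d2 / sigma**2) / (2.0 * np.pi * sigma**2)    # 2D Gaussian kernel
    return np.sum(np.log(k.sum(axis=0)))                           # sum of log p(r_j)

def best_bandwidth(xy, sigmas):
    """Return the kernel width that maximises the product of p(r_j) over all points."""
    scores = [loo_log_likelihood(xy, s) for s in sigmas]
    return sigmas[int(np.argmax(scores))]

# toy usage with fabricated positions
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 100.0, size=(500, 2))
print(best_bandwidth(xy, np.linspace(1.0, 10.0, 19)))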
In a two-dimensional case, the Voronoi tessellation (VT) divides a plane into distinct cells based on the positions of a set of generating points, e.g., the sky locations of LAEs. The Voronoi cell of each generating point includes all regions in the plane that are closer to it than to any other generating point. Consequently, the area of each cell is a measure of surface density: cells associated with LAEs in overdense regions will be small due to the proximity of numerous other LAEs, whereas those in underdense regions will be larger. Since each cell contains a single LAE, its LAE surface density Σ_i is given by Σ_i = 1/A_i, where A_i is the area of the ith cell. As before, the masked regions are filled in before constructing the Voronoi diagram. In Figure <ref>, we show the VT map together with the GS map. In both panels, white contours delineate 3σ overdensities, with σ denoting the field density fluctuations computed from the distribution. While both maps detect the most significant structures, the VT method detects them at higher significance. As expected, the method also identifies a greater number of overdensities by capturing irregular or anisotropic features. § FEATURES OF THE LSS IN OBSERVATIONS: PROTOCLUSTERS AND FILAMENTS Galaxy protoclusters represent some of the most striking features of large-scale structure at high redshift and are expected to be observed as significant galaxy overdensities spanning ≈10 cMpc in scale <cit.>. According to the hierarchical theory of structure formation, massive halos hosting protoclusters are connected to filaments of the cosmic web <cit.> along which pristine gas is being accreted to feed star formation. Indeed, these predictions align qualitatively with the features observed in our map. Multiple overdensities with angular scales of 5–10 cMpc are evident in Figure <ref> (right), frequently clustered together to form complexes comprising 2 to 5 adjacent overdensities. Additionally, regions of high overdensity, where δ_LAE > 3, are interconnected by more moderate `bridges' with δ_LAE = 2–3. Several features show a distinctly linear morphology, reminiscent of cosmic filaments <cit.>. Motivated by these observations, our next objective is to pinpoint the locations of protoclusters and filaments of the cosmic web using LAEs as tracers. By directly comparing our observations with TNG300 predictions, we will also test how robust our protocluster candidates are and measure their key properties. §.§ Selecting protoclusters from density maps Protoclusters are generally defined as regions that will evolve into a virialized structure with masses ≳ 10^14 M_⊙ by z = 0 <cit.>. At distant look-back times, protoclusters are loosely bound regions comprised of multiple dark matter halos. This poses a challenge in measuring their physical extent. In this study, we define a protocluster as a region exhibiting significant size and overdensity, unlikely to occur by chance alone, and containing sufficient mass that could plausibly collapse to form a single halo with mass greater than or equal to 10^14 M_⊙ by z = 0. According to this operational definition, selecting protoclusters entails identifying contiguous regions above a pre-set overdensity threshold and minimum area. This procedure mirrors source detection in pixelated astrophysical images. Therefore, adopting approaches similar to those outlined by <cit.>, we utilize source detection software on our density maps for protocluster selection. 
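As a concrete illustration of the cell-area density estimate Σ_i = 1/A_i introduced above, a minimal Python sketch is given below. It uses scipy's Voronoi and ConvexHull routines (in 2D, ConvexHull.volume is the polygon area); the inputs are fabricated, unbounded edge cells are simply flagged, and the masked-region infilling and Gaussian fit to the field mean described above are omitted.

import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_surface_density(xy):
    """Sigma_i = 1/A_i from the Voronoi cell of each generating point.
    Unbounded (boundary) cells are returned as NaN."""
    vor = Voronoi(xy)
    sigma = np.full(len(xy), np.nan)
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if -1 in region or len(region) < 3:       # unbounded or degenerate cell
            continue
        sigma[i] = 1.0 / ConvexHull(vor.vertices[region]).volume
    return sigma

# toy usage
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(1000, 2))
sigma = voronoi_surface_density(xy)
overdensity = sigma / np.nanmean(sigma)           # crude (1 + delta_LAE), mean instead of a Gaussian fit
print(np.nanmedian(overdensity))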
We begin by pixelating the surface density maps by interpolating them over a two-dimensional grid of positions with pixel size 36. Our grid size, corresponding to 115 ckpc at z = 3.1, is small enough to clearly delineate the boundaries of the structures, as it is two orders of magnitude smaller than the typical size of a protocluster. For source detection, we use SEP <cit.>, a Python implementation of the SExtractor <cit.> software. The number of detected structures depends strongly on the detection threshold (DETECT_THRESH) and minimum area (DETECT_MINAREA). Our goal is to optimize these parameters to maximize the identification of robust candidates while minimizing the inclusion of spurious objects resulting from chance alignments of LAEs. To assess contamination, we randomly select a subset of all continuum and line-emitting sources detected within the N501 image, matching the number of LAEs in the field. We then generate GS and VT maps for these random points following the same procedure as for the LAE surface density maps. Since these points are distributed across a wide range of redshift, any overdensity observed in these `random maps' is unlikely to be genuine but rather the result of chance alignments. Hence, the contamination fraction for a specific set of detection parameters can be approximated by dividing the number of sources detected in the random map by the number detected in the LAE surface density map. This evaluation is depicted in the top and bottom right panels of Figure <ref> for the VT and GS map, respectively, with the contamination fraction averaged over 30 iterations. For the GS-selected protoclusters, the contamination fraction is low for all sets of detection parameters used. Our fiducial setup, DETECT_THRESH of 3σ and DETECT_MINAREA of 4600 pixels (≃ 60 cMpc^2), yields a contamination fraction of < 0.1% where σ is the (sigma-clipped) standard deviation of δ_LAE. Compared to the GS map, the pixel-to-pixel density fluctuations are much greater in the VT map, introducing much higher noise in structure detection. We mitigate this effect by smoothing the map with a 5 cMpc FWHM Gaussian kernel, large enough to reduce the noise but small enough to avoid adjacent structures blending into one considering that a typical diameter of a protocluster at z∼3 is 10 cMpc. We choose DETECT_THRESH of 4.5σ and DETECT_MINAREA of 3000 pixels (≃ 40 cMpc^2), yielding the contamination rate of 20%. Figure <ref> (top and middle rows) shows the protoclusters identified from the GS and VT-based maps within the highlighted Complexes A, B, and C. The GS map predominantly captures the largest structures, with smaller and more irregular formations remaining undetected; nonetheless, the identified structures exhibit high robustness. In contrast, the VT map reveals numerous structures overlooked by the GS map but is also subject to higher noise levels and is more susceptible to chance alignments of LAEs. The detailed comparison of different detection methods is presented in Section <ref>. §.§ Selecting protoclusters with HDBSCAN Density-based clustering techniques, as a subset of unsupervised machine learning algorithms <cit.>, discern clusters by identifying regions of high point density in data space, set apart by areas of low density. These methods can be applied to pinpoint protoclusters from the clustering of the LAEs. Here, we employ the Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) algorithm <cit.>. 
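For reference, the map-based detection step of the previous subsection can be sketched with the Python sep package as follows. The array, the pixel scale, and the numerical values are illustrative only, and thresh is applied as detect_thresh × err, in the SExtractor convention; the random-map contamination test is not reproduced here.

import numpy as np
import sep

def detect_structures(delta_map, sigma_field, pixel_area_cmpc2,
                      detect_thresh=3.0, min_area_cmpc2=60.0):
    """Run SEP source extraction on a pixelated LAE overdensity map.
    delta_map        : 2D array of delta_LAE values
    sigma_field      : sigma-clipped std of the field fluctuations
    pixel_area_cmpc2 : area of one pixel in cMpc^2"""
    data = np.ascontiguousarray(delta_map, dtype=np.float32)
    minarea = int(round(min_area_cmpc2 / pixel_area_cmpc2))
    return sep.extract(data, detect_thresh, err=sigma_field, minarea=minarea)

# toy usage with a fabricated map
rng = np.random.default_rng(3)
delta_map = rng.normal(0.0, 0.3, size=(512, 512))
delta_map[200:300, 300:400] += 2.0                     # a fake overdensity
objs = detect_structures(delta_map, sigma_field=0.3, pixel_area_cmpc2=0.013)
print(len(objs), objs['npix'] * 0.013)                 # number of detections and areas in cMpc^2

The returned structured array carries positions, pixel counts, and shape parameters, from which the projected areas and centroids of the candidate structures follow directly.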
HDBSCAN offers the advantage of detecting clusters with varying densities, regardless of their shape, provided they exceed a specified size threshold. Briefly, this algorithm measures density at each location, finding clusters as peaks in the density distribution separated by troughs. Density estimation is based on the distance to the Kth nearest neighbor. In HDBSCAN, each point is classified into clusters or noise based on two parameters: the Kth nearest neighbor distance and the minimum cluster size. The latter dictates the minimum number of data points constituting a cluster. In the Python implementation of the HDBSCAN algorithm <cit.>, these parameters are denoted as min_samples and min_cluster_size, respectively. In our setup, we opt for values of 15 and 10, respectively (i.e., the 15th nearest neighbour distance and a minimum cluster size of 10 LAEs; see Appendix <ref>). Additionally, we require that each cluster exhibit a median surface density above a specified threshold. This surface density is calculated as the number of LAEs divided by the enclosed area, computed as the sum of the areas of all Voronoi cells encompassing the cluster members. We set the threshold at 0.23 arcmin^-2 (i.e., ≳ 1σ above the field mean), which yields a contamination rate of ≈20%, similar to that for the VT sample. The HDBSCAN-detected protoclusters are illustrated in the bottom row of Figure <ref>. Symbols of the same color denote membership within the same structure, while gray circles represent LAEs outside of protoclusters. They are also shown in Figure <ref>. §.§ Cosmic filaments with DisPerSE The cosmic web at low redshift (z ≲ 1) has been investigated in detail through multiple surveys <cit.>. These studies have yielded valuable insight into the complex interplay of filaments and the clusters at the nodes of the cosmic web. Notably, <cit.> found evidence that galaxies accreted onto clusters along filaments show signs of pre-processing. Moreover, several studies have observed that galaxies within filaments show higher quenched fractions than those within the field <cit.>. At higher redshift, the role of the cosmic web in galaxy formation remains largely observationally unconstrained. Motivated by the discernible presence of filamentary structures in the LAE surface density map, we aim to detect cosmic web filaments in our observations in this section. We explore the reliability of our filament detection in Section <ref>. To identify cosmic filaments, we use the Discrete Persistent Structure Extractor <cit.>, a widely utilized tool in both observational <cit.> and simulation studies <cit.>. DisPerSE utilizes the Delaunay Tessellation Field Estimator <cit.> to create a discrete representation of the density field based on a given set of points. It then locates critical points within this density field approximation and defines filaments as arcs connecting saddle points to maxima. The filaments are constructed by creating short segments tangent to the gradient of the density field at each point. To account for data noise, DisPerSE employs the concept of persistence, which measures the density contrast between critical points defining a filament. Persistence represents the range of density thresholds over which a filament connecting two critical points remains significant relative to the noise. This significance is expressed in units of σ, where an Nσ filament corresponds to a probability under a 1D Gaussian distribution. 
We stress that this method of measuring the persistence level does not imply the use of a Gaussian distribution within DisPerSE but rather is purely an expression of the likelihood of the filaments arising from noise. For example, a persistence level of 3σ (2σ) means that the extracted filament has a 99.7 (95.5)% probability of being a true feature. After filling in the voids left by star masks, we run DisPerSE with a persistence of 2σ. The resultant filament network is illustrated in the bottom right panel of Figure <ref> as red curves superimposed on the VT density map in greyscale. The figure reveals a complex web extending over the entire field. Several previously noted linear overdensities (e.g., within complexes A and C) are part of this network. Moreover, the filaments appear to converge at the locations of the extended complexes, interconnecting the individual protoclusters within. Indeed, every protocluster is connected to at least one filament, with several positioned at the convergence of multiple filaments. Our findings are fully consistent with the hierarchical picture wherein protoclusters occupy the nodes of the cosmic web. Given that the line-of-sight window function set by the N501 filter is substantially larger than the dimensions of individual protoclusters and cosmic filaments, our 2D-based detection algorithm is expected to include false positives arising from random noise fluctuations. In Section <ref>, we show that while false detection - in particular, of filaments - does indeed occur, all filaments around overdense protoclusters are robustly identified. Protocluster detection is even more secure. § COSMIC STRUCTURES IN SIMULATIONS: BUILDING EXPECTATIONS WITH TNG Cosmological hydrodynamical simulations, such as TNG300, offer invaluable insights not readily accessible through direct observations. By facilitating connections between observable traits of protoclusters and cosmic filaments with more fundamental attributes and allowing us to track their evolution across cosmic time, simulations provide a crucial context in which we understand our data. In this section, we compare the ODIN protocluster and filament samples with those derived from the TNG300 simulation to gain an understanding of the physical properties of the structures detected in Section <ref>. §.§ Comparison of observations and simulations In the left panel of Figure <ref>, we compare the LAE surface density distribution of the real data with the TNG slices. As detailed in Section <ref>, the latter are constructed from TNG300 and are designed to match the ODIN filter transmission as well as the LAE surface density and clustering bias. Multiple realizations are averaged over to result in the surface density distribution shown for the simulation. The high-end tail of the LAE surface density distribution, which represents the field's highest density regions corresponding to protoclusters and filaments, is nearly perfectly reproduced. In Figure <ref>, we show the VT map of both data and one z=3 TNG slice side by side. The displayed slice is chosen at random and is centered on a massive protocluster with descendant mass 5.5 × 10^14 M_⊙ (Group ID=13 at z = 0: cyan cross). It also contains five additional structures (yellow crosses) that will evolve into a cluster with mass greater than 2 × 10^14 by z=0, as well as several objects that will evolve into less massive clusters (green crosses). Indeed the majority of the largest overdensities seen within the slice are associated with protoclusters. 
The two maps are remarkably similar in that both display structures of similar sizes and irregular morphologies, arranged into extended complexes. We highlight three such similar regions (Complexes A1, B1, and C1 in the real data, and A2, B2, and C2 in the simulations, shown by dashed-line boxes). These mock structures are also connected by cosmic web filaments. The distributions of the total transverse area and median LAE surface density of protoclusters detected in real data and simulation span a similar range, demonstrating good agreement given the constraints of modeling. This is illustrated in Figure <ref>. The median number of protoclusters (averaged over 90 TNG slices) is 37 compared to 33 detected in our data. They correspond to the protocluster surface density of (4.1 ± 0.6) × 10^-4 and (3.3 ± 0.6) × 10^-4 cMpc^-2, respectively. The uncertainties represent the shot noise only and thus should be considered as a lower limit. The excellent agreement between our data and TNG simulations suggests that the massive cosmic structures identified with current and future ODIN data are robust. §.§ Calibration of descendant mass estimate Estimations of the descendant mass allow us to establish direct connections between high-redshift protoclusters and present-day galaxy clusters, thereby tracing the evolution of massive cosmic structures and their galaxy populations across cosmic time. Following the methodology outlined by <cit.>, we compute the descendant mass of ODIN protoclusters by assuming that the mass contained within the overdensity region will collapse into a cluster-sized halo by z = 0, with a total mass M_z=0 given by M_z=0 = ρ_mV_PC = ρ_0,z(1+δ_m)V_PC where ρ_m and δ_m are density and overdensity of matter, respectively. V_PC is the protocluster volume, and ρ_0,z is the mean density of the Universe at redshift z. If the galaxy bias, b_g, is known, Equation <ref> becomes: M_z=0 = (1+δ_g/b_g) ρ_0,zV_PC where δ_g is galaxy overdensity. We assume b_g = 1.8 <cit.> and fix δ_g to the median LAE overdensity within the protocluster region. Since we do not have any redshift information for the majority of our LAEs, we assume that the size of a protocluster in the line-of-sight direction is similar to that in the transverse direction and approximate the volume of a protocluster from its area (A_PC) as V_PC = A_PC^1.5: i.e., the shape of a protocluster is approximated as a cube. M_z=0 is then estimated as, M_z=0 = (1+δ_g/b_g)ρ_0,z A_PC^1.5 The relation V_PC = A_PC^1.5 is a good estimate for an isotropic structure. As we have already seen, many of our protoclusters are strikingly non-spherical in shape, and thus the above relation may be a poor approximation. If a protocluster is extended in the transverse dimension, it would be observed as a more modest overdensity with a larger angular extent. Conversely, a structure stretched along the line-of-sight direction is expected to be more compact with a larger overdensity. Using TNG-detected structures, we quantify the detection rate and the uncertainty in mass estimates due to the lack of information in the third dimension. Figure <ref> shows that there is a good agreement between TNG-selected and observational protoclusters for both angular sizes (middle) and median LAE overdensity (right). We also assess the likelihood of the most 30 massive cluster progenitors to be selected as a protocluster candidate when viewed along the X-, Y-, and Z directions. 
To this end, we only consider the structures for which: i) the measured (angular) position center lies within 10 cMpc of the true center; and ii) the line-of-sight position center is within 25 cMpc of the center of a given TNG slice. For reference, the FWHM of all slices is 60 cMpc and a typical size of a protocluster is ≈10 cMpc. Based on this definition, the median recovery rate of the protoclusters is 60%. The progenitor of the most massive cluster is always detected while the progenitors of the next 4 most massive clusters are detected > 90% of the time. As expected, the likelihood of finding smaller structures depends on the sightlines being favorable or adverse to robust detection. If we lower the threshold for our VT-based protocluster selection from 4.5σ to 3.5σ, the recovery rate of the protoclusters increases to 80%. The higher success rate comes at the price of a much greater contamination of ∼45%, compared to ≈20% for our fiducial setup. In the left panel of Figure <ref>, we compare the estimated and true descendant mass, M_est,z=0 and M_200,z=0 for the recovered clusters in the 90 slices. The figure illustrates that M_est,z=0 tends to underestimate the true mass about 80% of the time. The median M_est,z=0/M_true,z=0 ratio is 0.35, as indicated by the solid black line. This implies that the cosmic volume that ends up in a galaxy cluster by z=0 is much greater than what we identify in the data as significant galaxy overdensities likely associated with a protocluster. Our findings are consistent with the results from <cit.>, who, based on the semianalytical models implemented in the Millennium simulations, found that about 40% of the total mass at z=0, M_ today, is enclosed within a 6 (8) cMpc-radius sphere for Virgo- and Coma-sized protoclusters with M_ today=(3-10) × 10^14 M_⊙ and > 10^15 M_⊙, respectively. In comparison, the median effective radius of our simulated protoclusters (computed as (A_PC/π)^0.5) is ∼ 5 cMpc. To account for this effect, we correct the estimated masses by a factor of 0.35. Figure <ref> shows that the 2D-based mass estimates yield a considerable scatter of 0.49 dex. The most extreme outliers arise from cases where two or more protoclusters are merged together during detection, particularly for the protoclusters with M_200,z=0≤ 10^14.5 M_⊙; these points are highlighted with a black outline in Figure <ref>. Even excluding these cases, the scatter remains high at ∼ 0.44 dex. We ascertain that the scatter is a result of the loss of information in the z direction as follows. We perform the 3D Voronoi tessellation of the 3D volume centered on the 30 massive clusters, this time, using all the galaxies with M_ star > 10^7 M_⊙. Using the subhalo merger trees of the z=0 cluster members, we also identify the `member galaxies' in the z=3 snapshot. The LAE overdensity and the protocluster region are computed similarly to the 2D case but Voronoi cells are now 3D polyhedra instead of 2D polygons, while galaxy overdensity is measured as the median LAE overdensity of the Voronoi cells containing member galaxies. V_PC is obtained by summing over these Voronoi cells. The resultant mass estimate is shown in the right panel of Figure <ref>. As expected, the scatter of the 3D z=0 mass estimates is considerably smaller, at ∼ 0.15 dex, than in the 2D case, once we have eliminated all uncertainties regarding the cluster membership and the information on the third dimension. 
The estimated descendant mass is also significantly less underestimated than in the 2D case, being ∼ 85% of the true value. It is noteworthy that the intrinsic scatter of 0.15 dex is comparable to 0.2 dex estimated by <cit.> in converting galaxy overdensity measured within a (15 cMpc)^3 cubic window into descendant mass. When we compare the `true' volume found with the 3D Voronoi tessellation to that derived from the projected protocluster area, we find that the scatter of the latter (≈0.52 dex) is sufficient to account for the entirety of the uncertainty in the mass estimate. This suggests that the uncertainty in M_z=0 is dominated by the volume estimation and that the isotropic assumption is less than ideal for protoclusters at high redshift. This result is not surprising, but it does quantitatively demonstrate the need for large-scale spectroscopic follow-up of protocluster candidates. However, we also emphasize that the 2D mass estimation is correct in an average sense, and statistical analyses for a large sample of protocluster candidates which make use of this mass estimate will be robust. §.§ The fidelity of filament recovery At high redshift, active star formation and galaxy growth taking place in protoclusters are likely supported by cool gas transported along filaments of the cosmic web <cit.>. The wide area coverage of ODIN presents a unique opportunity to study protoclusters in concert with the surrounding cosmic web, provided that we can accurately recover the filaments. As they are even more extended structures than protoclusters, the effect of projection along the line-of-sight on filament recovery must be considered separately for filaments. In this section, we use the TNG300 slices to examine how well the observed 2D filament network corresponds to the true underlying 3D network. We focus specifically on filaments in the vicinity of protoclusters, to understand how likely it is for a filament that appears near a protocluster in projection to be physically connected to the protocluster. We make use of the 90 slices of the TNG300 full box at z = 3 described in Section <ref>. For each slice, we use DisPerSE to extract filaments using the positions of the mock LAEs matched to ODIN, projected along the slice axis. This is henceforth referred to as the `2D skeleton' or `2D filaments'. In order to create a comparison set of filaments which represents the `true' cosmic web, we also run DisPerSE on the 3D positions of the slice galaxies with 10^7 M_⊙ < M_* < 10^12 M_⊙, henceforth referred to as the `3D skeleton' or `3D filaments'. We use a persistence threshold of 2σ to extract the 2D skeleton, identical to that used on the observed LAEs. For the 3D skeleton, we use multiple persistence thresholds of 5, 6, and 7σ in order to compare the 2D skeleton against 3D features of varying significance. In Figure <ref>, we show the dark matter distribution in one of the TNG300 slices overlaid with the 2D and 3D skeletons. The similarity between the DisPerSE filaments (both 2D and 3D) and the underlying dark matter distribution is clear, as is the fact that the 2D and 3D skeletons mirror each other closely. However, the 2D skeleton does not perfectly reproduce the true 3D cosmic web. To quantify the extent to which the cosmic web can be recovered by our observational data, we compare the 2D filaments to the 3D skeleton. 
We do this following a similar procedure to <cit.> <cit.>, by matching the individual segments out of which DisPerSE constructs the 2D and 3D skeletons (see Section <ref>) to each other. We measure the (projected) separation between the midpoints of each segment of the 2D skeleton and those of the 3D skeleton. We denote the minimum distance from a segment of the 2D filament network to one of the 3D network as d_2D→3D, and the inverse as d_3D→2D. This mapping is illustrated in the left panel of Figure <ref>. The former measurement enables us to determine which of the 2D filaments represent true features of the 3D cosmic web, while the latter allows us to determine which features of the underlying filament network are successfully recovered by observations. In the top right panel of Figure <ref>, we show histograms of d_2D→3D for the three persistence thresholds. Hydrodynamical simulations have found that the typical radius of a filament at z ∼ 3 is 2–3 pMpc <cit.>. The median values of d_2D→3D are well below this range, being 2.0, 2.7, and 4 cMpc respectively for the 3D skeletons found with persistence 5, 6, and 7σ. The d_2D→3D values are less than 5 cMpc (1.25 pMpc) for the majority of the 2D segments. We refer to the segments with d_2D→3D < 5 cMpc as `matched 2D segments', meaning that they have a `match' in the 3D skeleton. The fraction of matched segments decreases with increasing persistence threshold, from ∼ 80% for a persistence threshold of 5σ to ∼ 60% when the persistence threshold is 7σ. This is reasonable; as the persistence threshold is raised, the extracted network is increasingly restricted to only the most significant filaments and finer features are lost, leading to a greater number of 2D segments going unmatched. Irrespective of the adopted persistence value, 55% of the 3D segments have 2D matches within d_3D→2D < 5 cMpc. We conclude that the 2D skeleton serves as a good representation of the underlying cosmic web even though it faithfully recovers the individual features ≈55% of the time. Next, we examine whether the fraction of matched 2D segments depends on the persistence threshold used to extract the 2D skeleton. If the matched fraction increases substantially with increasing persistence threshold, it would indicate that our chosen persistence threshold is too generous. Reassuringly, we find that the matched fraction only rises marginally with the persistence threshold, increasing by ≲ 5% from a persistence threshold of 2σ to 5σ. This is visualized in Figure <ref>, where we show the 2D skeletons extracted with persistence thresholds of 2, 3, and 5σ in comparison to the dark matter distribution and the 3D skeletons. The majority of the 2D filaments extracted with a lower persistence threshold, which do not appear with a higher one, can be visually matched to the true features of the 3D skeleton. Intriguingly, the fraction of matched 2D segments increases with increasing surface density for all cases, as shown in the top panel of Figure <ref>, suggesting that the higher the observed LAE surface density is, the more likely the detected filament is to be an accurate representation of the 3D cosmic web. We also examine the converse, i.e. the probability that a filament within an overdense region is recovered by observations. On considering the 3D skeleton within 10 cMpc of a massive protocluster, we find that the fraction of matched 3D segments is considerably higher than in average regions, increasing from a median of ∼ 55% to a median of ∼ 86% in all three persistence cases. 
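The segment-matching step described above amounts to a nearest-neighbour search between segment midpoints, for which a minimal sketch is given below. It assumes the midpoints have already been extracted from the DisPerSE output, projects the 3D skeleton along the slice axis, and uses our own names and toy inputs; the 5 cMpc matching radius follows the text.

import numpy as np
from scipy.spatial import cKDTree

def match_skeletons(seg2d_mid, seg3d_mid, match_radius=5.0):
    """Return d_2D→3D, d_3D→2D and the two matched fractions.
    seg2d_mid : (N, 2) midpoints of 2D segments (x, y in cMpc)
    seg3d_mid : (M, 3) midpoints of 3D segments, projected onto (x, y)"""
    proj3d = seg3d_mid[:, :2]
    d_2d_to_3d, _ = cKDTree(proj3d).query(seg2d_mid)
    d_3d_to_2d, _ = cKDTree(seg2d_mid).query(proj3d)
    f2d = np.mean(d_2d_to_3d < match_radius)      # 2D segments with a 3D counterpart
    f3d = np.mean(d_3d_to_2d < match_radius)      # 3D segments recovered in projection
    return d_2d_to_3d, d_3d_to_2d, f2d, f3d

# toy usage: half of the 3D segments have a noisy projected counterpart
rng = np.random.default_rng(7)
seg3d = rng.uniform(0.0, 300.0, size=(4000, 3))
seg2d = seg3d[::2, :2] + rng.normal(0.0, 1.0, size=(2000, 2))
print(match_skeletons(seg2d, seg3d)[2:])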
The matched fraction is even higher if we restrict the region under consideration to within 5 cMpc of the protoclusters, being 100% in the majority of slices. In the bottom panel of Figure <ref>, we show d_3D→2D as a function of the (2D) LAE surface density in the slices. To avoid confusion, we only show the result for the 5σ threshold skeleton, as the remaining two persistence thresholds give nearly identical results. With increasing LAE surface density, d_3D→2D decreases rapidly; i.e., in dense protocluster regions, not only is the recovery rate of the 3D skeleton considerably higher but also the 2D skeleton resembles the 3D skeleton more closely. This is illustrated in Figure <ref> where we show the 2D and 3D skeletons for the same slice as in Figure <ref> but zoom in on the massive protocluster. These results show that the ODIN data will be capable of studying protoclusters within the context of the surrounding cosmic web. § DISCUSSION In Section <ref>, we based our protocluster selection on the GS and VT maps and the HDBSCAN algorithm. These methods resulted in 9, 33, and 47 structures, respectively. We now wish to evaluate the similarities and differences between these methods. The false positive rate in the GS method is very low for all sets of detection parameters and is ∼ 0 for our fiducial criteria. The 9 GS-based protoclusters may thus be considered the most secure candidates. All GS structures are present in the VT sample with similar morphologies. In all cases, the separation between the centers determined by the GS and VT method is within 2.5 cMpc. For all but one of the structures present in both the GS and VT samples, the measured area is slightly greater (by a factor of ∼ 1.1 - 1.8) in the latter. This is because irregular overdensities are generally detected at higher significance in the VT map. All structures in the GS sample are also present in the HDBSCAN sample. One is split into two structures by HDBSCAN as shown in the middle column of Figure <ref>. For the remaining 8, the structures have similar morphologies in both the GS and HDBSCAN analyses; however, the measured area in HDBSCAN is larger than that estimated from the GS method by a factor of ∼ 2 - 5. More interesting is the comparison between the structures detected from the VT map and using HDBSCAN. Figure <ref> shows the locations of the VT (upper right) and HDBSCAN samples (lower left). Of the 33 protocluster candidates in the VT sample, only two are not detected by HDBSCAN. As for the remaining 31, one is recovered as two separate objects and two are merged into a single structure. For all the protoclusters detected by both the VT and HDBSCAN methods, the latter yields larger measured areas than the former, typically by more than a factor of 2. Why are the VT-based structures smaller and fewer in number compared to the HDBSCAN-selected ones? Visual inspection of Figure <ref> suggests that many HDBSCAN-selected structures with no VT counterpart are more elongated and filamentary. In particular, the filamentary arm previously highlighted in Complex A is identified by HDBSCAN (highlighted in red in the figure) but not from the VT map. If we relax the overdensity threshold for the VT detection from 4.5σ to 3.5σ, the number of HDBSCAN-selected protoclusters that overlap one or more VT-based ones increases from 30 to all 47. The areas of the two sets of protoclusters become more comparable. However, the downside of lowering the detection threshold for the VT map is an increase in the contamination rate. 
We conclude that HDBSCAN fares better in recovering lower surface density features with a more elongated morphology, but with the downside that it is more difficult to ascribe a physical meaning to the detection parameters for this method (Kth nearest neighbour and minimum cluster size) than for those for the GS and VT methods (minimum area and density threshold). We now estimate the descendant masses of our protocluster candidates. We showed in Section <ref> that for our choice of protocluster detection parameters, Equation <ref> underestimates the descendant masses of the structures selected from the VT map by a factor of ∼ 3. A similar analysis for the GS and HDBSCAN methods (see Appendix <ref>) shows that the masses of the former are underestimated by a similar factor of ∼ 3, and the latter by a factor of ∼ 1.15. In Figure <ref>, we show histograms of the descendant masses after making the appropriate corrections. The median masses are log(M_z=0/M_⊙) = 14.35, 14.75, and 14.52 for the VT, HDBSCAN, and GS protoclusters, respectively, suggesting that our protocluster candidates will evolve into moderately massive `Virgo-type' clusters <cit.>. § CONCLUSIONS The ODIN survey is the largest-area deep field narrowband survey undertaken to date. By enabling the selection of LAEs, which are low-mass, star-forming galaxies and are well-localized in redshift space over a wide contiguous area, ODIN makes it possible to comprehensively trace the large-scale structure at high redshift. In this paper, we have used the early ODIN data taken in the COSMOS field with the N501 filter to compile a sample of protoclusters and cosmic filaments. We create LAE surface density maps using two methods - by smoothing over the LAE positions with a fixed-size Gaussian kernel (GS map, Section <ref>) and by constructing the Voronoi diagram of the LAEs (VT map, Section <ref>). We select protocluster candidates from the surface density maps (Section <ref>) and by applying the density-based clustering algorithm HDBSCAN to the LAE positions (Section <ref>). We also select filaments of the cosmic web (Section <ref>). We assess the reliability of our structure detection by comparing our observations against the results obtained with a carefully created mock sample of LAEs from the IllustrisTNG300-1 hydrodynamical simulation (Section <ref>). Our main results and conclusions are as follows: 1. The large-scale structure revealed by our surface density maps is distinctly clumpy and irregular, with overdense regions clustered together in extended complexes. The VT map identifies filaments as regions of moderate LAE overdensity (δ_LAE = 2–3) connecting the regions of highest density (δ_LAE > 3). These features are in line with the expectations of hierarchical structure formation. 2. The three methods of identifying protoclusters which we explore all have their own strengths and weaknesses. The VT map identifies a greater number of objects but also suffers from a higher contamination rate than the GS map. The GS map recovers only the most significant overdensities but is almost free of interlopers. HDBSCAN recovers the maximum number of structures; however, its input parameters are less straightforward to interpret or assign physical meaning to than those of the surface density maps. 3. The surface density maps of the z = 3 TNG300 simulation box display remarkably similar features to those seen in the observations. The number and size of the overdensities selected with our fiducial detection parameters are likewise similar in the two. 
As shown in Figure <ref>, many of the prominent overdensities in the TNG300 surface density maps correspond to the progenitors of z = 0 clusters. 4. The simulation shows that we can successfully recover ∼ 60% of the protoclusters with descendant mass ≳ 2 × 10^14 M_⊙. We find that the descendant mass of the simulated protoclusters can be estimated purely based on their area and LAE overdensity measured in 2D. The lack of information in the redshift direction introduces a scatter of ∼ 0.4 dex on the measurement. 5. By comparing the filaments identified in 2D and 3D in TNG300, we show that despite projection effects, the cosmic web recovered in 2D is a close representation of the true LSS in regions with high LAE surface density. In the vicinity of protoclusters, the 3D filament network is almost perfectly recovered. 6. On estimating the descendant masses of our observed protocluster samples, we find that they span the range log(M_z=0/M_⊙) ∼ 14.0 - 15.0, with the median of the GS, VT and HDBSCAN samples being 14.35, 14.52 and 14.75 respectively. The majority of our protoclusters are thus likely to evolve into intermediate-mass `Virgo-type' clusters. Our results establish the robustness of our protocluster and filament samples and demonstrate that LAEs are reliable and efficient tracers of large-scale structures at high redshift. Upon completion, the ODIN survey will allow us to select secure samples of massive structures that are nearly ten times larger than those presented here and span three cosmic epochs. ODIN will thus be well placed to grant insight into the formation and evolution of cosmic structures near Cosmic Noon. The authors acknowledge financial support from the National Science Foundation under Grant Nos. AST-2206705 and AST-2206222 and from the Ross-Lynn Purdue Research Foundation Grant. This material is based upon work supported by the National Science Foundation Graduate Research Fel- lowship Program under Grant No. DGE-2233066 to NF. The Institute for Gravitation and the Cosmos is supported by the Eberly College of Science and the Office of the Senior Vice President for Research at the Pennsylvania State University. J.L. is supported by the National Research Foundation of Korea (NRF-2021R1C1C2011626). SL is supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT; Nos. 2020R1I1A1A01060310, 2020R1A2C3011091, 2021M3F7A1084525). Based on observations at Cerro Tololo Inter-American Observatory, NSF’s NOIRLab (Prop. ID 2020B-0201; PI: K.-S. Lee), which is managed by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation. Blanco SExtractor <cit.>, SEP <cit.>, hdbscan <cit.>, Astropy <cit.> § PARAMETER SELECTION FOR HDBSCAN As stated in Section <ref>, HDBSCAN is a density-based clustering algorithm which separates points into `clusters' and `noise'. HDBSCAN performs this classification by estimating the density at each data point from its Kth nearest neighbor distance, referred to as the `core distance'. Points in dense(sparse) regions will have relatively small(large) core distances, and the core distance of a data point is thus inversely proportional to its density. HDBSCAN is an extension of the DBSCAN algorithm. DBSCAN selects clusters by imposing a user-defined density threshold. This is implemented by placing a ceiling on the core distance/Kth nearest neighbour distance, that is, by requiring that there be K points within a distance ϵ or less of a given point. 
DBSCAN begins with a point which passes the requirement of having a core distance less than or equal to ϵ, and classifies it and its neighbours within ϵ as a single cluster. If any of these neighbours also pass the density criterion, their neighbours within ϵ are assigned to the same cluster as well. This continues until there are no more points within the cluster which have a core distance less than ϵ, at which point the cluster is closed. DBSCAN then selects an unassigned point with K or more neighbours within ϵ and repeats the process. This continues until no points which pass the density threshold remain, at which juncture all remaining data points are classified as noise. The approach of DBSCAN, of applying a constant density/core distance threshold, is restrictive. The optimal value of ϵ is difficult to identify, and can vary from region to region within the data. HDBSCAN circumvents the difficulty of selecting a single density threshold by making use of hierarchical clustering; that is, it creates a hierarchy of clusters by applying successively higher density thresholds, and then selects the optimal set of clusters from this hierarchy. This makes it ideal for selecting clusters of varying density from a single dataset, as in our case. As it constructs a cluster hierarchy, HDBSCAN does not require the user to supply the parameter ϵ. Instead, the parameters which govern the cluster selection are K, as before (which determines how density is measured) and an additional parameter, the minimum cluster size. HDBSCAN uses the minimum cluster size to condense the cluster hierarchy - any cluster in the hierarchy which has fewer points than this size is discarded. This is shown in Figure <ref> for a sample dataset. From this condensed hierarchy HDBSCAN selects the final set of clusters. The hdbscan library provides two ways of selecting the final set of clusters. The first method, the `excess-of-mass' method, selects the most stable set of clusters, defined as those which persist for the greatest range of density thresholds. The second method, the `leaf' method, selects the smallest surviving clusters in the hierarchy. These two methods are illustrated in Figure <ref> for the same sample dataset. Based on the above description, there are two main parameters which govern the clustering based on HDBSCAN, namely K and the minimum cluster size. By determining which nearest-neighbour distance will be used for estimating the density, K controls how finely the density distribution is sampled, in essence acting as a kind of smoothing scale. Changing the minimum cluster size affects the condensed cluster hierarchy by changing the number of points required to be considered a cluster, that is, more points are discarded as noise and the cluster selection is more conservative. Setting either of these parameters to too low a value will result in many noise peaks being erroneously selected as clusters, whereas setting them to be too large may result in structures being blended together. Our key consideration in choosing the parameters is ensuring that nearby structures are separated to the extent possible, which we enforce as described below. We run HDBSCAN on the positions of the LAEs with min_samples and min_cluster_size both in the range 10 – 30. We observe that the excess-of-mass method tends to blend structures together irrespective of the specific parameter values used, thus we choose to use the leaf method in our cluster selection. 
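In code, the leaf-based selection just described reduces to a single call to the hdbscan library; a minimal sketch with our fiducial parameters (toy inputs, our own function names) is shown below. The additional median-surface-density cut applied in the main text, which uses the Voronoi cell areas of the cluster members, is discussed immediately after and is omitted from the sketch.

import numpy as np
import hdbscan

def find_protocluster_candidates(xy, min_cluster_size=10, min_samples=15):
    """Cluster LAE sky positions with HDBSCAN using leaf cluster selection.
    Returns the label of each LAE (-1 = noise) and the list of cluster labels."""
    clusterer = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size,
                                min_samples=min_samples,
                                cluster_selection_method='leaf')
    labels = clusterer.fit_predict(xy)
    return labels, np.unique(labels[labels >= 0])

# toy usage: a few dense clumps on a uniform background
rng = np.random.default_rng(11)
background = rng.uniform(0.0, 300.0, size=(3000, 2))
clumps = np.vstack([rng.normal(c, 3.0, size=(40, 2)) for c in [(50, 60), (120, 200), (250, 90)]])
xy = np.vstack([background, clumps])
labels, clusters = find_protocluster_candidates(xy)
print(len(clusters), np.bincount(labels[labels >= 0]))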
As discussed in the main text, we additionally require that the clusters be above a surface density threshold determined such that the contamination fraction is ≲ 0.2. We estimate the mean area of the structures selected for each combination of parameters, as shown in Figure <ref>. With increasing values of min_samples and min_cluster_size, the mean area of the detected structures increases, indicating that smaller structures are indeed being blended together. We select our final parameters to be min_cluster_size = 10 and min_samples = 15, which as shown in Figure <ref> yields a protocluster sample with a mean area of ∼ 200 cMpc^2. This choice is based on the physical motivation that the scale of a protocluster is expected to be ∼ 10 – 15 cMpc <cit.>, corresponding to a projected area (for an isotropic structure) of ∼ 100 – 225 cMpc^2. We make use of a value of min_samples on the larger end of the allowed range because while over-smoothing the density distribution is undesirable, it is equally undesirable for the distribution to be too finely sampled as it becomes very noisy. § CALIBRATION OF THE MASS ESTIMATION FOR GS MAP AND HDBSCAN In Section <ref>, we evaluated the reliability of the estimated descendant masses for individual protocluster candidates selected from the VT map. We found that the descendant masses were on average underestimated by a factor of ∼ 3, which we attributed to the underestimation of the protocluster volume by our detection algorithm. Here we similarly examine the accuracy of the descendant mass estimates for the protocluster candidates selected from the GS map and with HDBSCAN. We once again use the 90 slices constructed in Section <ref>. Analogously to the middle and right panels of Figure <ref>, we compare the properties of the structures selected with HDBSCAN (selected from the GS map) from these slices to those of the observed protoclusters in the top (bottom) panel of Figure <ref>. The structures remain similar in projected area and median LAE overdensity between observations and simulations, reaffirming the fact that our observational protocluster candidates are reasonable. As in Section <ref>, for each slice we consider those of the 30 massive cluster progenitors lying within the slice which are successfully recovered. The recovery rate of HDBSCAN is ∼ 70%, slightly higher than that of the VT map. This is in accordance with the fact that the HDBSCAN candidates are the most numerous. By contrast, the recovery rate of the GS map is very low, ∼ 20%, in line with our observation that only the most prominent structures are recovered. We compare the estimated and true descendant masses for recovered clusters for both methods in Figure <ref>. We find that the descendant masses of the protoclusters are underestimated by a factor of ∼ 2 with the GS method, and by a factor of ∼ 1.15 with HDBSCAN. In both cases the scatter is similar to that observed with the VT map, ∼ 0.48 dex.
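For completeness, the 2D descendant-mass estimate of the main text and the calibration statistics used in this appendix can be sketched as follows. The sketch uses astropy's Planck15 cosmology as a stand-in for the adopted Planck parameters, the comoving mean matter density (appropriate because V_PC is comoving), the galaxy bias b_g = 1.8, and the VT-map correction factor of 0.35 from the main text; the function names and the single toy call are ours.

import numpy as np
from astropy import units as u
from astropy.cosmology import Planck15

RHO_M = (Planck15.critical_density0 * Planck15.Om0).to(u.Msun / u.Mpc**3).value  # mean comoving matter density

def descendant_mass(area_cmpc2, delta_g, bias=1.8, correction=0.35):
    """2D estimate of the z = 0 descendant mass (Msun):
    M = (1 + delta_g / b_g) * rho_m * A**1.5, divided by the simulation-derived factor."""
    volume = np.asarray(area_cmpc2) ** 1.5            # cube approximation V_PC = A_PC^{3/2}
    mass = (1.0 + np.asarray(delta_g) / bias) * RHO_M * volume
    return mass / correction

def calibration(m_est, m_true):
    """Median ratio and scatter (dex) between estimated and true descendant masses."""
    logratio = np.log10(np.asarray(m_est) / np.asarray(m_true))
    return 10 ** np.median(logratio), np.std(logratio)

# toy usage: a protocluster with A = 100 cMpc^2 and median delta_g = 2
print(np.log10(descendant_mass(area_cmpc2=100.0, delta_g=2.0)))

For the GS and HDBSCAN samples the same machinery applies with the correction factors of ∼ 2 and ∼ 1.15 quoted above in place of 0.35.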
http://arxiv.org/abs/2406.08468v1
20240612175407
Observing formation and evolution of dislocation cells during plastic deformation
[ "Albert Zelenika", "Adam André William Cretton", "Felix Frankus", "Sina Borgi", "Flemming B. Grumsen", "Can Yildirim", "Carsten Detlefs", "Grethe Winther", "Henning Friis Poulsen" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
§ MAIN Metals and alloys are typically polycrystalline aggregates. When deformed plastically, dislocations are introduced into the lattice of each grain <cit.>. For materials with medium to high stacking fault energies, the primary mechanism for facilitating the shape change is dislocation slip. The interplay between the plastic flow and minimisation of elastic energy implies that the dislocations organize into boundaries. Moreover, the micro-structure often evolves to comprise two types of boundaries, on a coarser scale Geometrically Necessary Boundaries, GNBs, reflecting systematic variations in the plastic flow, and on a finer scale Incidental Dislocation Boundaries, IDBs, thought to represent statistically trapped dislocations <cit.>. The IDBs separate nearly dislocation-free regions called cells. With increasing deformation, the flow stress increases, and the entire hierarchical structure shrinks in length scale. Empirically, structural properties such as cell size and mis-orientation distributions have been shown to exhibit scaling with the applied field for plastic strains larger than 5%-10% <cit.>, when separating IDBs from GNBs. However, despite extensive studies - motivated by the socioeconomic impact of metals - we are still unable to predict the type of micro-structure that forms as a function of material and processing from first principles. While correlations have been established with the active slip systems <cit.>, the mechanisms underlying the cell formation and cell sub-division are not known. This prohibits realizing the vision of “materials science in the computer” in this field. At the root of this predicament are two issues. Firstly, the complexity and computational effort in handling the large sets of dislocations involved have been prohibitive. Discrete <cit.> and Continuum <cit.> Dislocation Dynamics models (DDD and CDD) can at best simulate representative volumes up to about 0.5 % and 1% strain, respectively. For that reason, until recently even basic patterning has been elusive <cit.>. Secondly, it is challenging to visualise the micro-structural evolution experimentally in a representative way, as the multi-length-scale problem requires a combination of contrast to cells and local dislocation content over a large representative volume, in situ and within the bulk of a sample. Traditionally, dislocation structures are mapped by electron microscopy (EM) <cit.>. EM provides very detailed maps, but is inherently limited in terms of representative volume by the use of thin foils, micro-pillars or free surfaces. Hence, the multiscale dynamics may not represent bulk conditions. For bulk studies x-ray diffraction based imaging has emerged as a powerful tool. 
Multi-grain X-ray imaging modalities such as 3DXRD and DCT can provide comprehensive information on the grain level, well suited for interfacing with crystal plasticity models <cit.>. X-ray scanning nano-beam methods on the other hand have been demonstrated to visualise a few dislocations <cit.>. For quantitative measurements of dislocation densities line profile analysis of x-ray diffraction patterns prevails<cit.>, but results are averages over grains or over the illuminated volume. To address this multiscale experimental problem we have established Dark Field X-ray Microscopy <cit.>, DFXM. With modalities similar to dark field TEM <cit.>, this method enables large field-of-view visualisations of both dislocations <cit.> and cells<cit.> within the bulk. We here apply the technique to a comprehensive study of the structural evolution within Aluminum during in situ tensile deformation from an applied strain of ϵ = 0 to ϵ = 0.046. The inspected volume comprises ∼ 40,000 dislocation cells, providing excellent statistics in this initial patterning range, where the structure is little known and the overlap with the strain regime available in simulations is most prominent. The sample of choice is a single crystal of pure Aluminum, with [111] parallel to the tensile axis, see Fig. <ref> a). The mechanical device is mounted in the DFXM microscope such that diffraction in the vicinity of the [111] reflection is imaged for the illuminated layer. See Fig. <ref> for more details of the loading device. By tilting goniometer angles ϕ and χ, see Fig. <ref> a), contrast is provided to orientation and strain components. Three complementary modalities are illustrated in Fig. <ref> b)-e). Weak beam contrast provides statistics over dilute dislocations ensembles for ϵ≤ 0.005, orientation contrast provides statistics over cell properties, once these have formed, while the peak broadening Δ q is a proxy for the total dislocation density, ρ: Δ q ∼√(ρ). By repeating the mapping for a set of z_l layers, a 3D map can be created. For definitions, algorithms and specifications, see sections <ref> and <ref>. Single crystals of the present orientation form parallel planar GNBs in addition to cells <cit.>. The GNBs are clearly manifest in both EM and DFXM when inspecting planes which include the TA (see Fig. <ref> and Zelenika et al. <cit.>). In this work we report on the cell evolution primarily within a plane perpendicular to the TA. Here the GNBs are known to be less visible <cit.>. This was confirmed by the absence of preferred directions or orientation correlations between cells in the present data, see Figs. <ref> and <ref>. This indicates that a model that does not explicitly take the crystallography of the deformation process (slip planes) into account may be adequate. For ϵ < 0.01, the sample exhibits a set of isolated dislocations and dislocation entanglements, as evidenced by Fig.  <ref> b). In Section <ref> these clusters are quantified in three complementary ways, which sample the elastic field associated with the dislocations differently. Consistently the three approaches reveal that the clusters are randomly positioned and distributed homogeneously within the Field-Of-View (FOV). Moreover their size distributions are consistent with log-normal distributions. With increasing applied strain, the orientation spread and the total dislocation content - both averaged over the entire volume inspected - grow approximately linear with the strain, see Fig. <ref>. 
Moreover, domains of approximately uniform intrinsic orientation distribution appear, see Supplementary Video 1. We define cells as domains where the Kernel Average Misorientation, KAM, of all boundary voxels is above a threshold, θ_KAM. By means of morphological operations, the boundaries are made 1 pixel thin, and as a consequence a tessellation of the sample is obtained, see Fig. <ref>. Anticipating scaling we set θ_KAM to be a linear function of ϵ. The resulting KAM filter superposed on the orientation map is shown as a function of ϵ in Supplementary Video 4. Snapshots are provided in Fig. <ref>. Inspection reveals that the tessellations are indeed similar, justifying the linear ansatz. This analysis also allows us to address a long-standing question: when are the cells formed? As shown in Fig. <ref> b) the sample undergoes a transformation from no cells to site-saturation of cells in the range ϵ = 0.008 to ϵ = 0.023. From Fig. <ref> c) it appears that this screening of the surroundings takes place as soon as the cells are fully formed, at ϵ = 0.024. Moreover, it is shown that the results are consistent with TEM results for ϵ > 0.05. In a complementary approach we study the order in the structure on the micrometer length scale by deriving the autocorrelation function of the orientation maps. These are presented as a function of ϵ in Supplementary Video 3. The absence of any side peaks clarifies that there is no long range ordering of cells. Next, in Fig. <ref> a) for ϵ = 0.046, we compare the autocorrelation function with a hard sphere model of the cells with all parameters defined by the experimentally determined size distribution, see also Suppl. Information. The excellent match with the experimental data indicates that the cells “do not see each other”. We interpret this as evidence of the local and random nature of the cell formation process. The cell map at ϵ = 0.046 comprises ≈ 40,000 cells within the 9 layers analysed - one of them shown in Fig. <ref> d). This large ensemble allows us to distinguish with certainty between functional forms for various distributions, thereby strongly constraining models of structural evolution. The cell size distribution is found to be consistent with a log-normal distribution, see Fig. <ref> a), while statistical tests reject other commonly used functions, cf. Fig. <ref>. Likewise, the distribution of misorientation angles between neighboring cells is well described by a χ function, Fig. <ref> b). The corresponding peak broadening distribution, shown in Fig. <ref> c), is a proxy for the local total strain, assuming that the dislocation density approximately reflects the plastic strain. The distribution is bimodal, which we interpret as an elastic strain and a plastic strain component. The former is to a good approximation normal; within the data uncertainty the latter may be normal or log-normal. As expected, the plastic strain component is predominantly present in the rather broad cell walls, cf. Fig. <ref> e). Distributions of aspect ratio are provided in Supplementary Information. Figure: Structural order. a) Autocorrelation of the orientation map for ϵ = 0.046 (blue points) along y_l with a superposed model based on assuming the cells to be hard spheres (red line). All parameters in the model are provided by the experimentally determined cell size distribution. b) Area fraction of cells and average cell size as a function of ϵ. c) Comparison of the properties measured by DFXM (symbols) with literature values based on TEM <cit.> (lines). 
For the average cell size the literature values (D_C) are compared to the results of the cell analysis (D_KAM) and of the autocorrelation analysis (D_AC). For the average misorientation the literature values (Θ_C) are compared to the cell analysis values (Θ_KAM). Next, with the statistical tools presented, we describe the micro-structural evolution. Within the applied strain range from ϵ = 0.013 to 0.046, where the material goes from predominantly formation (increasing cell sizes) to predominantly fragmentation (decreasing cell sizes), the distributions evolve in a consistent way, as summarised in Fig. <ref> d) to f). Specifically, within statistical error the size distribution is log-normal throughout and exhibits scaling (σ is constant). Likewise, the misorientation angle distribution grows linearly with strain and exhibits scaling (within experimental error k is constant). Moreover, the bimodal model for Δ q is valid throughout. The area fractions of cell interior and wall regions remain approximately constant, as does the elastic distribution for the cell interior. In contrast, the wall distribution widens in a linear fashion with ϵ - ϵ_0, with ϵ_0 = 0.012 being the onset of cell formation, cf. Fig. <ref> f). The existence of a log-normal cell size distribution throughout is consistent with both cell formation and division being multiplicative stochastic processes. This finding leads us to suggest that the combined cell formation and division process can be described as a Markovian growth-fragmentation process <cit.>. Used e.g. in population science <cit.> and chemical engineering <cit.>, such processes are mathematically proven to give rise to log-normal distributions and scaling. Specifically, our understanding is as follows: individual dislocation entanglements appear randomly in time and space. Similar to particle creation by diffusion, they grow with a growth rate that is proportional to their size. Following impingement, the cell pattern exhibits scaling with θ_KAM∼ϵ. In the cross-over from formation to subdivision their area remains approximately constant, while new dislocations continue to build up the boundaries, the IDBs. The linear growth in dislocation content leads to a linear growth in the average misorientation across the IDBs. With the cells exhibiting no long range order, the IDB statistics can be modelled as a sum of stochastic processes, consistent with the misorientations being associated with a chi distribution <cit.>. Finally, during fragmentation the larger cells are more likely to divide. This is corroborated by a positive correlation between cell heterogeneity and cell size, cf. Fig. <ref>. In the past, log-normal-distributed plastic strain <cit.>, misorientation <cit.> and geometrically necessary dislocation densities <cit.> have been obtained experimentally with surface studies at the grain scale for polycrystalline metals deformed to strains exceeding ϵ = 0.1 and across a range of materials and operative micro-mechanical mechanisms like slip and twinning. Log-normal distributions have also been predicted for the plastic strain component by crystal plasticity simulations <cit.> of grain ensembles without any dislocations, and for cell size and misorientation angle by assuming a cell splitting probability proportional to the cell surface area and misorientation evolution by rotational diffusion <cit.>. Uniquely, x-ray imaging has the angular resolution that provides contrast for cells at all relevant applied strain levels. 
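As an illustration of the growth-fragmentation picture invoked above, the following toy simulation (a sketch for intuition only, not the model or analysis code used in this work; all parameter values are arbitrary) lets domains grow by random multiplicative increments and split, with a splitting probability proportional to their size. Since both growth and halving act multiplicatively, the resulting sizes are expected to approach a log-normal distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sizes = np.ones(100)                      # initial "entanglements" of arbitrary unit size

for step in range(200):
    # Multiplicative growth: each domain grows by a random factor
    sizes *= rng.lognormal(mean=0.02, sigma=0.05, size=sizes.size)
    # Fragmentation: larger domains are more likely to split (here into two halves)
    split = rng.random(sizes.size) < np.clip(0.002 * sizes, 0.0, 1.0)
    sizes = np.concatenate([sizes[~split], np.repeat(sizes[split] / 2.0, 2)])

# Kolmogorov-Smirnov test against a log-normal fitted to the same sample,
# analogous to the statistical test applied to the measured cell sizes
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
p = stats.kstest(sizes, "lognorm", args=(shape, loc, scale)).pvalue
print(f"{sizes.size} domains, KS p-value vs log-normal: {p:.2f}")
```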
The DFXM maps shown here represent a representative volume deeply embedded in a sample sufficiently large that the mechanical conditions represents bulk conditions. The maps presented can be input to crystal plasticity FEM or - in future - CDD models. Subsequently two 3D movies may be compared: one experimental and one with the simulations. As demonstrated in the past e.g. for grain growth such a comparison of hundreds of thousands of points in space-time will provide unprecedented opportunities for guiding and validating models <cit.>. The methodology presented applies to poly-crystals and all crystallographic space groups, limited mainly by the spatial resolution of 100 nm. To study the individual formation and fragmentation events we are commissioning a new goniometer and a more mechanically stable loading device, making it possible to track the micro-structural features in space and time by DFXM. Moreover, this may be complemented by mapping the local (purely elastic) longitudinal strain component <cit.>. Such extensive mapping is also relevant for studies of other processes such as annealing (recrystallisation) and ductile damage, as it will combine mapping of the deformed microstructure with large volume identification of nuclei and voids, respectively. § METHODS §.§ Sample The sample is a single crystal of 99.9999 % pure Aluminum of dimensions 1 × 1 × 20 mm. The tensile axis is (111). After cutting, the sample was annealed at 540 ^∘C for 10 hours. The sample was mounted by glue on a grooved PEEK holder with a gauge length of 5 mm. This was inserted in a four-point bending loading device with the sample on the tensile side. In this geometry the sample is subject to uniaxial tension. The tensile device is illustrated in Fig. <ref> and the resulting stress-strain curve in Fig. <ref>. §.§ DFXM experiment The DFXM experiments were conducted at Beamline ID06-HXM at the European Synchrotron Radiation Facility, ESRF. For details of the set-up see Kutsal et al. <cit.>. A monochromatic beam with an energy of 17 keV was focused to a line with a FWHM of ≈ 600 nm in the z_ℓ direction, illuminating a layer within the sample. The scattering angle for the Al {111} Bragg reflection is 2θ = 17.98 ^∘. The objective was a Be Compound Refractive Lens (CRL), with 88 lenslets with radius of curvature R = 50 μm, positioned at a sample-to-CRL-entry distance of d_1 = 269 mm and a CRL-exit-to-detector distance of d_2 = 4987 mm. The corresponding magnification and numerical aperture are ℳ = 18.52 (measured) and NA = 0.705 mrad (calculated from Eq. 9 in Poulsen et al. (2017) <cit.>), respectively. The 2D detector was located 5256 mm from the sample. With an additional magnification of 2 in the detector the effective pixel size was 656 nm (along x_l) × 202 nm (along y_l). The corresponding field of view in the sample is 350 μ m× 900 μ m. The scan parameters used are listed in Table 1 in Supplementary Information. The exposure time for a single image was 0.2 second including motor movements. We did not observe any creep in the microstructure over the time of the acquisitions (minutes to hours). §.§ Data analysis With the exception of the weak beam data, the entire analysis is based on the output of darfix <cit.>, as shown in Supplementary Videos 1 and 2. Specifically, for each voxel a 2D Gaussian is fitted to the (ϕ,χ) distribution. The subsequent analysis is detailed in Supplementary Information. 
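A minimal sketch of the per-voxel step described above is given below; it uses simple intensity moments rather than the least-squares 2D Gaussian fit performed by darfix, and all array and function names are illustrative rather than part of the darfix API.

```python
import numpy as np

def voxel_com_and_fwhm(intensity, phi, chi):
    # intensity : 2D array of counts over the (phi, chi) scan grid for one voxel
    # phi, chi  : 1D arrays of the scanned tilt angles (deg)
    P, C = np.meshgrid(phi, chi, indexing="ij")
    w = intensity / intensity.sum()
    com_phi = (w * P).sum()
    com_chi = (w * C).sum()
    # Gaussian approximation: FWHM = 2*sqrt(2*ln 2) * sigma
    k = 2.0 * np.sqrt(2.0 * np.log(2.0))
    fwhm_phi = k * np.sqrt((w * (P - com_phi) ** 2).sum())
    fwhm_chi = k * np.sqrt((w * (C - com_chi) ** 2).sum())
    return com_phi, com_chi, fwhm_phi, fwhm_chi

# Synthetic example: a single peaked (phi, chi) distribution
phi = np.linspace(-0.5, 0.5, 41)
chi = np.linspace(-0.5, 0.5, 41)
P, C = np.meshgrid(phi, chi, indexing="ij")
intensity = np.exp(-((P - 0.1) ** 2 / 0.02 + (C + 0.05) ** 2 / 0.01))
print(voxel_com_and_fwhm(intensity, phi, chi))
```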
§ DATA AVAILABILITY All the data presented in this paper, along with the analysis tools used for data evaluation, are available on GitHub at https://github.com/adcret/PMP. These resources include all data sets and scripts used to process and analyze the data following the initial treatment steps with the darfix software. A comprehensive description of the analysis can be found in "cell_refinement_analysis.ipynb"; further details about the packages and dependencies of this analysis tool are given in the "README.md" file. Data DOI: doi.org/10.15151/ESRF-ES-776857198 § ACKNOWLEDGEMENTS We thank Kristian Mølhave for suggesting the design for the tensile rig. This work was funded by the European Research Council (Advanced grant no 885022) and by the Danish Agency for Science and Higher Education (grant number 8144-00002B). C.Y. acknowledges the financial support by the ERC Starting Grant "D-REX" (no 101116911). We acknowledge ESRF for a PhD grant and for the provision of synchrotron radiation facilities under proposal number MA-4442 on beamline ID06-HXM. § AUTHOR CONTRIBUTIONS STATEMENT HFP wrote the manuscript with significant contributions from AZ and AC, followed by edits and correspondence from all co-authors. The conceptual idea of the study is due to GW and HFP. The planning and execution of the experiment was led by AZ, with contributions from all co-authors except AC. Primarily AZ but also AC performed data analysis under the supervision of HFP, GW and CY. FF, FG and CD designed and built the four point bender. § ADDITIONAL INFORMATION The authors declare no competing interests. § SUPPLEMENTARY INFORMATION Supplementary Notes I and Figs. S1–<ref>. Supplementary Video 1: Center-of-Mass (ϕ,χ) orientation maps. Left: the full field-of-view for the central layer with a region-of-interest, ROI, marked by the rectangular box. Right: a zoom-in corresponding to the ROI. The color scale employed varies from strain level to strain level and is indicated by the inverse pole figures inserted. Supplementary Video 2: Peak broadening maps. The voxel-by-voxel width (FWHM) of the average peak broadening Δ q = √(Δ q_ϕ^2 + Δ q_χ^2), with Δ q_ϕ and Δ q_χ being the FWHMs resulting from a 2D Gaussian fit to the (ϕ,χ)-distribution. Left: the full field-of-view for the central layer with a region-of-interest, ROI, marked by the rectangular box. Right: a zoom-in corresponding to the ROI. Supplementary Video 3: Auto-correlation maps. Center lines of the 2D autocorrelation function for the central layer along directions x_l and y_l. Supplementary Video 4: Center-of-Mass (ϕ,χ) orientation maps with KAM mask overlaid. The images are replicas of those in Supplementary Video 1 with a Kernel-Averaged-Misorientation mask (black lines) overlaid. Supplementary Video 5: Cell maps. Cells (colored regions) are identified as connected regions using a Kernel-Averaged-Misorientation (KAM) mask (black lines). The colors of the cells represent their average orientation, as indicated by the inverse pole figures inserted. Left: the full field-of-view for the central layer with a region-of-interest, ROI, marked by the rectangular box. Right: a zoom-in corresponding to the ROI. Supplementary Video 6: Peak broadening maps with KAM masks overlaid. The images are replicas of those in Supplementary Video 2 with the KAM mask (black lines) overlaid. 
§ SUPPLEMENTARY INFORMATION §.§ DFXM methodology The geometry of DFXM, coordinate systems, associated diffraction formalism and interfacing to micro-mechanical modelling is presented in detail in Poulsen et al., 2017 <cit.> and Poulsen et al., 2021<cit.>, while the details of the instrument at ID06-HXM, ESRF are presented in Kutsal et al., 2019<cit.>. For convenience we summarise the definitions and optical properties relevant for this article. The setup is illustrated in Fig. <ref> a). A nearly monochromatic beam illuminates the sample. This beam is condensed in the vertical direction to generate essentially a line beam. The goniometer is designed to access diffraction angles in a vertical scattering geometry, and probe reciprocal space in the immediate vicinity of a given reflection, here Q⃗ = (1,1,1). The optical axis of the objective is aligned to the diffracted beam to produce an image on the 2D detector. The objective acts as a classical microscope magnifying an objective plane within the sample onto the detector plane. This microscope is characterised by the numerical aperture, NA, and the focal distance, f_N of the objective, as well as the magnification of the x-ray signal, ℳ, and the Field-of-View, FOV, in the sample plane. Due to the layered incoming beam, the DFXM images are affine transformations of the structure in the (x_ℓ,y_ℓ)-plane with an effective pixel size that is 1/tan(2θ) larger along x_ℓ than y_ℓ. To generate 3D maps the sample is translated relative to the incident beam in direction z_ℓ and acquisition is repeated. In the simplified geometry used in this work, the goniometer has two angular degrees of freedom, the two orthogonal tilt directions ϕ and χ, with ϕ corresponding to rotation around y_ℓ. Diffraction contrast is obtained by scanning the sample in these two directions, known as rocking and rolling scans, respectively. These scans probe two shear components out of of the 9 components of the micro-mechanical tensor field <cit.>. As always, the two shear components comprise both a rotational and an elastic strain part. We cannot separate the two, but the magnitude of the field values implies that the rotation part is more dominant for the larger applied strains. To probe the field of the axial strain, the "2θ-arm" - comprising both the objective and the detector - may be rotated corresponding to a shift of Δ 2θ. However, this modality was not applied in this work. Instead the scans performed are mosaicity scans, 2D grid scans in ϕ and χ, repeated for a number of layers. As a result each voxel in the illuminated part of the sample are associated with a 2D distribution of intensity values, a function of ϕ and χ. A 2D Gaussian model to this distribution provides in general good fits to this intensity distribution. As a result five parameters are derived for each voxel: * the Center-of-Mass, COM, in ϕ and χ. We present these values in terms of poles in the (111) polefigure. Moreover, we quantify the evolution in the mosaicity map with ϵ in terms of texture evolution, by setting the third orientation axis, not measurable from only one diffraction vector, to a constant. * the normalised peak widths, Δ q_ϕ and Δ q_χ, in units of strain. These are the widths (FWHM) of the fitted Gaussian in the two directions. To reduce complexity these results are combined by calculating the scalar width Δ q = √(Δ q_ϕ^2 + Δ q_χ^2 ). * the amplitude. In kinematic diffraction theory this is proportional to the fraction of the voxel that is illuminated. 
In practice this is also influenced by vignetting and by uneven sampling in reciprocal space caused by the objective. Notably, the reciprocal space part of the experimental resolution function of the microscope is very asymmetric. A Monte Carlo simulation is provided in Fig. <ref> b) <cit.>, representative of the setting used here. The width (FWHM) of the resolution function in direction χ is defined by the NA of the objective, here 0.705 mrad. In contrast the width of the resolution function in direction ϕ is dominated by the incoming divergence of the beam. This is 10 times smaller, of order 0.1 mrad. In comparison the local angular spread within the prestine sample was determined to be 2.0 mrad and 0.77 mrad, respectively. Moreover, the data acquisition algorithm involved a continuous scanning over ϕ where intensities are integrated over 0.04 deg = 0.69 mrad. Hence, it appears the finite angular resolution can be neglected for most of the work below. §.§ Relationship between local peak broadening and dislocation density The relation between the broadening of diffraction peaks and dislocation density is well studied when it comes to the longitudinal direction. Within the limitation of a dilute system the longitudinal strain can be described in terms of the dislocation density, ρ, as<cit.> Δ q_long = Δ d/d =[ b^2/4π C_g ln( R_e/r_0) ]^1/2√(ρ). In this equation there are three constants: b, the modulus of the Burgers vector and R_e and r_0, the outer and inner cut-off of the dislocation system, respectively. The contrast factor C_g depends on the specifics of the active dislocation systems, but will be approximately constant during loading for the process studied here. This peak width purely reflects the elastic response. In contrast broadening along the two shear directions will be combinations of elastic and plastic contributions, for higher applied strains dominated by the plastic spin part (rotation of the lattice). In Electron Back-Scatter Diffraction, EBSD, the misorientation between neighboring pixels is used to derive the geometrically necessary dislocation content <cit.>. The analysis can be seen as a generalisation of the simple relationship between the misorientation angle θ_mis across a dislocation wall comprising identical and equi-spaced edge dislocations with neighboring distance a and corresponding density ρ. For small θ_mis θ_mis/2 = b/a ∼ b√(ρ). The same procedure applies to DFXM, where it is possible to derive all tensorial components, provided a 3D map is acquired. However, the spatial resolution function has to be taken into account. Thanks to its high angular resolution, DFXM offers another modality, based on the local peak broadening in the shear directions ϕ and χ. For each voxel a local (ϕ,χ) distribution is acquired. This informs of the total dislocation configuration. A full exposure is outside the scope of this article. Here we will rely only on the two second moments of the distribution, the normalised peak widths Δ q_ϕ and Δ q_χ introduced above. From these we define the scalar average normalised peak Δ q by Eq. <ref>. By analogy to Eqs. <ref> and <ref> we make the ansatz that the measured Δ q is proportional to √(ρ) and that the proportionality constant is independent of ϵ within the range explored here. It is at times of interest to determine a proxy for the dislocation density within a local region, e.g. in the vicinity of a cell boundary. The Gaussian approximation applied implies that Δ q^region = ( ∑_i ∈ region (Δ q_i)^2 + KAM_i^2 )^1/2 . 
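In practice these definitions reduce to simple voxel-wise array operations. The sketch below uses synthetic arrays with illustrative names; note that Δ q is used only as a relative proxy for √(ρ), since the proportionality constant is not determined here.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (256, 256)

# Synthetic per-voxel peak widths (units of strain) and KAM map standing in for the measured quantities
dq_phi = np.abs(rng.normal(1.0e-3, 2.0e-4, shape))
dq_chi = np.abs(rng.normal(1.0e-3, 2.0e-4, shape))
kam    = np.abs(rng.normal(5.0e-4, 1.0e-4, shape))

# Scalar local peak width, Delta q = sqrt(Dq_phi^2 + Dq_chi^2)
dq = np.hypot(dq_phi, dq_chi)

# Relative proxy for the total dislocation density (Delta q ~ sqrt(rho), prefactor unknown)
rho_relative = dq ** 2

# Aggregate width for a region, Delta q_region = ( sum_i (Dq_i^2 + KAM_i^2) )^(1/2)
region = np.zeros(shape, dtype=bool)
region[100:110, 100:140] = True          # e.g. the voxels belonging to one boundary
dq_region = np.sqrt(np.sum(dq[region] ** 2 + kam[region] ** 2))

print(dq.mean(), dq_region)
```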
To provide absolute numbers for dislocation densities, it is required to use data acquisition schemes involving more than one scattering vector. However, we argue that the arguments and conclusions presented are not restricted by this limitation. In particular we note that for stochastic process, the central-limit-theorem - and its extension to multiplicative processes - implies that contributions from different dislocation families (probed by different scattering vectors) add in a way that conserves the shape of the distributions. §.§ Load frame and stress-strain curve The specimen is glued to the upper surface of a notched polymer holder that is subject to bending, see Fig. <ref>. Due to the small cross-section of the sample compared to the polymer holder and its position far away from the neutral axis, the load case is considered to be uniaxial tension. The bending is imposed through linear deflection of the two upper rolls of the setup using a micrometer screw. A "calibration" of the axial macroscopic strain in the sample and deflection in rig through the micrometer screw has been performed using an optical light microscope and DIC, see Fig. <ref>. Between each DFXM scan the applied strain was increased manually. The strain rate is of order 10^-4/s. In Fig. <ref> we display the resulting relation between the force acted on the four point bender and the strain. Apparently, the force is nearly linear in the strain above the yield point. §.§ Data acquisition scheme and overview of data analysis Apart from weak beam data acquired for ϵ≤ 0.002 the scans performed during the in situ experiment are all mosaicity scans: two-dimensional scans of ϕ and χ. The scan parameters are listed in Table <ref>. Due to small mis-alignments between load steps it is not the exact same set of layers that is mapped. Hence, tracking is not feasible with the mechanical device used, but statistical comparisons and tests are. To improve the statistics the analysis performed for ϵ = 0.002, 0.005 and 0.046 are based on 9 layers. (Tests proved the results for the individual layers to be identical within statistical error.) For reasons of computer resources, the analysis for the other strain step are based on the middle layer only. The experimental data for all layers and all strain steps are available in the metadata. Most of the data analysis is based on the initial use of darfix for each layer. darfix generates pole-figures, cf. section <ref> and (ϕ,χ)-distributions for each voxel. As described in section <ref> the latter are generally well described as 2D Gaussian distributions. The resulting center-of-mass orientation maps and peak broadening maps are provided as Supplementary Videos 1 and 2, respectively. Note that for ϵ = 0.035 and 0.046 there are minor voids in the maps, caused by a lack of intensity, as these parts had orientations slightly outside the (ϕ,χ)-range mapped. Next cells are identified by the use of a Kernel-Averaged-Misorientation filter, see section <ref> and Supplementary Video 5. This work is supplemented by an analysis of ordering on μm length scale, presented in section <ref>. §.§ Macroscopic evolution Initially we report on the evolution of structural properties when averaged over all voxels. Shown in Fig. <ref> are the resulting Pole figures for the middle layer for strain steps ϵ = 0, 0.013, 0.035 and 0.046. This output was generated by darfix<cit.>. It represents the summed intensity over all images. 
In the undeformed state the sample is a bi-crystal, with a mis-orientation between the two domains of ∼ 0.15 deg. At higher applied strain levels the distribution becomes an approximately isotropic Gaussian. Shown in Fig. <ref> a) is the evolution with ϵ of the resulting widths (FWHM) along ϕ and χ when fitting a Gaussian to the pole figures. Shown in Fig. <ref> is the corresponding evolution of the local peak broadening. Apart from an offset at zero - reflecting that the pristine sample was not perfect - all parameters appear to be nearly linear in the applied strain. Moreover, we see that the local rotations (subfigure b) on average are approximately 1/3 of the global ones (subfigure a). §.§ The pristine sample and the initial deformation The pristine sample is a bi-crystal with a low-angle boundary exhibiting a mis-orientation of 0.15 deg, cf. the pole figure, Fig. <ref>, and the orientation map in Supplementary Video 1. This low angle boundary is clearly evident in the peak broadening map, cf. Fig. <ref> a). The dislocation content appears approximately constant at a level of 0.32 deg, about twice the misorientation angle. We interpret the excess angle as indicative of an elastic strain in the boundary of ∼ 0.005. The spatial width of the boundary is in some places defined by the resolution, at others it is extended by up to 1 μm, cf. Fig. <ref> b). Shown in Fig. <ref> c) is the result of applying a threshold to the Δ q map. A corresponding Kernel-Average-Misorientation (KAM) mask is shown in Fig. <ref> d). In both cases, the resulting "dislocation clusters" appear isolated and approximately randomly distributed (with the exception of the low angle boundary). The individual clusters differ substantially in shape and size between the two images: we attribute this to the difference in dislocation population, strain components and noise. In Fig. <ref> e) and f) we have quantified the cluster size distributions using ImageJ. Given the population size, both histograms are well described by log-normal distributions. The clusters as defined from thresholds on Δ q or KAM are somewhat arbitrary, as they depend on the thresholds. As a complementary way to describe the dislocation ensemble, we can set a threshold on the weak beam image of the kind shown in Fig. <ref> b). A histogram of the resulting size distribution is again well described by a log-normal distribution. Hence, three different analysis approaches all support the conclusion that the pristine sample comprises a set of dilute, non-interacting dislocation clusters, which are distributed randomly but homogeneously over the Field-of-View and with a size distribution which is consistent with a log-normal distribution. §.§ Long range order: autocorrelation based analysis Using an autocorrelation function to determine order is at the root of diffraction and crystallography <cit.>. Following our previous work on an ex situ sample <cit.> we here use it to determine order on the length scale of micrometers. The script for the 2D autocorrelation itself is the same as used in the previous work; essentially it is based on use of the MATLAB function xcorr2. The autocorrelation functions for the undeformed and the most deformed states are shown in Fig. <ref>. The projections along the center lines are shown for all applied strain steps in Supplementary Video 3. The widths (FWHM) of the central peak in the auto-correlation function along x_l and y_l are defined as the two "coherence lengths", 2ξ_x and 2ξ_y, respectively. 
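An FFT-based numpy equivalent of this autocorrelation analysis (an illustrative sketch, not the MATLAB xcorr2 script actually used) is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def autocorrelation_2d(field):
    # Normalised 2D autocorrelation of a mean-subtracted orientation map (periodic boundaries assumed)
    f = field - field.mean()
    F = np.fft.fft2(f)
    ac = np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)
    return ac / ac.max()

def coherence_length(profile, pixel_size):
    # FWHM of the central autocorrelation peak, i.e. 2*xi, in physical units
    above = np.where(profile >= 0.5 * profile.max())[0]
    return (above[-1] - above[0]) * pixel_size

# Synthetic orientation map with a correlation length of roughly 16 pixels
rng = np.random.default_rng(2)
field = gaussian_filter(rng.normal(size=(512, 512)), sigma=5)

ac = autocorrelation_2d(field)
cy, cx = np.array(ac.shape) // 2
# Illustrative pixel sizes (micrometres), cf. the effective pixel sizes quoted in the Methods section
print(coherence_length(ac[cy, :], pixel_size=0.656),
      coherence_length(ac[:, cx], pixel_size=0.202))
```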
The resulting widths are tabulated in Table <ref>. In contrast to the ex situ DFXM study on a similar sample but of a different orientation <cit.>, there is here no evidence of long-range order emerging from the autocorrelation function. (The one exception is the side lobes along the y_l-direction for ϵ = 0 and 0.005 - these are due to the bicrystal nature of the pristine sample.) The continuous decrease in the coherence lengths is attributed to the cell formation process. We note an anisotropy, with the coherence length in the x_ℓ-direction consistently being 2-3 times larger than the one in the y_ℓ-direction. In Fig. <ref> we present the autocorrelation function in directions x_ℓ and y_ℓ for ϵ = 0.046. Superposed is the autocorrelation of a hard sphere model with non-interacting ellipses with a size distribution given by the anisotropic log-normal distribution found in section <ref>. In order to compare the model and experimental data, stereology has to be taken into account. The autocorrelation function measures chord lengths. As such the width, 2ξ, is related to the area, A, by 2ξ = (4/π) r = (4/(π√(π))) √(A). On the other hand, the size, s, determined from the KAM is defined as s = √(A). Hence, the hard sphere model is scaled by a factor 4/(π√(π)). Given the fact that there are no free parameters in this comparison, this is an excellent correspondence, consistent with the model that cells are fully formed at these points in time. The autocorrelation profiles for ϵ = 0.035 and ϵ = 0.024 both exhibit a similar shape, but with decreasing applied strain the correlation lengths become larger and increasingly different from the cell size, consistent with the cell formation not being complete. At ϵ≤ 0.013 the autocorrelation function appears to be a superposition of two functions, as shown in Supplementary Video 3. We interpret these data as evidence for a "two-phase" system. Part of the sample is in the process of forming cells. Another part is still "undeformed", and this part consequently exhibits a longer correlation length. §.§ Cell analysis based on a variable KAM threshold EBSD-based analysis typically defines grains/cells by means of a mask derived from a map of the Kernel-average misorientation, KAM <cit.>. Following this tradition we will in the following define cells based on a threshold for the misorientation θ_mis and an isotropic kernel of size 2. As DFXM with one reflection only maps two orientation degrees of freedom, the definition for misorientation is revised to be θ_mis = √( (Δϕ)^2 + (Δχ)^2 ). Shown in Fig. <ref> a) is a segmented image for ϵ = 0.046. The fraction of material below the threshold is in this case 71 %. Next, morphological operations are applied to this binary image. Specifically, this image is skeletonized and then dilated by 1 pixel, implying that all cell boundaries have a thickness of 3 pixels. For the analysis in general the boundary thickness is 1 pixel. The resulting KAM mask is shown in Fig. <ref> b). This is overlaid on the orientation map in the last frame of Supplementary Video 4. Based on an ansatz of a linear scaling with the applied strain we apply a variable KAM threshold that increases with the applied strain. Specifically, we keep the fraction of the segmented image below the threshold constant at ∼ 71 %. This corresponds to thresholds of 0.10, 0.20, 0.32 and 0.41 deg for ϵ = 0.013, 0.024, 0.035 and 0.046, respectively. The almost linear correlation is remarkable and justifies the linear ansatz. 
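A schematic version of this segmentation chain (KAM computation, a percentile-based threshold that mimics keeping the segmented fraction constant, skeletonisation and labelling) might look as follows. It imitates, but is not identical to, the released analysis code, and the synthetic input maps are for illustration only.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize, binary_dilation

def kam(phi, chi, size=2):
    # Kernel-average misorientation, theta_mis = sqrt(dphi^2 + dchi^2), averaged over a (2*size+1)^2 kernel
    mis, n = np.zeros_like(phi), 0
    for dy in range(-size, size + 1):
        for dx in range(-size, size + 1):
            if dy == 0 and dx == 0:
                continue
            p = np.roll(np.roll(phi, dy, axis=0), dx, axis=1)
            c = np.roll(np.roll(chi, dy, axis=0), dx, axis=1)
            mis += np.sqrt((phi - p) ** 2 + (chi - c) ** 2)
            n += 1
    return mis / n

def segment_cells(phi, chi, wall_fraction=0.29):
    k = kam(phi, chi)
    theta = np.quantile(k, 1.0 - wall_fraction)      # keeps ~71 % of voxels below the threshold
    walls = binary_dilation(skeletonize(k > theta))  # skeletonize, then dilate -> 3-pixel-wide walls
    cells, n_cells = ndimage.label(~walls)           # connected regions between walls
    return cells, n_cells

# Synthetic smooth orientation maps (deg) standing in for the measured (phi, chi) maps
rng = np.random.default_rng(3)
phi = ndimage.gaussian_filter(rng.normal(scale=0.2, size=(400, 400)), 8)
chi = ndimage.gaussian_filter(rng.normal(scale=0.2, size=(400, 400)), 8)

cells, n_cells = segment_cells(phi, chi)
areas = ndimage.sum(np.ones_like(cells, dtype=float), cells, index=np.arange(1, n_cells + 1))
print(n_cells, "cells, median size", float(np.median(np.sqrt(areas))), "pixels")
```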
The KAM algorithm leads to the generation of a site-filling map of domains. For lower degrees of applied strain one or more of these domains will represent the matrix: the undistorted part of the sample. Such domains (here defined by having areas larger than (25 μm)^2) are removed from the set. So are domains with a size smaller than 10 pixel units. Finally, due to the (ϕ,χ) region scanned being a bit too small, a minor fraction of the ϵ = 0.035 and 0.046 maps is filled with voids - again these regions have been removed. The remaining domains are identified as cells. The resulting cell maps are provided as Supplementary Video 5, with some of the frames also reproduced in Fig. <ref>. The scipy function name enables the generation of statistics of the structural parameters embedded in the cell map. In addition, nearest neighbors are identified by dilating each cell slightly and detecting overlapping voxels. The volume fraction and mean cell sizes are reproduced as a function of the applied strain in Fig. <ref>. To enable a direct comparison between load steps, the number of cells and area fractions have been calibrated to the same total area. Other key statistical parameters are provided in Table <ref>. §.§.§ Cell shape anisotropy Shape information arising from statistics on the ϵ = 0.046 set of cells is summarised in Fig. <ref>. The cells are on average elongated in the x_l-direction, consistent with the result of the auto-correlation analysis. §.§.§ Cell mis-orientations The local orientation relationships for the middle layer at applied strain ϵ = 0.046 have been characterised in two ways: Firstly, the misorientation angle and misorientation direction between neighboring cells are deduced from the set of cells generated above. The resulting mis-orientation angle and axis distributions are shown in Fig. <ref>. The sample appears to be isotropic to a high degree of accuracy. Secondly, as shown in Fig. <ref> a) we define a region-of-interest, ROI, around one of the poles in the pole-figure. The area of the ROI is 0.1 × 0.2. The subset of cells with orientations within the ROI, S_ROI, is shown in Fig. <ref> b). Next, distances d_ij between the center positions ((x_l)_i, (y_l)_i) of all cells i,j ∈ S_ROI were calculated. The angular distribution of d_ij is shown in Fig. <ref> c). Again, within experimental error the data are completely isotropic. §.§.§ Cell size distribution - comparison with theory A survey of the compatibility of a set of model distribution functions with the cell size distribution generated by the KAM filter for ϵ = 0.046 is shown in Fig. <ref>. Visual inspection clearly identifies the log-normal distribution as superior. A non-parametric test, the Kolmogorov-Smirnov test, was used to quantify the fits. The p values listed in the legends in Fig. <ref> represent the likelihood that the experimental data can be represented by the various distributions. With p = 0.57 the data are clearly consistent with the log-normal distribution, which is remarkable given the large population. On the other hand, with the usual threshold for statistical significance (p_limit = 0.05) none of the other distributions pass the test. The fitting and tests were performed by standard Python SciPy code. Remarkably, the log-normal distribution is a good approximation in the entire range where applying a KAM filter is meaningful: from ϵ = 0.013 to ϵ = 0.046, cf. Fig. <ref>. The resulting optimised values μ and σ are listed in Table <ref>, as well as the mean (by definition equal to exp(μ + σ^2/2)). 
Also shown is the p value for the corresponding test. §.§.§ Cell analysis with fixed KAM threshold For completeness we provide an analysis similar to that of section <ref> for the case of keeping the KAM threshold fixed at the value optimised for ϵ = 0.046: θ_mis = 0.034. Results for the volume fraction of cells, mean size and cell size distribution are shown in Figs. <ref> and <ref>, respectively. The conclusions in relation to the size distribution are identical to those obtained with a variable threshold, except for the cells forming later, as is to be expected. §.§ Dislocation density distribution In Fig. <ref> we plot the distribution of the average local peak broadening in directions ϕ and χ. As described in Section <ref>, this is a proxy for the square root of the total dislocation density. It appears that a fit to a bimodal Gaussian distribution is satisfactory for ϵ = 0.024, 0.035 and 0.046. Following the modelling work by Chen et al. <cit.> it is natural to identify the two peaks as arising from regions where the response is elastic and plastic, respectively. We note that the statistics are not sufficient to determine whether the plastic component is normal or log-normal in nature. For convenience we here used a normal distribution. Moreover, by inspection of the corresponding maps, see Supplementary Video 2, we infer that these two peaks approximately correspond to nearly dislocation-free regions and "stored dislocation regions": cell interiors and walls. The resulting optimized parameters are provided in Table <ref>. To first order the area fraction of the two components is constant. As illustrated in Fig. <ref>, the average peak width of the dislocation-rich component grows approximately linearly with the applied strain. Shown in Fig. <ref> is the spatial distribution of (the square root of) the total dislocation densities in relation to the cell boundaries, as defined by the KAM filter. Consistent with results from TEM, the boundaries tend to be decorated with dislocations within a boundary region of order 1 micrometer. In addition, we identify some larger regions of relatively high density. Some cells have a high density throughout their area. Statistical tests show that these are all relatively small cells. We interpret them as being the "bottom" or "top" of larger cells which have their center-of-mass outside the layer inspected. Inspection also reveals that while most of the cells have a uniformly low density inside, some are heterogeneous in the sense that there are one or more subdomains with significantly larger density. Fig. <ref> confirms the existence of a positive correlation between such heterogeneity and cell size. Speculating that such high-density sub-domains are precursors to new cells, and that the likelihood for fragmentation on average is linear in area, this insight outlines a mechanism that conforms with the concept of multiplicative stochastic processes. This hypothesis implies a birth process that is the opposite of the one in recrystallization, where the nuclei are less deformed than the matrix. §.§ Comparison with literature values for larger applied strains To the knowledge of the authors there are no quantitative microstructural data in the literature for the strain range studied in this work. In the following we compare with the work of Huang, Hansen et al. <cit.>. They report on the TEM results of tensile-strained polycrystalline 99.996 % pure aluminium in the range ϵ = 0.05 to ϵ = 1.0. Three characteristic types of microstructures appear as a function of grain orientation. 
For [111] they report on the so-called type 3 structure, a cell block structure where the dense dislocation walls are rotated by about 40 degrees to the TA. However, this characterisation was based on thin foil studies in a plane with a normal ∥ TA. In contrast, TEM data acquired in the plane ⊥ TA exhibited no discernible band structure <cit.>. The current data are acquired in a plane that is rotated by only θ = 10 deg from the plane with a normal ⊥ TA. The spatial anisotropy observed may be related to an out-of-plane ordering, but similar to TEM, otherwise there is no trace of band structure in the plane observed. In Fig. <ref> we compare the average cell sizes and misorientation angles with the aforementioned TEM data. Given the different definitions of size, the TEM data should be multiplied by a stereological factor of π√(π)/4 ≈ 1.4 for a direct comparison. In relation to the misorientation angle, on average the TEM values should be √(3/2) ≈ 1.22 times larger, given the fact that TEM measures all three components of the orientation while the DFXM data relate to only two. Given also the difference between single crystal and polycrystal samples, the correspondence with the TEM data is seen as excellent. §.§ Complementary study on a (111) single crystal with the TA within the inspection plane To confirm the finding from TEM that Geometrically Necessary Boundaries exist in this system, but are only visible when using a different inspection plane, cf. Section <ref>, we here report on a supplementary DFXM experiment on a different specimen, but with the same sample material and dimensions as in the main text. Moreover, the x-ray set-up was essentially identical, but the experiment was performed after the microscope was relocated to the new dedicated DFXM beamline, ID03 at ESRF. Deformed to an applied strain of ϵ = 0.053, the tensile axis was still [111], but the DFXM mapping was conducted using the (2-20) reflection perpendicular to the tensile axis as the diffraction vector. The resulting orientation map is shown in Fig. <ref>. The orientation spread is similar, but in this plane there is clear evidence of the GNBs. §.§ Mathematics in relation to log-normal and chi distributions In this section we provide the parameterisations used in this work for the two distributions and we present the conditions for scaling. The log-normal function, f(x), is a normalised function, parameterised as follows: f(x) = 1/(√(2π) σ x) exp( -(ln x - μ)^2/(2σ^2) ) With this parameterisation, key properties of the distribution are given in Table <ref>. For x →∞ we have f(x) ∼ (1/x) exp( -(ln x)^2 ). To create a log-normal distribution with a given mean μ_X and variance σ_X^2, we generate cells using the random function (where Z is a normal distribution) X = e^(μ + σ Z) with μ = ln( μ_X^2/√(μ_X^2 + σ_X^2)) and σ^2 = ln( 1 + σ_X^2/μ_X^2). Next, we establish the conditions for scaling. Consider two log-normal functions f(x, μ', σ') and g(x, μ, σ). Then these exhibit scaling with a factor k if and only if f(x, μ', σ') = k g(kx, μ, σ). For this to be true at all x we have μ' = μ - ln k; σ' = σ. The chi function f(x) is a normalised function: f(x;σ,k) = 1/(2^(k/2-1) Γ(k/2)) (x/σ)^(k-1) exp( -(1/2)(x/σ)^2 ) Here k represents the number of degrees of freedom. For k = 2 it becomes the Rayleigh distribution, for k = 3 the Maxwell-Boltzmann speed distribution. With this parameterisation, key properties of the distribution are given in Table <ref>. 
To create a chi distribution with a given mean μ_X and k degrees of freedom, we generate cells using the random function (where the Z_i are normal distributions in the i = 1… k directions) X = σ √(∑_i=1^k Z_i^2), with the scale σ fixed by the required mean. All chi distributions with the same k scale automatically.
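These parameterisations translate directly into sampling and testing code. A brief SciPy-based sketch is given below (parameter values are arbitrary; scipy.stats.lognorm is parameterised with s = σ and scale = e^μ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Log-normal cell sizes with prescribed mean mu_X and standard deviation sigma_X
mu_X, sigma_X = 2.0, 1.0
mu = np.log(mu_X**2 / np.sqrt(mu_X**2 + sigma_X**2))
sigma = np.sqrt(np.log(1.0 + sigma_X**2 / mu_X**2))
sizes = np.exp(mu + sigma * rng.normal(size=40_000))
print(sizes.mean(), sizes.std())          # ~ (2.0, 1.0)

# Scaling: f(x; mu - ln k, sigma) equals k * f(k x; mu, sigma)
k = 1.5
x = np.linspace(0.1, 10, 5)
lhs = stats.lognorm.pdf(x, s=sigma, scale=np.exp(mu - np.log(k)))
rhs = k * stats.lognorm.pdf(k * x, s=sigma, scale=np.exp(mu))
print(np.allclose(lhs, rhs))              # True

# Chi-distributed misorientation angles with k_dof degrees of freedom (unit scale)
k_dof = 2
theta = np.sqrt(np.sum(rng.normal(size=(40_000, k_dof))**2, axis=1))
print(stats.kstest(theta, "chi", args=(k_dof,)).pvalue)   # p-value of KS test vs the chi distribution
```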
http://arxiv.org/abs/2406.08612v1
20240612194440
Observation of Declination Dependence in the Cosmic Ray Energy Spectrum
[ "The Telescope Array Collaboration", "R. U. Abbasi", "T. Abu-Zayyad", "M. Allen", "J. W. Belz", "D. R. Bergman", "I. Buckland", "W. Campbell", "B. G. Cheon", "K. Endo", "A. Fedynitch", "T. Fujii", "K. Fujisue", "K. Fujita", "M. Fukushima", "G. Furlich", "Z. Gerber", "N. Globus", "W. Hanlon", "N. Hayashida", "H. He", "K. Hibino", "R. Higuchi", "D. Ikeda", "T. Ishii", "D. Ivanov", "S. Jeong", "C. C. H. Jui", "K. Kadota", "F. Kakimoto", "O. Kalashev", "K. Kasahara", "Y. Kawachi", "K. Kawata", "I. Kharuk", "E. Kido", "H. B. Kim", "J. H. Kim", "J. H. Kim", "S. W. Kim", "R. Kobo", "I. Komae", "K. Komatsu", "K. Komori", "C. Koyama", "M. Kudenko", "M. Kuroiwa", "Y. Kusumori", "M. Kuznetsov", "Y. J. Kwon", "K. H. Lee", "M. J. Lee", "B. Lubsandorzhiev", "J. P. Lundquist", "A. Matsuzawa", "J. A. Matthews", "J. N. Matthews", "K. Mizuno", "M. Mori", "M. Murakami", "S. Nagataki", "M. Nakahara", "T. Nakamura", "T. Nakayama", "Y. Nakayama", "T. Nonaka", "S. Ogio", "H. Ohoka", "N. Okazaki", "M. Onishi", "A. Oshima", "H. Oshima", "S. Ozawa", "I. H. Park", "K. Y. Park", "M. Potts", "M. Przybylak", "M. S. Pshirkov", "J. Remington", "C. Rott", "G. I. Rubtsov", "D. Ryu", "H. Sagawa", "N. Sakaki", "R. Sakamoto", "T. Sako", "N. Sakurai", "S. Sakurai", "D. Sato", "S. Sato", "K. Sekino", "T. Shibata", "J. Shikita", "H. Shimodaira", "B. K. Shin", "H. S. Shin", "K. Shinozaki", "J. D. Smith", "P. Sokolsky", "B. T. Stokes", "T. A. Stroman", "Y. Takagi", "K. Takahashi", "M. Takeda", "R. Takeishi", "A. Taketa", "M. Takita", "Y. Tameda", "K. Tanaka", "M. Tanaka", "S. B. Thomas", "G. B. Thomson", "P. Tinyakov", "I. Tkachev", "T. Tomida", "S. Troitsky", "Y. Tsunesada", "S. Udo", "F. Urban", "I. A. Vaiman", "M. Vrábel", "D. Warren", "K. Yamazaki", "Y. Zhezher", "Z. Zundel", "J. Zvirzdin" ]
astro-ph.HE
[ "astro-ph.HE" ]
APS/123-QED Department of Physics, Loyola University Chicago, Chicago, Illinois 60660, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics, Loyola University Chicago, Chicago, Illinois 60660, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute of Physics, Academia Sinica, Taipei City 115201, Taiwan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute of Physics, Academia Sinica, Taipei City 115201, Taiwan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: KIPAC, Stanford University, Stanford, CA 94305, USA Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Presently at: Purple Mountain Observatory, Nanjing 210023, China Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Kofu, Yamanashi 400-8511, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics, Sungkyunkwan University, Jang-an-gu, Suwon 16419, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics, Tokyo City University, Setagaya-ku, Tokyo 158-8557, Japan Faculty of Engineering, Kanagawa University, Yokohama, Kanagawa 221-8686, Japan Institute for Nuclear 
Research of the Russian Academy of Sciences, Moscow 117312, Russia Faculty of Systems Engineering and Science, Shibaura Institute of Technology, Minato-ku, Tokyo 337-8570, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Astrophysical Big Bang Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: Physics Department, Brookhaven National Laboratory, Upton, NY 11973, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: Korea Institute of Geoscience and Mineral Resources, Daejeon, 34132, Korea Department of Physics, Sungkyunkwan University, Jang-an-gu, Suwon 16419, Korea Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Service de Physique Théorique, Université Libre de Bruxelles, Brussels 1050, Belgium Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Department of Physics, Yonsei University, Seodaemun-gu, Seoul 120-749, Korea Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea Department of Physics, Sungkyunkwan University, Jang-an-gu, Suwon 16419, Korea Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Center for Astrophysics and Cosmology, University of Nova Gorica, Nova Gorica 5297, Slovenia High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Astrophysical Big Bang 
Laboratory, RIKEN, Wako, Saitama 351-0198, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Faculty of Science, Kochi University, Kochi, Kochi 780-8520, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan College of Science and Engineering, Chubu University, Kasugai, Aichi 487-8501, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Quantum ICT Advanced Development Center, National Institute for Information and Communications Technology, Koganei, Tokyo 184-8795, Japan Department of Physics, Sungkyunkwan University, Jang-an-gu, Suwon 16419, Korea Department of Physics and The Research Institute of Natural Science, Hanyang University, Seongdong-gu, Seoul 426-791, Korea High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Presently at: Doctoral School of Exact and Natural Sciences, University of Lodz, 90-237 Lodz, Poland Astrophysics Division, National Centre for Nuclear Research, Warsaw 02-093, Poland Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Sternberg Astronomical Institute, Moscow M.V. 
Lomonosov State University, Moscow 119991, Russia Presently at: NASA Marshall Space Flight Center, Huntsville, Alabama 35812 USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Department of Physics, Sungkyunkwan University, Jang-an-gu, Suwon 16419, Korea Institute for Nuclear Research of the Russian Academy of Sciences, Moscow 117312, Russia Department of Physics, School of Natural Sciences, Ulsan National Institute of Science and Technology, UNIST-gil, Ulsan 689-798, Korea Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Academic Assembly School of Science and Technology Institute of Engineering, Shinshu University, Nagano, Nagano 380-8554, Japan Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Department of Physics, School of Natural Sciences, Ulsan National Institute of Science and Technology, UNIST-gil, Ulsan 689-798, Korea Graduate School of Science, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University, Sugimoto, Sumiyoshi, Osaka 558-8585, Japan Astrophysics Division, National Centre for Nuclear Research, Warsaw 02-093, Poland High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84112-0830, USA Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Earthquake Research Institute, University of Tokyo, Bunkyo-ku, Tokyo 277-8582, Japan Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa-shi, Osaka 572-8530, Japan Graduate School of Information Sciences, Hiroshima City 
The Telescope Array Collaboration § ABSTRACT We report on an observation of the difference between northern and southern skies of the ultrahigh energy cosmic ray energy spectrum with a significance of ∼8σ. We use measurements from the two largest experiments—the Telescope Array observing the northern hemisphere and the Pierre Auger Observatory viewing the southern hemisphere. Since the comparison of two measurements from different observatories introduces the issue of possible systematic differences between detectors and analyses, we validate the methodology of the comparison by examining the region of the sky where the apertures of the two observatories overlap. Although the spectra differ in this region, we find that there is only a 1.8σ difference between the spectrum measurements when anisotropic regions are removed and a fiducial cut in the aperture is applied. Observation of Declination Dependence in the Cosmic Ray Energy Spectrum J. Zvirzdin June 17, 2024 ======================================================================= § INTRODUCTION Ultrahigh energy cosmic rays (UHECRs) are believed to be charged particles with energies greater than 10^18 eV, originating from outer space. Examining their energy spectrum is crucial because the features in the spectrum provide information on their potential sources and their propagation across the universe.
An example of this is the high energy cutoff, first found by the High Resolution Fly's Eye experiment (HiRes) <cit.> and later confirmed by the Pierre Auger Observatory (Auger) <cit.> and the Telescope Array (TA) experiment <cit.>. The TA <cit.> and Auger <cit.> are currently the two largest UHECR observatories in operation. TA observes the northern hemisphere, while Auger views the southern hemisphere. Both observatories consist of fluorescence detectors (FDs) and surface detectors (SDs). Because the FDs operate only on clear moonless nights, the SD data contain about an order of magnitude more events. For this reason, SD data is preferred for spectral and anisotropy studies. In this work, we present the difference between the TA and Auger spectra at the highest energies, which has an ∼8σ significance. This result is surprising and its validation is necessary. The TA spectrum views the declination region -15.7^∘ < δ < +90^∘, while the Auger spectrum observes -90^∘ < δ < +24.8^∘. Therefore, there is a region of overlapping view between -15.7^∘ and +24.8^∘, which we call the common declination band. In this region, one would expect that the TA and Auger measurements should agree. However, this is true only if the energy spectra are independent of declination and the apertures of the two experiments are identical. We will discuss the impact of these effects in this paper. In the following sections, we provide an overview of the TA SD systems in Section <ref>, detail the datasets utilized for this study in Section <ref>, and present our results in Section <ref>. We describe the spectra in the common declination band and comment on anisotropy regions in Section <ref> and provide a summary in Section <ref>. Finally, Appendix <ref> presents the cosmic ray flux measured by the TA SD systems. § TELESCOPE ARRAY SURFACE DETECTOR The Telescope Array is located near the city of Delta, Utah, USA, in the west desert at coordinates (39.3^∘ N, 112.9^∘ W), with an elevation of 1400 m above sea level. The TA SD array <cit.> consists of 507 scintillation detectors arranged in a square grid with a spacing of 1.2 km, covering an area of 700 km^2. Each detector consists of two layers of 1.2 cm thick plastic scintillator, stacked one above the other, and has an area of 3 m^2. When a cosmic ray air shower strikes the SD, there are thus two measurements of each detector's pulse area. Detectors are powered by solar cells and batteries, and radio towers communicate with the detectors. The readout system consists of a flash analog-to-digital converter (FADC) with a 50 MHz sampling rate. Calibration events consist of single muon hits, and their pulse area distributions are collected over 10-minute time intervals. This allows every counter to be calibrated in terms of minimum-ionizing particles (MIPs) on a continual basis. When three or more nearest neighbor detectors have pulse areas greater than 3 MIPs, within an 8 μs period, the array is triggered and each counter with signal greater than 0.3 MIPs reports its FADC waveforms to the communication tower. The reconstruction of cosmic ray properties is performed by two fitting procedures—time fit and lateral distribution fit. First, we utilize the modified Linsley shower-shape function <cit.> to fit the time distribution of the struck counters. This time fit yields the event's arrival direction and core position.
Next, we perform a fit to the particle density distribution as a function of the distance from the shower axis, using the same lateral distribution function as employed by the AGASA experiment <cit.>. From this lateral distribution fit, we interpolate the density of shower particles at a lateral distance of 800 m from the shower axis, denoted as S(800). Using S(800) and the zenith angle of the incident cosmic-ray arrival direction, the cosmic ray's energy is determined from a look-up table calculated using a Monte Carlo (MC) simulation of the experiment <cit.> [The MC described here uses events generated by the CORSIKA simulation package <cit.> using the QGSJET-II-03 high-energy hadronic interaction model <cit.> with an assumption of proton primaries. Since, in the end, we normalize the energy scale to that of the FD, the spectrum we calculate here is insensitive to the assumption of primary particles or the use of various available hadronic interaction models.]. This SD energy determination may have biases linked to the modeling of hadronic interactions in MC simulations. In contrast, an FD's energy measurement is calorimetric, and as a result, its energy scale uncertainty is experimentally well controlled. Therefore, we normalize the SD energy scale to that of the FD by utilizing events observed by both detectors. It was determined that the energy scale of the SD is 27% higher than that of the FD, independent of energy <cit.>. Therefore, a 27% normalization in SD energy determined by the MC simulation is performed. In addition, the constant intensity cut (CIC) method has also been used to determine the cosmic ray energy. This analysis was designed to be almost identical to that of Auger, taking into account attenuation of the shower in the atmosphere <cit.>. The CIC energy scale is again normalized by FD measurements. We compared the energies determined by the TA standard method, using the MC look-up table including the energy scaling to FD energy, with those obtained through the CIC method (which is independent of MC). It is found that the CIC energies agree within 2% with those determined by the TA standard method <cit.>. For comparison, Auger is located near the town of Malargüe, Mendoza, Argentina, at coordinates (35.2^∘ S, 69.4^∘ W), with an elevation of 1400 m above sea level <cit.>. The Auger SD consists of large water Cherenkov detectors, placed in a triangular grid of 1.5 km spacing with an area of about 3000 km^2. The spectrum is calculated using only the energy range where the detector is nearly 100% efficient, and an MC simulation is used only to correct for bin-to-bin migration of events (which is largest at the highest energies). § DATASETS For this work, we utilized TA data collected between May 11, 2008, and May 10, 2022. For comparison, we employed Auger “vertical” events (zenith angle less than 60^∘) as shown in <cit.>. To the TA data, we applied event selection criteria as explained below: * Each event must include at least five SD counters. * The reconstructed zenith angle must be less than 55^∘. * Both the geometry and lateral distribution fits must have χ^2/degree of freedom less than 4. * The angular uncertainty estimated by the geometry fit must be less than 5^∘. * The fractional uncertainty in S(800) estimated by the lateral distribution fit must be less than 25%.
* The counter with the largest signal must be surrounded by four working counters: one to the north, east, south, and west on the grid, but they do not have to be immediate neighbors of the largest signal counter. In our previous paper on the energy spectrum measurements <cit.>, we applied event selection criteria with slightly different cuts aimed at optimizing energy resolution. However, the selection criteria described above employ a slightly looser set of cuts than in <cit.> in order to maximize data statistics in high energy regions. Notable differences include zenith angles less than 55^∘ and energies greater than 10^18.8 eV, where the detector is almost 100% efficient <cit.>. These criteria were initially selected to increase data statistics for anisotropy studies while keeping reasonable energy and angular resolutions, but we later adopted them for the TA and Auger Joint Spectrum Working Group's studies to maximize statistics in high energy regions as well as in the declination region seen by both experiments. With these selection criteria, we have 12,845 events with energies greater than 10^18.8 eV in the dataset. § RESULTS Figure <ref> shows the spectra of TA and Auger, adjusted for the overall energy scale by raising Auger's energy scale by 4.5% and lowering TA's by 4.5%. This 9% overall energy scale difference between the two measurements is well understood thanks to the efforts of the TA and Auger Joint Spectrum Working Group, which was established to investigate differences in spectrum measurements. It arises from the use of different constants in the reconstruction of fluorescence data, and these different constants yield a negligible energy dependence <cit.>. In Figure <ref>, the black full squares indicate the energy spectrum of TA within the declination range of -15.7^∘ to +90^∘, which used the cuts described in Section <ref>. The blue open squares represent the energy spectrum of Auger spanning -90^∘ to +24.8^∘ in declination. Note that the gray full circles indicate the TA data selected based on the criteria outlined in <cit.> to encompass as wide an energy range as possible. For energies below 10^18.8 eV the TA SD does not have 100% efficiency, and a correction has been made by Monte Carlo calculation. The comparison shows that the spectrum measurements by TA and Auger align for energies below about 10^19.5 eV, above which a growing disagreement becomes evident. The high-energy cutoff occurs at different energies in the two hemispheres. To quantify the level of agreement or disagreement between the two spectra, we performed a joint fit of both cosmic ray spectra to a broken power law function (power law segments with three break points) using the binned Poisson likelihood method, Eq. 39.16 in <cit.>. This fit takes into account the numbers of events, the exposure, and the resolution correction factors of both experiments. The red line in Figure <ref> represents the result of this joint fit for data from TA (shown as black full squares) and Auger (shown as blue open squares). The cosmic ray flux measured by the TA SD for this study, utilized in Figure <ref>, is provided in Appendix <ref>. From the log-likelihood sum over event bins for the joint fit, we calculate the significance of the spectrum difference. The fit gave a log-likelihood sum of 130.33 for 26 degrees of freedom, corresponding to a Poisson probability of 7.5×10^-16. This corresponds to a one-sided test significance of 8.0σ.
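For orientation, the final conversion from the quoted goodness-of-fit values to a one-sided significance can be reproduced in a few lines. The sketch below is illustrative only: it treats the quoted log-likelihood sums as test statistics referred to a χ² distribution with the stated degrees of freedom, which approximately reproduces the probabilities and significances quoted in the text, whereas the analysis itself uses the binned Poisson likelihood with the full exposure and resolution corrections.

import numpy as np
from scipy import stats

def significance_from_fit(loglike_sum, ndof):
    """Convert a goodness-of-fit statistic to a one-sided Gaussian significance.

    Illustrative approximation: refers the statistic to a chi-square
    distribution with `ndof` degrees of freedom.
    """
    p_value = stats.chi2.sf(loglike_sum, ndof)   # upper-tail probability
    sigma = stats.norm.isf(p_value)              # one-sided significance
    return p_value, sigma

print(significance_from_fit(130.33, 26))   # ~1e-15 probability, ~8 sigma (full apertures)
print(significance_from_fit(40.12, 26))    # ~4e-2 probability, ~1.8 sigma (common band, after cuts)

The same routine applies unchanged to the common-declination-band fit discussed in the next section.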
§ SPECTRA IN THE COMMON DECLINATION BAND TA and Auger have different types of surface detectors, use somewhat different reconstruction techniques, and their apertures have different declination dependence in the common declination band. Comparing their spectra in this region of the sky is a stringent test of whether they have comparable results. Figure <ref> shows the spectra of the two experiments in the declination band -15.7^∘ < δ < +24.8^∘. The Auger data within the common declination band was utilized, as shown in <cit.>. The spectra seem to disagree at energies greater than 10^19.5 eV. To understand this discrepancy, we revisited the analysis by introducing the most direct comparison possible of the spectra from TA and Auger within this band. First, we chose to implement a fiducial declination cut in the TA data. Figure <ref> shows the TA and Auger exposures as a function of declination <cit.>. The black solid line represents TA exposure, and the blue dashed line indicates Auger exposure. Notably, the exposure of TA at its southernmost edge changes extremely rapidly. Therefore, we implemented the fiducial cut requiring δ > -5^∘ (the black dotted vertical line in Figure <ref>) to avoid this region of the sky. This cut excluded 654 events out of a total of 4,861 events in the common declination band. Another notable point is the difference between the sky just north of the common declination band and that to the south, as TA data shows anisotropy regions. These include anisotropy signals such as the Hotspot and the Perseus-Pisces supercluster (PPSC) excess, which were identified through oversampling searches using intermediate-scale angular circles <cit.>. Figure <ref> shows a sky map in equatorial coordinates using the Hammer projection to depict the locations of these excess regions. The two red dashed horizontal lines are the boundaries of the common declination band at δ = -15.7^∘ and +24.8^∘. Additionally, we mark the location of the fiducial cut at δ = -5^∘ (the area below the blue line is cut out) with the blue dash-dotted horizontal line. The two green circles indicate the Hotspot and PPSC excess regions in the TA data [ Note that the anisotropy signal regions depicted are based on the previous analysis results as follows. The Hotspot was identified in events with energies exceeding ∼10^19.75 eV, located at equatorial coordinates (144.0^∘, 40.5^∘) within a 25^∘ radius. Additionally, we observed additional anisotropies in events with energies greater than 10^19.4 eV in the direction of the Perseus-Pisces supercluster. The PPSC excess was located at (17.9^∘, 35.2^∘) in equatorial coordinates within a 20^∘ radius. ]. Both excess regions extend down into the common declination band. However, Auger has not reported any anisotropy regions intruding into the common declination band from the south <cit.>. Notably, the two TA excess regions in the common declination band are close to the northernmost edge of Auger's exposure, where it is rapidly falling. (See the blue dashed line in Figure <ref>.) We adopt the hypothesis that the TA excesses may affect the spectral characteristics observed within the common declination band. This influence could be significant if the spectrum within the anisotropy regions differs from that of the background. Figure <ref> shows the spectrum of events inside the Hotspot and PPSC excess regions, supporting that this is indeed the case. 
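In practice, the fiducial and anisotropy selections described above reduce to simple geometric cuts on the event arrival directions. The sketch below shows one way such a selection could be implemented; the event arrays and their layout are hypothetical placeholders, while the band boundaries, the fiducial cut, and the excess-region centers and radii follow the values quoted in the text and footnote.

import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation (degrees) between equatorial directions given in degrees."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

def cleaned_common_band_mask(ra, dec):
    """Boolean mask for the cleaned common-declination-band sample.

    `ra`, `dec` are hypothetical arrays of event right ascension and
    declination in degrees.
    """
    in_band = (dec > -5.0) & (dec < 24.8)                       # fiducial cut inside the band
    in_hotspot = angular_separation(ra, dec, 144.0, 40.5) < 25.0  # Hotspot region
    in_ppsc = angular_separation(ra, dec, 17.9, 35.2) < 20.0      # PPSC excess region
    return in_band & ~in_hotspot & ~in_ppsc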
Therefore, we excluded 269 events from these excess regions out of a total of 4,861 events in the common declination band and reanalyzed the spectrum. We aimed to make the most direct comparison of the spectra from the TA and Auger within this band. Figure <ref> displays the results of a joint fit to the TA and Auger spectra, depicted by the red line, using data from the common declination band and after applying the cuts described above to the TA data. The black full squares indicate the TA data from the common declination band, following the fiducial cut in the aperture at δ > -5^∘ and the removal of the two anisotropic regions, while the Auger data within the common declination band are represented by the blue open squares. The fit yielded a log-likelihood sum of 40.12 for 26 degrees of freedom, corresponding to a Poisson probability of 3.8×10^-2. This is equivalent to a one-sided test significance of 1.8σ. Therefore, there is no statistically significant difference between the spectra. This constitutes a validation of the analysis methods of TA and Auger. Once comparable data sets are selected, the results are consistent within statistics. § SUMMARY The spectrum difference between TA and Auger has long been a source of controversy. How could two experiments have spectra that agree very well below 10^19.5 eV, then disagree so much above this energy? TA sees a more intense flux of cosmic rays and a higher cutoff energy. The two collaborations have founded a Spectrum Working Group to investigate differences, which clarified the origin of the overall energy scale difference to be in the fluorescence yield and other constants used in setting the energy scales of both experiments. Under the Working Group's auspices, a study of the common declination band was initiated. After the analysis described in Section <ref>, we find that the TA and Auger spectra in the common declination band are in agreement within 1.8σ. Having validated the TA and Auger spectrum calculation methods, we quantify the declination dependence of the spectra as seen in the whole apertures of TA and Auger. A joint fit to the two spectra was performed, and the resulting spectrum difference was found to have a significance of 8.0σ. This constitutes the observation that the UHECR spectrum differs in the northern and southern hemispheres. We show that a significant part of the difference is due to events from the Hotspot and Perseus-Pisces supercluster excess regions. § ACKNOWLEDGEMENTS The Telescope Array experiment is supported by the Japan Society for the Promotion of Science (JSPS) through Grants-in-Aid for Priority Area 431, for Specially Promoted Research JP21000002, for Scientific Research (S) JP19104006, for Specially Promoted Research JP15H05693, for Scientific Research (S) JP19H05607, for Scientific Research (S) JP15H05741, for Science Research (A) JP18H03705, for Young Scientists (A) JPH26707011, and for Fostering Joint International Research (B) JP19KK0074, by the joint research program of the Institute for Cosmic Ray Research (ICRR), The University of Tokyo; by the Pioneering Program of RIKEN for the Evolution of Matter in the Universe (r-EMU); by the U.S.
National Science Foundation awards PHY-1806797, PHY-2012934, PHY-2112904, PHY-2209583, PHY-2209584, and PHY-2310163, as well as AGS-1613260, AGS-1844306, and AGS-2112709; by the National Research Foundation of Korea (2017K1A4A3015188, 2020R1A2C1008230, and 2020R1A2C2102800); by the Ministry of Science and Higher Education of the Russian Federation under the contract 075-15-2024-541, IISN project No. 4.4501.18, by the Belgian Science Policy under IUAP VII/37 (ULB), by National Science Centre in Poland grant 2020/37/B/ST9/01821, by the European Union and Czech Ministry of Education, Youth and Sports through the FORTE project No. CZ.02.01.01/00/22_008/0004632, and by the Simons Foundation (00001470, NG). This work was partially supported by the grants of the joint research program of the Institute for Space-Earth Environmental Research, Nagoya University and Inter-University Research Program of the Institute for Cosmic Ray Research of University of Tokyo. The foundations of Dr. Ezekiel R. and Edna Wattis Dumke, Willard L. Eccles, and George S. and Dolores Doré Eccles all helped with generous donations. The State of Utah supported the project through its Economic Development Board, and the University of Utah through the Office of the Vice President for Research. The experimental site became available through the cooperation of the Utah School and Institutional Trust Lands Administration (SITLA), U.S. Bureau of Land Management (BLM), and the U.S. Air Force. We appreciate the assistance of the State of Utah and Fillmore offices of the BLM in crafting the Plan of Development for the site. We thank Patrick A. Shea who assisted the collaboration with much valuable advice and provided support for the collaboration's efforts. The people and the officials of Millard County, Utah have been a source of steadfast and warm support for our work which we greatly appreciate. We are indebted to the Millard County Road Department for their efforts to maintain and clear the roads which get us to our sites. We gratefully acknowledge the contribution from the technical staffs of our home institutions. An allocation of computing resources from the Center for High Performance Computing at the University of Utah as well as the Academia Sinica Grid Computing Center (ASGC) is gratefully acknowledged. § SPECTRUM DATA POINTS Table <ref> provides the cosmic ray flux for each energy bin depicted in Figure <ref>, utilizing 14 years of Telescope Array surface detector data, collected between May 11, 2008, and May 10, 2022, in the full aperture of -15.7^∘ < δ < +90^∘. Note that the energy values in Figure <ref> have been reduced by 4.5% compared to those detailed here. Table <ref> includes log_10 (E/eV) representing the energy of the bin center, J denoting the flux in the unit of [eV^-1m^-2sr^-1s^-1], and σ_upper and σ_lower representing the statistical uncertainties on the flux, corresponding to the upper and lower 68% confidence limits. All uncertainties are expressed in the unit of [eV^-1m^-2sr^-1s^-1].
§ REFERENCES
[1] R. U. Abbasi et al. (HiRes Collaboration), First observation of the Greisen-Zatsepin-Kuzmin suppression, Phys. Rev. Lett. 100, 101101 (2008), arXiv:astro-ph/0703099.
[2] J. Abraham et al. (Pierre Auger Collaboration), Observation of the suppression of the flux of cosmic rays above 4×10^19 eV, Phys. Rev. Lett. 101, 061101 (2008), arXiv:0806.4302.
[3] T. Abu-Zayyad et al. (Telescope Array Collaboration), The Cosmic Ray Energy Spectrum Observed with the Surface Detector of the Telescope Array Experiment, Astrophys. J. Lett. 768, L1 (2013), arXiv:1205.5067.
[4] T. Abu-Zayyad et al. (Telescope Array Collaboration), The surface detector array of the Telescope Array experiment, Nucl. Instrum. Meth. A 689, 87 (2013), arXiv:1201.4964.
[5] H. Tokuno et al., New air fluorescence detectors employed in the Telescope Array experiment, Nucl. Instrum. Meth. A 676, 54 (2012), arXiv:1201.0002.
[6] J. Abraham et al. (Pierre Auger Collaboration), Properties and performance of the prototype instrument for the Pierre Auger Observatory, Nucl. Instrum. Meth. A 523, 50 (2004).
[7] A. Aab et al. (Pierre Auger Collaboration), The Pierre Auger Cosmic Ray Observatory, Nucl. Instrum. Meth. A 798, 172 (2015), arXiv:1502.01323.
[8] M. Teshima et al., Properties of 10^9-GeV - 10^10-GeV Extensive Air Showers at Core Distances Between 100-m and 3000-m, J. Phys. G 12, 1097 (1986).
[9] S. Yoshida et al., Lateral distribution of charged particles in giant air showers above EeV observed by AGASA, J. Phys. G 20, 651 (1994).
[10] M. Takeda et al., Energy determination in the Akeno Giant Air Shower Array experiment, Astropart. Phys. 19, 447 (2003), arXiv:astro-ph/0209422.
[11] T. Abu-Zayyad et al. (Telescope Array Collaboration), CORSIKA Simulation of the Telescope Array Surface Detector (2014), arXiv:1403.0644.
[12] B. T. Stokes, R. Cady, D. Ivanov, J. N. Matthews, and G. B. Thomson, Dethinning Extensive Air Shower Simulations, Astropart. Phys. 35, 759 (2012), arXiv:1104.3182.
[13] A. Aab et al. (Pierre Auger Collaboration), Measurement of the cosmic-ray energy spectrum above 2.5×10^18 eV using the Pierre Auger Observatory, Phys. Rev. D 102, 062005 (2020), arXiv:2008.06486.
[14] J. Kim, D. Ivanov, C. Jui, and G. Thomson, Energy Spectrum Measured by the Telescope Array Surface Detectors, EPJ Web Conf. 283, 02005 (2023).
[15] J. Kim et al. (Telescope Array Collaboration), Highlights from the Telescope Array Experiment, PoS ICRC2023, 008 (2024).
[16] V. Verzi, D. Ivanov, and Y. Tsunesada, Measurement of Energy Spectrum of Ultra-High Energy Cosmic Rays, PTEP 2017, 12A103 (2017), arXiv:1705.09111.
[17] O. Deligny (Pierre Auger and Telescope Array Collaborations), The energy spectrum of ultra-high energy cosmic rays measured at the Pierre Auger Observatory and at the Telescope Array, PoS ICRC2019, 234 (2020), arXiv:2001.08811.
[18] R. Abbasi et al. (Telescope Array and Pierre Auger Collaborations), Joint analysis of the energy spectrum of ultra-high-energy cosmic rays as measured at the Pierre Auger Observatory and the Telescope Array, PoS ICRC2021, 337 (2021).
[19] Y. Tsunesada, Measurement of UHECR energy spectrum with the Pierre Auger Observatory and the Telescope Array, PoS ICRC2023, 406 (2024).
[20] C. Patrignani, Review of particle physics, Chinese Physics C 40, 100001 (2016).
[21] P. Sommers, Cosmic ray anisotropy analysis with a full-sky observatory, Astropart. Phys. 14, 271 (2001), arXiv:astro-ph/0004016.
[22] R. U. Abbasi et al. (Telescope Array Collaboration), Indications of Intermediate-Scale Anisotropy of Cosmic Rays with Energy Greater Than 57 EeV in the Northern Sky Measured with the Surface Detector of the Telescope Array Experiment, Astrophys. J. Lett. 790, L21 (2014), arXiv:1404.5890.
[23] R. U. Abbasi et al. (Telescope Array Collaboration), Indications of a Cosmic Ray Source in the Perseus-Pisces Supercluster (2021), arXiv:2110.14827.
[24] J. Kim, D. Ivanov, K. Kawata, H. Sagawa, and G. Thomson (Telescope Array Collaboration), Anisotropies in the arrival direction distribution of ultra-high energy cosmic rays measured by the Telescope Array surface detector, PoS ICRC2023, 244 (2023).
[25] P. Abreu et al. (Pierre Auger Collaboration), Arrival Directions of Cosmic Rays above 32 EeV from Phase One of the Pierre Auger Observatory, Astrophys. J. 935, 170 (2022), arXiv:2206.13492.
[26] D. Heck, J. Knapp, J. N. Capdevielle, G. Schatz, and T. Thouw, CORSIKA: A Monte Carlo code to simulate extensive air showers (1998).
[27] S. Ostapchenko, QGSJET-II: Towards reliable description of very high energy hadronic interactions, Nucl. Phys. B Proc. Suppl. 151, 143 (2006), arXiv:hep-ph/0412332.
http://arxiv.org/abs/2406.09340v1
20240613172049
Prospects for NMR Spectral Prediction on Fault-Tolerant Quantum Computers
[ "Justin E. Elenewski", "Christina M. Camara", "Amir Kalev" ]
quant-ph
[ "quant-ph", "physics.bio-ph", "physics.chem-ph" ]
justin.elenewski@ll.mit.edu MIT Lincoln Laboratory, Lexington, Massachusetts 02421, USA Department of Pediatric Oncology and the Linde Program in Cancer Chemical Biology, Dana–Farber Cancer Institute, Boston, MA, USA amirk@isi.edu Information Sciences Institute, University of Southern California, Arlington, VA 22203, USA Department of Physics and Astronomy, and Center for Quantum Information Science & Technology, University of Southern California, Los Angeles, California 90089, USA § ABSTRACT Nuclear magnetic resonance spectroscopy is a prominent analytical tool, with applications throughout chemistry, medicine and solid–state physics. While conventional NMR spectrometers require large magnetic fields to interrogate a sample, recent advances in atomic magnetometry have enabled this spectroscopy far below geomagnetic field strengths. This zero–to–ultralow (ZULF) field regime can be advantageous since it mitigates relaxation and reveals spin couplings that are otherwise obscured, all while using compact and lower–overhead instrumentation. The resulting spectra are nonetheless difficult to interpret without computation, which can be taxing due to the presence of vector couplings and long–range spin networks. Following recent proposals, we demonstrate how fault–tolerant quantum computation could be used to simulate these spectra. Our analysis spans from input selection to the construction of explicit circuits based on qubitized quantum dynamics. By maintaining parity with experimental requirements, we demonstrate how NMR spectral prediction might be an early application for fault–tolerant quantum computers. Prospects for NMR Spectral Prediction on Fault–Tolerant Quantum Computers A. Kalev June 17, 2024 ========================================================================= Nuclear magnetic resonance (NMR) spectroscopy has become entrenched as an analytical tool in chemistry, physics, and materials science. Notably, this technique can resolve atomic–scale structures and dynamics across a hierarchy of timescales (10^-11–10^0 s), giving a fingerprint to identify samples and a means to probe their underlying physics <cit.>. The ability to accommodate diverse analytes has made these spectra a characterization requirement for many classes of small molecules and materials. Advances in high–field magnets and signal processing have also extended this capability to polymers, supramolecular assemblies and biological macromolecules <cit.>. This has delivered some of the most detailed protein structures to date and provided decisive insights into molecular biophysics <cit.>. Spatially–resolved NMR <cit.> and magnetic resonance imaging (MRI) have likewise become standard diagnostic tools in medicine <cit.>. Unfortunately, most NMR spectrometers are sizable, expensive, and require nontrivial infrastructure. The most accessible are lower–field spectrometers, which are comparatively small and inexpensive (< 100K USD). However, these are limited to low–resolution or quality control applications. High–field instruments are often needed to resolve spin couplings and complicated spectra, and have a much higher cost of ownership (1M to 10M USD). The resulting data can be cumbersome to interpret, requiring complementary experiments and dedicated expertise for complicated systems. These factors can limit application domains and accessible analytes. The development of NMR has followed a quest for higher spectral resolution.
This has led to spectrometers with large static magnetic fields, B_0, which give a broad frequency dispersion Δω∼ B_0 among resonances and increase the fraction P ∼ B_0 / k_BT of polarized spins <cit.>. More recently, advances in atomic magnetometry have raised the possibility of spectroscopy in low– (30-50 μT) or zero–to–ultralow field regimes (ZULF; below 100 nT) <cit.>. Here, hyperpolarization can access super–thermal spin populations (P ∼ 10^-1) that exceed those from high–field magnets (P∼10^-3) <cit.>. These spectrometers are also comparatively inexpensive, portable, and capable of cryogen–free operation. While ZULF can mitigate relaxation and reveal high–resolution couplings <cit.> that capture different physics <cit.>, this regime differs markedly from high–field NMR <cit.>. Notably, these spectra are hard to interpret without computation, which is costly since we must deal with long–range Heisenberg–like Hamiltonians. The primary output of an NMR experiment is the time–domain free–induction decay (FID) signal 𝒮(t) = FID(t), which quantifies current induced in a series of RF pickup coils. However, the interpretable spectrum is actually its sparse, frequency–domain counterpart 𝒮(ω) = ∫_0^t_max 𝒮(t) e^i ω t dt. In principle this can be obtained by simulating a spin–spin correlation function, ⟨ S_tot(t) S_tot(0) ⟩ = tr [e^i H t S_tot e^-i H t S_tot ρ], where the k–spin system is described by a Heisenberg Hamiltonian H and density matrix ρ. Here, the operator S_tot = ∑_k S_k captures a total spin. A typical NMR experiment uses phase–sensitive quadrature to detect S^+ = S^x + i S^y, but this is due to the apparatus and the control pulse sequence. Simply calculating a spectrum only requires us to observe S^z and thus we can use S_tot = S^z_tot = ∑_k S^z_k for the total spin [The most straightforward digital quantum simulations will reproduce a pure state density matrix as opposed to the mixed state generated by relaxation operators. Instead, we treat relaxation through the decaying exponential factor in Eq. <ref>. Thus, the longitudinal and transverse magnetization profiles become equivalent for determining resonances in the NMR spectrum. It may be prudent to work with S^+ or S^- when developing pulse sequences due to the difference in objectives and potential methodology.]. Our predicted result 𝒮̃(ω) follows immediately, 𝒮̃(ω) = ∫_0^t_max ⟨ S^z_tot(t) S^z_tot(0) ⟩ e^i ω t - γ_2 t dt. The factor containing γ_2 = T_2^-1 captures relaxation processes that lead to decoherence and dephasing. This can also be accommodated by introducing a relaxation superoperator, though the present case is simpler for analysis. Based on the above, simulating NMR spectra is inherently an exercise in quantum dynamics. However, since the underlying Hamiltonian is a long–range Heisenberg model, the calculations can become classically hard for tens of coupled spins. This is true for ZULF spectra as well as other problems such as solid–state NMR and high–field NMR with strongly coupled spin systems <cit.>. Owing to this, the calculation of ZULF and nitrogen–vacancy NMR spectra has been touted as a possible application for NISQ–scale quantum computers <cit.>. We extend these efforts to fault–tolerant quantum computation and explore the overhead in simulating ZULF spectra for a broad range of high–value molecular targets. Our approach utilizes state–of–the–art algorithms for quantum dynamics — based on quantum signal processing — and leverages optimized encodings to mitigate resource requirements.
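As a point of reference for the expressions above, the correlation-function route to a spectrum can be checked classically for a handful of spins. The sketch below is a minimal illustration, assuming a two-spin pair with a purely illustrative scalar J coupling, small chemical shifts, and a crudely polarized initial state; it is not the fault-tolerant algorithm itself, only the classical exact-diagonalization baseline that such algorithms are meant to supersede.

import numpy as np

# Single-spin operators (hbar = 1) and two-spin embeddings.
I2 = np.eye(2)
sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sy = 0.5 * np.array([[0.0, -1.0j], [1.0j, 0.0]], dtype=complex)
sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

# Illustrative parameters (rad/s): one scalar J coupling and two small shifts.
J = 2 * np.pi * 140.0
w1, w2 = 2 * np.pi * 5.0, -2 * np.pi * 5.0

H = (w1 * np.kron(sz, I2) + w2 * np.kron(I2, sz)
     + J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)))
Sz_tot = np.kron(sz, I2) + np.kron(I2, sz)
rho = np.eye(4) / 4 + 0.1 * Sz_tot          # crude polarized initial state

# Correlation <S^z_tot(t) S^z_tot(0)>, apodized by exp(-t/T2) as in the text.
T2 = 1.0
t = np.linspace(0.0, 3.0, 4096)
evals, vecs = np.linalg.eigh(H)
corr = np.empty(t.size, dtype=complex)
for n, tn in enumerate(t):
    U = vecs @ np.diag(np.exp(-1j * evals * tn)) @ vecs.conj().T   # e^{-iHt}
    Sz_t = U.conj().T @ Sz_tot @ U                                  # Heisenberg-picture S^z(t)
    corr[n] = np.trace(Sz_t @ Sz_tot @ rho)
corr *= np.exp(-t / T2)

# Discrete analogue of the frequency-domain expression above.
spectrum = np.fft.fftshift(np.fft.fft(corr))
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))      # Hz

Exact propagation of this kind scales exponentially with the number of spins, which is precisely the regime the fault-tolerant constructions developed below are intended to address.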
Moreover, we address efficient protocols for reconstructing spectra (using compressed sensing techniques) and the prospects of pulse sequence design. Our estimates indicate that meaningful spectra might be computed with resource overhead comparable to or below that of other problems, such as factoring 2048–bit integers using Shor's algorithm <cit.> or simulating classically hard instances of the Fermi–Hubbard model. This suggests that NMR spectral prediction is a robust, near–term application for fault–tolerant hardware. § BACKGROUND AND CONTEXT §.§ NMR Hamiltonian A foundational assumption for NMR is that nuclear spin evolution can be separated from the corresponding electron dynamics <cit.>. This is reasonable since electrons outpace the nuclei by several orders of magnitude. However, the electronic subsystem still mediates nuclear spin couplings through time–averaged parameters. Based on this, our discussion will assume the usual Hamiltonian for a system of N nuclear spins: H = H_Zeeman + H_RF + H_J + H_D. The first term, H_Zeeman, captures coupling to a static, external magnetic field 𝐁_0 = (0,0,B_z0) through a Zeeman contribution, H_Zeeman = -ħ∑_k γ_k (1 - δ_k) 𝐈_k ·𝐁_0 = ħ∑_k ω_0k I^z_k. Note that we follow a convention where 𝐁_0 lies along the z–axis, which will also be the quantization axis for our spin system. We also define 𝐈_k = 𝐒_k / ħ = (S_k^x, S_k^y, S_k^z) / ħ to be a vector of spin angular momentum operators S^α = ħσ^α/2 acting on the k-th spin, where σ^α is the corresponding Pauli matrix. Other factors include the nuclear gyromagnetic ratio for the k-th atom, γ_k, and the diamagnetic shielding tensor, δ_k. The latter is highly sensitive to the electronic environment surrounding each nucleus, arising from effects such as paramagnetic spin–orbit coupling and higher–order diamagnetic interactions. The second line in Eq. <ref> applies to the liquid phase, where rotational averaging allows us to introduce ω_0k = -γ_k (1-δ^zz_k) B_0z as the isotropic chemical shift of the k-th nucleus. Chemical shifts depend on the specific nucleus and its location in a molecule, and generally range between 10^2–10^5 Hz (Fig. <ref>). Resonances near these frequencies are the most apparent features in high–field NMR spectra. A similar expression describes RF control, which appears through the term H_RF in Eq. <ref>: H_RF = - ħ∑_k γ_k 𝐈_k ·𝐁_RF (t). Here, 𝐁_RF (t) is the magnetic component of a classical RF pulse (generally with linear polarization). This term is required when simulating the response to a given control sequence, though it is not necessary when predicting spectral resonances. We will set H_RF = 0 when doing the latter. Nuclear spins couple through the H_J and H_D terms, which differ in terms of prominence and prevalence for detected spectra. The weakest yet most detectable is an effective hyperfine coupling, H_J = ħ∑_k < l 𝐈_k ·𝐉_kl·𝐈_l, that is mediated by the electronic environment surrounding a pair of chemically bonded atoms. Couplings between spins k and l are encoded by the tensor 𝐉_kl, where we abuse notation and use a dot to denote contraction (following standard NMR literature). This acts as a perturbation to the Zeeman term in high–field spectra, splitting resonances that are detected at chemical shift frequencies. Thus, the measured couplings are a sensitive probe of the spin neighborhood and hence the local molecular geometry (Fig. <ref>). We can decompose this further into isotropic and anisotropic components H_J = H_J,iso. + H_J,aniso. as: H_J,iso.
= ħ∑_k<l J_kl 𝐈_k ·𝐈_l H_J,aniso. = ħ∑_k < l 𝐈_k ·𝐉^aniso._kl·𝐈_l. The anisotropic contribution will be negligible for our purposes, and thus Eq. <ref> is sufficient to describe the coupling between bonded spins. The scalar coupling J_kl is on the order of 1 Hz to 10^3 Hz. The last term H_D contains a magnetic dipolar coupling between spatially–proximate spins, H_D = ∑_k < l b_kl [𝐈_k ·𝐈_l - (3/||𝐫_kl||^2) (𝐈_k ·𝐫_kl) (𝐈_l ·𝐫_kl) ], which affects bonded and nonbonded pairs of atoms. Here b_kl is a dipolar coupling strength and 𝐫_kl is a vector between the k-th and l-th nuclei. Note that we have absorbed the spatial dependence into the dipolar coupling b_kl = μ_0 γ_k γ_l ħ / 4π||𝐫_kl||^3. These contributions can be detected for atoms separated by less than 5 Å. §.§ (Ultra)Low– vs. High–Field NMR Spectroscopy High–field NMR spectra are acquired by using RF pulses to (i) generate Rabi oscillations when tuned to the shielded Larmor frequency ω_0k of each nucleus and (ii) induce and invert a transverse magnetization profile that is detected by the spectrometer. The resulting spectra exhibit resonances near the shielded Larmor frequencies (chemical shifts) for each nucleus. Spin–spin couplings can split these resonances into detuned satellite features or cross–resonances in multidimensional spectra. This coupling data is rich, as it reflects spatial connectivity and dynamic processes within the sample. These conventional spectra are dominated by chemical shifts, in that the perturbations from coupling are several orders of magnitude smaller than the shifts. Thus, the resonances at each shift can be used to earmark a set of chemically distinct atoms, while their connectivity can be inferred from the couplings. In contrast, ZULF experiments invert this paradigm by giving spectra that are dominated by spin–couplings. A particularly striking aspect is that the chemical shift dispersion Δω becomes smaller than the spectral linewidth (roughly 20 Hz for ^1H), and thus spectra are centered on a common feature with peaks that are determined by intricate spin interactions. This complexity is more than superficial, as conventional NMR and ZULF each probe fundamentally different physical regimes <cit.>. Owing to this, experiments in the latter regime are sometimes referred to as J–coupling spectroscopy. The virtue of ZULF extends beyond its comparatively inexpensive and cryogen–free instrumentation. Notably, these experiments avoid relaxation effects that arise from field inhomogeneities, chemical shift anisotropy and dipolar coupling <cit.>. Certain spectral features also become more apparent, including select heteronuclear dipolar couplings that are masked in high–field experiments <cit.>. Spin coupling patterns can also become easier to identify. An additional virtue exists for ZULF–based imaging, where the low operating frequencies can circumvent skin effects when imaging metal samples or through metals. §.§ Classical Limitations Simulating NMR spectra is an exercise in quantum dynamics, which can be formally hard irrespective of the field regime. In the most naïve sense, the requisite evolution amounts to exponentiating a 2^N × 2^N matrix. Of course, realistic systems have structures that can be leveraged to reduce complexity, such as the clustered topology of real spin interaction networks. High–field NMR spectrometers (9 T to 20 T) also operate in a regime where it is often safe to ignore contributions that do not commute with the Zeeman term, reducing vector spin couplings to scalar S^z_i S^z_k terms.
This gives a considerable reduction in computational overhead. By leveraging the effects of decoherence / dephasing and restricted state–space approximations, state–of–the–art codes like Spinach can simulate certain NMR experiments for several thousand coupled spins <cit.>. A further simplification is to consider the evolution of isolated subsystems by specific control pulse sequences. This can decouple other components of the spin system, making simulation inexpensive and interpretation intuitive <cit.>. However, even high–field, liquid–phase spectra can become difficult for highly correlated spin systems <cit.>. Low–field and ZULF experiments do not benefit from many of these assumptions <cit.>. From the outset, spin couplings must remain a vector interaction 𝐈_j ·𝐈_k, which makes the problem equivalent to a Heisenberg model instead of an Ising model. Moreover, ZULF spectroscopy's considerably longer relaxation timescales will minimize the classical simulation advantages that noise would lend by reducing fidelity targets <cit.>. Approximations that disregard certain spin couplings are also unacceptable, as these quantities are specifically targeted by ZULF spectroscopy <cit.>. The increased prominence of dipolar couplings becomes an additional complication <cit.> for classical simulation. Notably, the conjunction of slow relaxation with long–range interactions can confound tensor network simulators <cit.>. §.§ Utility §.§.§ Computational Utility ZULF experiments can complement their high–field counterparts in some applications and might one day replace them in others <cit.>. However, this technique is partially bottlenecked by difficulties in predicting and interpreting spectra. Immediate advantages appear in the spectra themselves, where spin couplings can be extracted with extremely high precision <cit.>. This consideration also holds for heteronuclear dipolar couplings that are unmasked in ZULF experiments <cit.>. Both of these data can be extremely useful when determining high–resolution molecular structures. The unique nature of ZULF also has the potential to allow the direct determination of molecular chirality, which otherwise requires a cumbersome auxiliary compound <cit.>. Even the most prosaic application of a low–cost, benchtop J–coupling spectrometer could reduce turnaround in a chemical research setting. Nonetheless, these tasks require simulation to assign resonance peaks and fit candidate structures to spectra <cit.>. Eliminating costly superconducting magnets and cryogenic systems can lead to smaller, lower–cost NMR spectrometers. This is a substantial advantage of ZULF technologies. These instruments might enjoy broad use in field–use settings (e.g., environmental monitoring, forensic, and CBRN / defense applications), particularly when the goal is to screen an analyte against known molecular fingerprints. This market is currently addressed by portable optical (infrared, Raman) and mass spectrometers, though NMR provides a complementary and discerning technique. Small NMR spectrometers could be useful when quantifying hazardous materials like explosives or chemical warfare agents, particularly when encountering a new compound. In this case, the ability to sequester the instrument in a controlled environment is especially valuable. While all molecular fingerprinting applications require experimental reference spectra or simulations, the latter is a standout for particularly hazardous materials.
There are additional advantages for ZULF in laboratory, clinical, and production settings. For instance, these spectrometers can be used alongside systems for real–time reaction monitoring <cit.>, which is difficult with a bulky high–field spectrometer. ZULF's insensitivity to sample inhomogeneity (e.g., magnetic susceptibility variations) <cit.> or conductive environments <cit.> also permits integration into analytical systems like stopped flow mixers, where metal components or strong electrolytes could be present. This is likewise advantageous for medical diagnostics or manufacturing processes where NMR spectroscopy has been difficult to apply. The same considerations can extend to whole–device metrologies – which have especially strong relevance for emerging battery materials – and spatially–resolved magnetic resonance imaging. The ZULF setting can also host techniques that are difficult to engineer with conventional NMR. Notably, the possibility for exploiting robust quantum control might deliver new opportunities for nanoscale measurement and imaging <cit.>. Similar considerations hold for more exotic magnetic resonance processes, such as β– or γ–NMR, which have proven valuable for materials diagnostics. §.§.§ Domain Specific Utility: Drug Discovery Workflow We consider two workflows when making our resource estimates. The most straightforward is a small–molecule drug–discovery workflow, which captures the domain specific impact of ZULF spectrometers. The fruits of this effort would be particularly valuable, as roughly 90% of marketed drugs are small molecules. To start, we can assume that a given drug discovery pipeline might handle up to 5000 distinct compounds that would benefit from ZULF NMR in a single year (or require multiple ZULF experiments with smaller, more complex molecule sets). Assuming that computation is utilized for each of these compounds, a useful turnaround time for each spectral prediction would be within 48 hours. Complex molecules – e.g., natural products which require elucidation – could enjoy a longer timeframe on the order of one to two weeks. A major pharmaceutical company may have 10 robust small–molecule pipelines, and thus 5 × 10^4 compounds may be analyzed annually. Conversely, a startup may have a single notable pipeline. The United States has capitalized 56% of the top 25 pharmaceutical companies, so we will assume that there are 13 major domestic drug manufacturers (albeit with varied degrees of internal research and development) <cit.>. There are also more than 5000 pharmaceutical, biotechnology, and pharmaceutical–supporting enterprises in the US, which are equal to or smaller in research volume. We assume that half of these address small–molecule research in some manner. A good approximation is to treat this contribution as 2500 startup-scale projects with the equivalent of a single robust small–molecule pipeline (even if this is actually adjacent research, e.g., diagnostics, research tool, or adjuvant synthesis). This amounts to 1.35 × 10^7 compounds to be analyzed per year. The value of ZULF would invariably be justified if the return on a spectrum is roughly 10% of a routine hourly NMR facility rate for high–field NMR (around $50 per hour in an academic setting). This suggests roughly $65 M in annual value for the pharmaceutical industry. § APPLICATION PARAMETERS §.§ Molecular Specification The scale and classical hardness of an NMR simulation are defined by a molecule's nuclear spin Hamiltonian.
These molecules are invariably tied to a given application, which may encompass molecules of varying complexity throughout its workflow. We will consider two classes of problems when formulating our estimates. Our first is a drug discovery workflow, which addresses a range of compounds from lead discovery through candidate synthesis. The second is a molecular fingerprinting application, where computationally generated ZULF spectra would be required for a set of molecules. §.§.§ Drug Discovery Workflow Molecular screening is a common strategy in drug discovery <cit.>. This approach evaluates a library of compounds for activity at one or more biological targets in a semi–automated assay. These targets range in size from macromolecules like receptor proteins, ion channels, and enzymes to whole-cell or tissue preparations. This library may draw from molecular fragments, previously identified drug candidates, or large complex molecules. The smallest of these are fragments, which resemble the synthetic building blocks that comprise an actual drug <cit.>. Previous drug candidates or drug–like molecules are larger, and will be analogous in scale or complexity to actual pharmaceuticals. The largest screening candidates are often `natural products', which are complex secondary metabolites from a range of organisms (fungi, bacteria, marine life, etc.) <cit.>. While this chemical complexity may increase the utility of a natural product dataset, these molecules must often be deconstructed to identify key molecular features. The elucidation of natural product structures is often laborious and time consuming, making them a prime target for new spectroscopic methodology. A workflow generally proceeds by screening these compounds until one or more `hits' are identified <cit.>. If these hits are fragments, they may be used as a starting point to design more substantial molecules that serve as drug candidates. Conversely, smaller segments of a natural product may be synthesized to find a minimal component for activity, termed a pharmacophore. Irrespective of the approach, the identification of this minimal component is a major goal. The structure–affinity or structure–activity relationship defined by extensions of this pharmacophore and biological activity is also of immense importance. Optimization subject to these constraints will deliver a drug candidate or `lead' compound. A benchtop ZULF spectrometer could be useful for analyzing any of these molecular datasets. This would include rapidly assessing the structure of synthetic fragments and drug candidates, as well as complementing high–field NMR studies of natural products. Thus, we assess overhead for common molecular fragments and screening libraries (the Maybridge RO3 Diversity and Screening collections), the top 300 small–molecule pharmaceuticals in the United States, and a curated collection of natural products. The complexity of nuclear spin Hamiltonians from this workflow is captured in Fig. <ref>. §.§.§ Field Spectroscopy Workflow Compact NMR spectrometers can be of high utility outside the laboratory. Plausible field settings are diverse, spanning from industrial manufacturing floors and clinical healthcare settings to forensic deployments and the austere environments of combat zones. We will consider this ZULF use case for a range of applications.
On one hand, we focus on explosives and nerve agents, where heteronuclear nitrogen and phosphorus NMR could be useful for identifying materials (particularly for novel, previously undetected substances). We also consider a forensic dataset containing drugs of abuse, which may be of interest to law enforcement and border protection. Finally, we generate estimates for molecular electronic materials with industrial relevance, with an emphasis on OLED and light–harvesting compounds. These represent a use case for industrial quality assurance and control. While we do not consider them as a separate dataset, many pollutants are comparable in size to our drug screening fragments.

§.§ Hamiltonian Parameters

Our molecular dataset includes Hamiltonians with both homonuclear, e.g., ^1H–^1H, and heteronuclear, e.g., ^1H–^15N, couplings. We will write J_AB^k to denote J–couplings between nuclei of species A and B, where k is the number of bonds separating the coupled spins. For simplicity, we drop the isotopic label when it can be inferred from context (all nuclei are assumed to be the most abundant spin-1/2 isotope). To give a concrete example, the two–bond heteronuclear coupling between a proton ^1H and a carbon ^13C would be written as J_CH^2. The same convention extends to dipolar couplings, which will be denoted by J_AB^D. Distances are noted parenthetically or in the text when the context is clear. Finally, we use ω_i = 2π f_i to denote chemical shifts. Note that our Hamiltonian is written in terms of angular frequency, though chemical shifts and couplings will be specified in Hz or MHz. The conversion will be understood by convention, allowing us to omit factors of 2π from expressions. While a comprehensive treatment would utilize molecule–specific couplings based on experimental data, this is prohibitive for the datasets that we consider. Instead, we use common reference values from the literature. These will match the established values in magnitude and thus be sufficient for resource analysis. Scalar J–couplings are taken from established sources <cit.>, including the database used in the Spinach package <cit.>. Note that electronic structure methods can be used to estimate couplings when suitable tabulated data are unavailable <cit.>. Our Hamiltonians address a robust spectroscopic limit, which means that we include a broad range of couplings to capture all parameters of relevance to high–resolution spectroscopy. Unless otherwise noted (e.g., for the explosives dataset), we only consider the homonuclear couplings between protons (J^1_HH, J^2_HH, J^3_HH) and scalar heteronuclear couplings that may perturb the proton spin network (J^1_CH, J^1_NH). We also include heteronuclear dipolar couplings (J^D_CH, J^D_NH) since these can be detected in certain ZULF experiments. The low natural abundances of spin–active ^13C and ^15N isotopes make the detection of their large–scale spin networks unlikely without concerted effort. Nonetheless, we still include couplings among carbon and nitrogen nuclei (J^1_CC, J^1_CN, J^2_CC). This corresponds to isotopic enrichment of the sample or a limiting complexity for interpreting long, heavily sampled experiments. While this means that our datasets will overestimate the required overhead, the degree to which they do so is not extreme (see Supplementary Material).

§.§ Spectroscopic Specifications

The proposed quantum simulations are intended to reproduce experimental ZULF NMR spectra.
This section addresses how practical, experimental considerations would influence the choice of computational parameters.

§.§.§ Precision Thresholds

Our objective is to reconstruct ⟨ S_tot^z(t) S_tot^z ⟩ up to some maximal signal acquisition time t_acq while maintaining a finite spectral resolution Δω. This resolution is defined by the experimental spin–spin relaxation time, Δω ∝ 1/T_2, which can be used to set precision targets for quantum dynamics. Moreover, NMR experiments are subject to relaxation processes which translate to lower precision demands as t approaches T_2. That is, if our spectral targets correspond to a dephasing rate of γ = 1/T_2, each spin will independently decohere with fidelity F ∼ exp[-γ t]. An ensemble of N spins will also incur a multiplicative error of F for each spin. Consequently, we need only reproduce our overall unitary evolution with an error threshold (1 - exp[-t/T_2]) ≤ ϵ(t) ≤ (1 - exp[-N t/T_2]). This will reduce the overhead for a qubitized time evolution, though constraints from amplitude amplification still require a minimum fidelity. Thus, we impose a maximum error, and hence a minimum fidelity, for the target time evolution unitary.

§.§.§ Relaxation and Timescale Parameters

A standing convention is to collect FID data up to an acquisition time of at least t_acq = 3T_2 for the slowest relaxing spin species. At this point 95% of the transverse magnetization profile will have been lost. While shorter sampling intervals can give the correct resonance peaks, they will be accompanied by satellite "ringing" artifacts. More typically, experiments will use times on the order of t_acq = 5T_2 for publication-quality spectra, largely to address signal–to–noise concerns. It is tempting to scour the literature for actual acquisition times. However, it is important to note that a practical choice is guided by other parameters through t_acq = n_points / (2W), such as the desired spectral width W and the number of points n_points sampled in the signal. The experimental budget is also guided by a tradeoff between satisfying Nyquist sampling requirements and practical spectrometer availability. Thus, it is difficult to map literature parameters back to quantum simulations. However, our objective is not to produce spectra that rival experimental data. Instead we seek to identify and correctly assign spectral resonances in classically intractable systems. Our proposed quantum methods are also free from relaxation effects, so there are no signal–to–noise issues. This means that the choice of simulation timescale is strictly guided by the need to capture experimentally relevant dynamical processes. Based on this, we will assume that most detectable coherence pathways will have developed by t_max = T_2, with those beyond that falling at or below the spectrometer detection threshold. However, we also provide precision datasets with a higher threshold of t_max = 3T_2 to accommodate more complex scenarios. True ZULF experiments have T_2 values that can exceed those of conventional NMR by an order of magnitude or more. However, the higher |𝐁_0| regime of low–field experiments corresponds to shorter T_2 values, as do larger molecules. We adopt a representative T_2 value of 1 s for our spectra, which is a good compromise between these concerns. This also fixes our maximum simulation time.
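As a rough numerical illustration of these thresholds, the following sketch evaluates the relaxation-informed error window and the acquisition-time bookkeeping. The spin count N and spectral width W are placeholder values; only T_2 = 1 s reflects the representative choice made above.

```python
import numpy as np

T2 = 1.0        # representative spin-spin relaxation time [s], as adopted above
N = 20          # hypothetical number of coupled spins
t_max = T2      # maximum simulation time used in the text

# Relaxation-informed window for the tolerable evolution error at time t:
#   (1 - exp(-t/T2)) <= eps(t) <= (1 - exp(-N*t/T2))
for t in np.linspace(0.1 * T2, t_max, 4):
    eps_lo = 1.0 - np.exp(-t / T2)
    eps_hi = 1.0 - np.exp(-N * t / T2)
    print(f"t = {t:4.2f} s   {eps_lo:.3f} <= eps(t) <= {eps_hi:.3f}")

# Acquisition-time bookkeeping: t_acq = n_points / (2 W) for spectral width W
W = 100.0           # placeholder spectral width [Hz]
n_points = 4096     # typical high-resolution point count quoted in the text
t_acq = n_points / (2.0 * W)
print(f"t_acq = {t_acq:.1f} s for W = {W} Hz and {n_points} points")
```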
Note that we only leverage relaxation–induced fidelity constraints up to a maximum error of ϵ = 5.0 × 10^-3, which is necessary to ensure performance guarantees for robust oblivious amplitude amplification. However, it will only incur a logarithmic increase in overhead.

§ QUANTUM ALGORITHMS

Quantum signal processing (QSP) and its descendants are among the most efficient algorithms for fault–tolerant quantum simulation <cit.>. These qubitized methods can evolve a state up to time t with a precision of ϵ using O(t + log(1/ϵ)) queries to an oracle that block encodes a time–independent Hamiltonian. Spatial overhead is also efficient, as the block encoding can represent the Hamiltonian using O(log N) ancilla. We briefly review these methods as required to frame our analysis.

§.§ Block Encoding Strategy

We implement quantum dynamics using quantum signal processing in the guise of the quantum eigenvalue transform (QET). A prerequisite is the ability to handle our nonunitary Hamiltonian H in a quantum circuit. This is accomplished by block encoding H in a larger unitary operator U_H, U_H = [ H ∗; ∗ ∗ ]. To utilize this, we define a signal state |G⟩ ≡ |0^m⟩ with the requirement that H = (⟨G| ⊗ I_n) U_H (|G⟩ ⊗ I_n), which effectively "extracts" the block that contains our target operator. This embedding must be a contraction in order to maintain unitarity. Stated differently, the spectral norm of the Hamiltonian must satisfy ||H|| ≤ 1. This is untrue for most Hamiltonians, so we must perform a rescaling H ↦ α^-1 H. Note that we are only concerned with the block that encodes H, and thus the behavior of our operations on other blocks can be left undefined. Our nuclear spin Hamiltonian corresponds to a long–range Heisenberg model with a complicated topology. We must map this onto a one–dimensional qubit register for use with a quantum computer. Each term in H = ∑_i=1^M c_i Λ_i will then correspond to a Pauli string with two non–identity factors, Λ_i = I^⊗ p ⊗ P_k ⊗ I^⊗ q ⊗ P_l ⊗ I^⊗ (N-p-q-2). Since each Pauli string is unitary, H is inherently a linear combination of unitary operators (LCU). Our block encoding follows a conventional strategy. We begin by defining a select oracle, U_sel = ∑_i |i⟩⟨i| ⊗ Λ_i, which gives access to a unitary Λ_i in the LCU through an ancillary signal register |G⟩. An index register is used to flag each Λ_i, which means that |G⟩ must contain m = ⌈log_2 M⌉ qubits. Similarly, we define a prepare oracle, U_prep = 1/√(||c||_1) ∑_i √(c_i) |i⟩⟨0^m|, which generates a weighted superposition via ancillary signal states. Here ||c||_1 = ∑_i |c_i| is the 1–norm of the coefficient set {c_i}. Note that we have singled out |G⟩ = |0^m⟩ as a signal state for the prepare oracle. Our Hamiltonian is then delivered through a straightforward construction, U_H = (U_prep^† ⊗ I_n) U_sel (U_prep ⊗ I_n). A circuit representation of this arrangement is depicted in Fig. <ref>b.

§.§ Quantum Eigenvalue Transform

Quantum signal processing (QSP) delivers a means to transform a single eigenvalue by a polynomial function λ ↦ f(λ) <cit.>. The quantum eigenvalue transform (QET) 𝒪_ϕ⃗ goes a step further and extends this over the spectrum of a normal operator <cit.>: 𝒪_ϕ⃗: H ↦ f(H) = ∑_λ f(λ) |λ⟩⟨λ|. This transformation is encoded using a series of phase angles {ϕ_i} that are classically optimized to specify f.
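To make the prepare–select construction concrete, the sketch below assembles U_prep and U_sel for a toy two-spin LCU and checks the block-encoding identity numerically. The four-term Hamiltonian and its coefficients are invented for illustration and are unrelated to the molecular datasets considered here.

```python
import numpy as np
from functools import reduce

# Pauli matrices and a helper for tensor products
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(ops):
    return reduce(np.kron, ops)

# Toy two-spin LCU with made-up coefficients: H = 0.5 XX + 0.5 YY + 0.2 ZZ + 0.3 ZI
terms = [(0.5, kron([X, X])), (0.5, kron([Y, Y])), (0.2, kron([Z, Z])), (0.3, kron([Z, I2]))]
coeffs = np.array([c for c, _ in terms])
alpha = coeffs.sum()                      # 1-norm ||c||_1 used for rescaling
H = sum(c * P for c, P in terms)

m = 2                                     # ancilla qubits: ceil(log2(#terms))
dim_anc, dim_sys = 2**m, 4

# prepare: any unitary sending |0^m> to sum_i sqrt(c_i/alpha)|i>
amps = np.zeros(dim_anc, dtype=complex)
amps[:len(terms)] = np.sqrt(coeffs / alpha)
M = np.eye(dim_anc, dtype=complex)
M[:, 0] = amps
U_prep, _ = np.linalg.qr(M)
U_prep[:, 0] *= np.vdot(U_prep[:, 0], amps)   # fix the sign/phase of the first column

# select: sum_i |i><i| (x) P_i  (identity on unused index states)
U_sel = np.zeros((dim_anc * dim_sys,) * 2, dtype=complex)
for i in range(dim_anc):
    P = terms[i][1] if i < len(terms) else np.eye(dim_sys, dtype=complex)
    proj = np.zeros((dim_anc, dim_anc)); proj[i, i] = 1.0
    U_sel += np.kron(proj, P)

# U_H = (U_prep^dag (x) I) U_sel (U_prep (x) I); its signal-state block is H/alpha
U_H = np.kron(U_prep.conj().T, np.eye(dim_sys)) @ U_sel @ np.kron(U_prep, np.eye(dim_sys))
assert np.allclose(U_H[:dim_sys, :dim_sys], H / alpha)
```

The assertion confirms that the signal-state block of U_H holds H/α, i.e., the Hamiltonian rescaled by its coefficient 1-norm, as required for the eigenvalue transform described next.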
The transform is implemented using qubitized operators (see Appendix I) that include our block encoding U_H, a qubitized reflection operator Z_Π = 2 |G⟩⟨G| ⊗ I_n - I_m ⊗ I_n, and qubitized rotations Z_Π,ϕ = exp[i ϕ Z_Π]. Putting these pieces together, the QET sequence is defined by the product 𝒪_ϕ⃗ = e^{i ϕ_0 Z_Π} ∏_{k=1}^{d} [ U_H Z_Π e^{i ϕ_k Z_Π} ]. In practice, the reflection can be incorporated into the encoding U_H or handled through a judicious choice of the phases. While demonstrating robust asymptotic scaling, the use of U(1) rotations leads to several fundamental restrictions. The most notable is that we can only encode strictly real or imaginary functions of definite parity. In the context of quantum dynamics, this means that we must implement the exponential evolution operator x ↦ exp[i x t] = cos(xt) + i sin(xt) by combining individual circuits for the constituent sine and cosine transforms as an LCU. Conventional QSP also places several restrictions on the encoded polynomials, which limits the ultimate transform to H ↦ exp[-i H t]/2. Given an error ϵ in this encoding (via imprecision in the phase sequence), the probability of a useful outcome will be p = || exp[-i H t]/2 ± ϵ ||^2. This will lie close to 1/4 if ϵ is sufficiently small, potentially requiring repetition of the algorithm to get a workable result. A standard solution is to use robust oblivious amplitude amplification (ROAA), which can increase the success probability to near unity <cit.>. This technique promotes a QET circuit to a walk operator W = 𝒪_ϕ⃗(H), where the extended sequence W ℛ W^† ℛ W can suppress the failure probability to less than 2ϵ, where ϵ is our precision target. However, ROAA also triples the temporal overhead of the algorithm. A recent alternative called generalized quantum signal processing (GQSP) <cit.> follows the spirit of early QSP proposals <cit.> while promoting the signal processing rotations to a series of SU(2) operations, R(λ,ϕ,θ) = [ e^{i(λ + ϕ)} cos(θ)  e^{iϕ} sin(θ); e^{iλ} sin(θ)  -cos(θ) ]. Using these phases, it is possible to transform a block encoded operator H according to [ ∏_{k=1}^{d} R(0,ϕ_k,θ_k) A ] R(λ_0,ϕ_0,θ_0) = [ P(𝒲) ·; Q(𝒲) · ], where A = |0⟩⟨0| ⊗ 𝒲 + |1⟩⟨1| ⊗ I is defined in terms of a walk operator 𝒲 = Z_Π U_H. This delivers a pair of complex polynomial transformations satisfying |P(𝒲)|^2 + |Q(𝒲)|^2 = 1 that are not subject to the same restrictions as conventional QSP methods. It also specifies an extremely simple and efficient algorithm to calculate both the required phase rotations and the form of one polynomial in terms of the other. These benefits compound for Hamiltonian simulation. This is because the spectral representation of 𝒲 = ⊕_λ W_λ admits a particularly useful form, 𝒲 = ⊕_λ [ λ  √(1 - λ^2); -√(1 - λ^2)  λ ] ⊗ |λ⟩⟨λ|, with eigenvalues exp[± i cos^{-1} λ] in each qubitized eigenspace. While this holds true for any QSP method, the generalized scheme goes a step further by giving an efficient means to implement the transform P(𝒲) = exp[-i H t] so that P(𝒲_λ) = [ e^{-i t cos(cos^{-1}(λ))}  0; 0  e^{-i t cos(-cos^{-1}(λ))} ]. This is not possible with many conventional QSP schemes since P(x) is a complex polynomial of indefinite parity. An important consequence is that separate QET transformations are no longer combined using an LCU, eliminating the need for amplification to boost the success probability. We adopt the GQSP scheme for our time evolution circuits [Our practical use of the GQSP formalism actually corresponds to a quantum eigenvalue transform.
However, we retain the name GQSP to maintain consistency with the literature. It is currently unclear whether GQSP can be extended to the more general context of the quantum singular value transform (QSVT).].

§.§ Phase Angle Generation

Our time evolution is implemented using phases that encode the polynomial P(x) up to an error threshold ϵ. We can define a degree-d Chebyshev polynomial approximation to trigonometric functions using the Jacobi–Anger expansion <cit.>, e^{i t cos(x)} ≈ ∑_{n=-d}^{d} i^n J_n(t) e^{i n x}, and refine both the complementary polynomial Q(x) and the SU(2) phase angles (λ_k, ϕ_k, θ_k) using the algorithm of Ref. <cit.>. Here J_n(t) is the n–th Bessel function of the first kind. This method can refine sequences of up to 10^6 phases in roughly one minute using readily accessible classical hardware. This gives a series of 2d+1 phase tuples for a degree-d polynomial approximation, which is comparable to representing x ↦ exp[i x t] = cos(xt) + i sin(xt) using standard methods (e.g., the symmetric phase sequence of <cit.>). The overall GQSP sequence will require 2d+1 SU(2) rotations, 2d qubitized reflections, and 2d applications of our block encoding. Taking the Jacobi–Anger expansion up to degree d = e|τ|/2 + log_10(1/ϵ) will give a truncation error bounded by ϵ. We use this fact in determining both optimization–based phases and the assignment of random phase values, which can be expeditious when constructing larger circuits. When optimizations are explicitly performed, we adopt convergence targets so that the overall phase sequence reproduces the target time–evolution unitary with fidelity F = 1 - ϵ. While we generally use random angles for resource estimation, explicit determinations can be made using the pyLIQTR software suite <cit.>.

§.§ Circuit Implementation

We use a straightforward representation for GQSP which interleaves SU(2) rotations between controlled applications of the walk operator 𝒲 = Z_Π U_H. Note that 𝒲 alternates with its adjoint between repetitions, which is required when encoding a polynomial expansion with terms of negative degree <cit.>. The walk operator is defined by subsequent applications of a multicontrolled CZ operation and the block encoding U_H. This gives a multicontrolled SU(2) rotation between repetitions of U_H or U_H^†, similar to the qubitized rotation operator in a conventional QET sequence. The encoding U_H represents H as an LCU over Pauli strings using select and prepare oracles. We use a prepare oracle that combines QROM lookup with an alias sampling strategy <cit.>, and which tolerates dirty ancilla outside of the selection register. This scheme approximates LCU coefficients to fixed bit precision at reduced T–complexity. Our select is more conventional, affording operators through unary iteration over a control register. Circuits are compiled to the Clifford+T set before resource quantification, which permits operation with common quantum error correction schemes such as the surface code. All of these tasks are accomplished using the pyLIQTR software suite <cit.> and its QUALTRAN extensions <cit.>.

§.§ Observable Estimation and Sampling

In order to calculate ZULF spectra, we must have an initial spin population with a net magnetization along the z-axis. While a single product state like |ψ(t = 0)⟩ = ⊗_k |↑⟩_k might suffice for high–field simulations, the ZULF regime requires us to sample over an ensemble of states with a net z–polarization (e.g., |100…⟩, |010…⟩, …).
The most immediate way to accomplish this is by running a series of simulations that are initiated from each of these states. The resulting data can then be combined classically, 𝒮(t) = ∑_k ⟨ψ_k(t)| S^z_tot |ψ_k(t)⟩ = ∑_k ⟨ψ_0,k| 𝒰^-1(t) S^z_tot 𝒰(t) |ψ_0,k⟩, where 𝒰(t) = exp[-i H t] is the unitary time evolution operator approximated by our QET circuit. Although the number of initial states will grow exponentially in N, the variance in this estimator is much smaller for our particular problem. More specifically, rudimentary statistical arguments can show that 𝒮(t) will be well–reproduced by sampling at most N^2 of these computational basis states. Reproducing this to a precision of ϵ_meas will require O(1/ϵ_meas^2) samples, so we can presume a worst–case scaling of O(N^2 / ϵ_meas^2) for the number of required shots. A more reasonable strategy is to prepare these states in superposition on our quantum computer, e.g., by generating a uniform superposition and filtering with inequality tests as in <ref>. The correlation function 𝒮(t) = ⟨ S^z_tot(t) S^z_tot ⟩ can then be extracted using phase kickback from a pair of total spin operators followed by amplitude estimation. This strategy is depicted in Fig. <ref> and Fig. <ref>. The simplest amplitude estimation strategy would use the Hadamard test with an O(1/ϵ_meas^2) overhead in the number of shots that estimate 𝒮(t). A robust error threshold might be on the order of ϵ_meas = 0.01 for a total of N_shots = 10^4 shots, while a more permissive value of ϵ_meas = 0.05 gives a markedly reduced N_shots = 400 shots. Iterative phase or amplitude estimation procedures offer an alternative strategy, though these come with an additional repetition of the overall circuit <cit.>. Another sampling concern arises when considering the timepoints that are required to reconstruct 𝒮(t) and thus our target spectrum 𝒮(ω). A naïve approach would uniformly sample ⟨ S^z_tot(t) S^z_tot(0) ⟩ with a timestep Δt so that N_points = t_max / Δt is much greater than N. Based on experimental data, a typical, high–resolution spectrum might require 4096 points for reconstruction. However, the frequency domain spectrum 𝒮(ω) is sparse, making it a prime candidate for compressed sensing methods <cit.>. These methods have proven effective for reducing overhead in NMR experiments <cit.>. This reduction can range from a factor of two for high–precision spectra <cit.> to a factor of 40 for crude spectra reconstructed in NISQ experiments <cit.>. Our goal is to reproduce the location of resonances and not mimic high–quality experimental data. Thus, a target of N_points = 400 seems suitable for most purposes, though as few as N_points = 100 could be permissible under many circumstances. These data suggest that we must evaluate N_shots × N_points time evolution circuits to predict a spectrum. This corresponds to between 4 × 10^4 and 4 × 10^6 evaluations depending on the target application. However, many of these evolution circuits will have lower overhead than our explicit estimates, as the latter correspond to the maximal simulation time.

§ RESOURCE ESTIMATES

Many resource estimation tasks focus on a small, curated set of problem inputs. This can make their conclusions susceptible to statistical biases that overestimate or underestimate the true algorithmic overhead. To obtain a more accurate measure, we have generated explicit time–evolution circuits for the nuclear spin Hamiltonians of 1 × 10^4 small molecules.
While this selection is invariably skewed by our pool of application domains, it is nonetheless more representative than a small, hand–selected set of molecules. When possible, we have selected established molecular datasets that are curated to ensure chemical diversity <cit.>.

§.§ Drug Discovery Workflow

We begin by focusing on a drug–discovery workflow. The distribution of temporal overhead, which is quantified through the T–gate count, varies markedly between the different molecular datasets (Fig. <ref>). We can define a classically easy region based on a problem scale that will likely be limiting for Liouville–von Neumann simulators <cit.> when handling highly correlated spin Hamiltonians <cit.>. An additional scale is given by the T–complexity for factoring 2048–bit integers using Shor's algorithm, which has been a motivating application for quantum computation <cit.>. While all of our molecular classes happen to overlap with this region, roughly 79% of the small screening fragments and synthetic precursors fall within its confines (Fig. <ref>). Quantum simulations are less likely to be useful when predicting spectra for these compounds. The outlook for high–throughput screening targets and more complex intermediates in drug synthesis is better. Here, roughly 44% of molecules might benefit from quantum computation in spectral prediction. This pool lies within an order of magnitude of the T–count for factoring 2048–bit integers (Fig. <ref>), suggesting that it could be a relatively near–term application for fault–tolerant quantum computation. This situation is even more optimistic for final drug candidates and natural product leads, where nearly 75% and 93% of compounds fall into the classically difficult regime, respectively. The total number of T–gates required for a single QET shot is also almost entirely within two orders of magnitude of the 2048–bit factoring bound for both pools. While natural products have broad magnetically–active spin systems, this is not directly reflected in the simulation complexity. Notably, the largest of these are almost twice the size of their largest pharmaceutical counterparts. This is reflected through both the number of problem qubits and the number of logical qubits that are used by the QET circuit (Fig. <ref>). Despite this, the average T–counts for classically difficult natural product (9.07 × 10^10 gates) and pharmaceutical (6.69 × 10^10 gates) spin evolutions lie within a factor of two. We address the interplay between spin network complexity, entanglement structure, and simulability in a companion manuscript <cit.>. The modest logical qubit requirements and Shor–scale T–gate counts suggest that quantum computation might help enable the use of ZULF spectroscopy in drug discovery workflows on early fault–tolerant quantum computers. However, our algorithm must also be repeated in order to construct spectra. The use of compressed sensing techniques should enable spectral prediction with roughly 400 timepoints, and our error threshold would require O(1/ϵ_meas^2) repetitions to obtain these to an estimation precision of ϵ_meas. If we assume ϵ_meas = 0.01, which is likely on par with (or better than) the aggregate error in experimental data, this would naïvely necessitate 4 × 10^6 shots to predict a spectrum (or 4 × 10^4 shots under more relaxed constraints). It is worth noting that only marginally greater overhead is found for very large natural products, peptide therapeutics, and small proteins (Table <ref>).
The latter have become feasible for ZULF with the advent of pulse sequences for total correlation spectroscopy (TOCSY) <cit.>. These targeted methods can avoid the spectral crowding that would occur when taking `full–feature' ZULF spectra of very large systems.

§.§ Molecular Fingerprinting Workflow

Our second workflow imagines how quantum computation could aid ZULF NMR spectrometers that are deployed in field applications. We consider a variety of use cases, including (i) explosive and CBRN detection (focusing on organophosphate nerve agents), (ii) the detection of illicit drugs, (iii) clinical screening applications in a healthcare setting, and (iv) quality control in the manufacture of organic electronics and optoelectronics. These results are summarized in Fig. <ref>. Here, we see more limited applicability for the explosive / CBRN and forensic datasets. However, some of these systems might benefit from higher–precision spectral calculations due to the unusual structures of the underlying molecules. This means that we cannot exclude the utility of quantum computation for these spectra. Indeed, the fact that ZULF spectrometers could be used in a sequestered laboratory setting might make them advantageous when characterizing extremely hazardous or unstable materials. More optimistic prospects are seen for clinically relevant molecules and electronic materials.

§ CONCLUSIONS

We have demonstrated how quantum computation could augment the utility of certain low–field NMR spectroscopies. A unique aspect of this work is that it addresses a large number of inputs and provides some of the only explicit, circuit-based estimates (following <cit.>) for qubitized time evolutions using state–of–the–art algorithms. Based on our estimates, several classes of molecules remain at or beyond the limits of classical computation, with impacts that range from drug discovery to electronic materials. Note that a quantum computer might have utility for classically tractable spectral prediction tasks, especially if it is sufficiently inexpensive and capable of high throughput. This is largely because many of these tasks do not benefit from the same classical high–performance computing workflows that impact frontier–scale quantum chemistry and condensed matter physics problems. These limitations are not just a consequence of resource availability or numerical methodology. Since spectral predictions will complement individual experiments, a task that requires weeks of compute time would be diminished in utility for many small molecules (though utility would still hold for challenging targets like natural products and peptides). However, the hardness and scientific merit of these other HPC–amenable problems do not necessarily equate to direct economic impact. Our effort follows several discussions of NMR simulation in a NISQ context <cit.>. Beyond low–field instruments, it has been proposed that quantum computation can find utility in predicting macromolecular and materials spectra for high–field, solid–state NMR. This is certainly plausible based on our observations, and is partially reflected through our peptide–based drug discovery instances. However, there are some challenges in addressing this regime, which originate in the disparate frequency scales between couplings and chemical shifts (which become relevant at high field).
This results in large Hamiltonian normalization factors and a linear increase in problem overhead, though this might be mitigated by certain mathematical transformations of the Hamiltonian. However, it also suggests a unique role for quantum computation in enabling ZULF spectrometers for this regime. Another role for quantum simulation would lie in the design of pulse sequences for spectrometers <cit.>. This is inherently a quantum control problem, and the ability to tackle large spin systems could lead to unanticipated experimental techniques. However, both this and solid–state NMR require methods for the simulation of explicitly time–dependent Hamiltonians <cit.>. While we have shown that a quantum computer could have utility in predicting low–field NMR spectra, we do not make claims regarding quantum advantage. Notably, NMR experiments are characterized by relaxation, though this is substantially slower in low– and ultralow–field regimes. The inclusion of relaxation processes can reduce the required bond dimension – and thus the classical overhead – for tensor network simulations <cit.>. This can dequantize certain problems that are otherwise classically hard. While we cannot fully exclude this possibility here, it is significant that our Hamiltonians have less regularity and greater connectivity than many lattice models. This lends them greater operator entanglement and a commensurate increase in computational hardness. Moreover, these features can make several classical overhead–reduction strategies inapplicable. Our estimates also use relaxation parameters that are plausible yet on the faster end of the ZULF regime, though the general experimental distribution is poorly mapped at present. Based on these facts, it is unlikely that the more complicated spectra can be dequantized as readily as in recent claims regarding quantum supremacy <cit.>. We address these and related considerations in a separate manuscript <cit.>. As a final note, our estimates are generally pessimistic regarding the overhead that is required for spectral prediction. We include a broad range of homonuclear and heteronuclear coupling terms to approach the most taxing, high–precision scenarios. However, many experiments are performed for homonuclear proton networks, with the occasional inclusion of a few heteronuclear couplings with isotopic enrichment. This means that practical Hamiltonians would contain fewer terms, with a commensurate reduction in T–complexity.

§ ACKNOWLEDGEMENTS

This material is based upon work supported by the Defense Advanced Research Projects Agency under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency.

§ APPENDICES

§.§.§ Molecule Input Preparation

Small–molecule structures were prepared using molecular mechanics–based calculations. Initial all–atom geometries were generated from a standard library of hybridization angles, as defined by the input connectivity. Geometries were then relaxed using conjugate gradient minimization in the classical Merck molecular mechanics force field (MMFF94) <cit.>. No electrostatic or van der Waals cutoffs were applied, and optimizations were conducted to a relative energy gradient of 1 × 10^-8. This strategy is sufficient for most biomolecular simulations, where a finite environmental temperature will generally make electronic structure calculations unnecessary.
These calculations were handled using the Open Babel toolkit <cit.>. Note that inputs based on solved protein NMR structures were used without geometry relaxation.
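A minimal sketch of this preparation step using Open Babel's Python bindings (pybel) is given below. The ethanol SMILES string and the step counts are placeholders, and the exact options used for the datasets are not specified in the text.

```python
from openbabel import pybel

# Hypothetical input: build a 3D geometry for a small molecule from SMILES.
mol = pybel.readstring("smi", "CCO")   # ethanol, used here only as a placeholder

# Generate initial 3D coordinates from standard hybridization geometries,
# then relax locally with the MMFF94 force field.
mol.make3D(forcefield="mmff94", steps=50)
mol.localopt(forcefield="mmff94", steps=5000)

print(mol.write("xyz"))
```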
§ REFERENCES

[1] R. R. Ernst, G. Bodenhausen, and A. Wokaun, Principles of Nuclear Magnetic Resonance in One and Two Dimensions (Oxford University Press, 1990).
[2] Y. Hu, K. Cheng, L. He, X. Zhang, B. Jiang, L. Jiang, C. Li, G. Wang, Y. Yang, and M. Liu, NMR-Based Methods for Protein Analysis, Anal. Chem. 93, 1866 (2021).
[3] M. Marušič, M. Toplishek, and J. Plavec, NMR of RNA – Structure and interactions, Curr. Opin. Struct. Biol. 79, 102532 (2023).
[4] C. Fontana and G. Widmalm, Primary Structure of Glycans by NMR Spectroscopy, Chem. Rev. 123, 1040 (2023).
[5] G. S. Rule and T. K. Hitchens, Fundamentals of Protein NMR Spectroscopy (Springer, 2006).
[6] R. Sprangers, A. Velyvis, and L. E. Kay, Solution NMR of supramolecular complexes: providing new insights into function, Nat. Methods 4, 697 (2007).
[7] M. Weingarth and M. Baldus, Solid-State NMR-Based Approaches for Supramolecular Structure Elucidation, Acc. Chem. Res. 46, 2037 (2013).
[8] R. Rosenzweig and L. E. Kay, Solution NMR Spectroscopy Provides an Avenue for the Study of Functionally Dynamic Molecular Machines: The Example of Protein Disaggregation, J. Am. Chem. Soc. 138, 1466 (2016).
[9] R. Puthenveetil and O. Vinogradova, Solution NMR: A powerful tool for structural and functional studies of membrane proteins in reconstituted environments, J. Biol. Chem. 294, 15914 (2019).
[10] P. Bayer, A. Matena, and C. Beuck, NMR spectroscopy of supramolecular chemistry on protein surfaces, Beilstein J. Org. Chem. 16, 2505 (2020).
[11] B. Reif, S. E. Ashbrook, L. Emsley, and M. Hong, Solid-state NMR spectroscopy, Nat. Rev. Methods Primers 1, 2 (2021).
[12] K. M. Cecil, Proton Magnetic Resonance Spectroscopy: Technique for the Neuroradiologist, Neuroimag. Clin. N. Am. 23, 381 (2013).
[13] G. Öz et al., Clinical Proton MR Spectroscopy in Central Nervous System Disorders, Radiology, 658 (2014).
[14] J. M. Tognarelli, M. Dawood, M. I. F. Sharif, V. P. B. Grover, M. M. E. Crossey, I. J. Cox, S. D. Taylor-Robinson, and M. J. W. McPhail, Magnetic Resonance Spectroscopy: Principles and Techniques: Lessons for Clinicians, J. Clin. Exp. Hepatol. 5, 320 (2015).
[15] R. W. Brown, Y.-C. N. Cheng, E. M. Haacke, M. R. Thompson, and R. Venkatesan, Magnetic Resonance Imaging: Physical Principles and Sequence Design (Wiley, 2014).
[16] J. M. Soares, R. Magalhães, P. S. Moreira, A. Sousa, E. Ganz, A. Sampaio, V. Alves, P. Marques, and N. Sousa, A Hitchhiker's Guide to Functional Magnetic Resonance Imaging, Front. Neurosci. 10, 515 (2016).
[17] C. Westbrook and J. Talbot, MRI in Practice (Wiley, 2018).
[18] J. Bernarding, G. Buntkowsky, S. Macholl, S. Hartwig, M. Burghoff, and L. Trahms, J-Coupling Nuclear Magnetic Resonance Spectroscopy of Liquids in nT Fields, J. Am. Chem. Soc. 128, 714 (2006).
[19] J. W. Blanchard, D. Budker, and A. Trabesinger, Lower than low: Perspectives on zero- to ultralow-field nuclear magnetic resonance, J. Magn. Reson. 323, 106886 (2021).
[20] S. J. DeVience, M. Greer, S. Mandal, and M. S. Rosen, Homonuclear J-Coupling Spectroscopy at Low Magnetic Fields using Spin-Lock Induced Crossing, ChemPhysChem 22, 2128 (2021).
[21] T. Theis, P. Ganssle, G. Kervern, S. Knappe, J. Kitching, M. P. Ledbetter, D. Budker, and A. Pines, Parahydrogen-enhanced zero-field nuclear magnetic resonance, Nat. Phys. 7, 571 (2011).
[22] J. W. Blanchard, M. P. Ledbetter, T. Theis, M. C. Butler, D. Budker, and A. Pines, High-Resolution Zero-Field NMR J-Spectroscopy of Aromatic Compounds, J. Am. Chem. Soc. 135, 3607 (2013).
[23] M. Emondts, M. P. Ledbetter, S. Pustelny, T. Theis, B. Patton, J. W. Blanchard, M. C. Butler, D. Budker, and A. Pines, Long-Lived Heteronuclear Spin-Singlet States in Liquids at Zero Magnetic Field, Phys. Rev. Lett. 112, 077601 (2014).
[24] J. W. Blanchard, T. F. Sjolander, J. P. King, M. P. Ledbetter, E. H. Levine, V. S. Bajaj, D. Budker, and A. Pines, Measurement of untruncated nuclear spin interactions via zero- to ultralow-field nuclear magnetic resonance, Phys. Rev. B 92, 220202(R) (2015).
[25] S. Appelt, F. W. Häsing, U. Sieling, A. Gordji-Nejad, S. Glöggler, and B. Blümich, Paths from weak to strong coupling in NMR, Phys. Rev. A 81, 023420 (2010).
[26] Note: The most straightforward digital quantum simulations will reproduce a pure state density matrix as opposed to the mixed state generated by relaxation operators. Instead, we treat relaxation through the decaying exponential factor in Eq. <ref>. Thus, the longitudinal and transverse magnetization profiles become equivalent for determining resonances in the NMR spectrum. It may be prudent to work with S^+ or S^- when developing pulse sequences due to the difference in objectives and potential methodology.
[27] A. Karabanov, I. Kuprov, G. T. P. Charnock, A. van der Drift, L. J. Edwards, and W. Köckenberger, On the accuracy of the state space restriction approximation for spin dynamics simulations, J. Chem. Phys. 135, 084106 (2011).
[28] M. G. Algaba, M. Ponce-Martinez, C. Munuera-Javaloy, V. Pina-Canelles, M. J. Thapa, B. G. Taketani, M. Leib, I. de Vega, J. Casanova, and H. Heimonen, Co-Design quantum simulation of nanoscale NMR, Phys. Rev. Res. 4, 043089 (2022).
[29] K. Seetharam, D. Biswas, C. Noel, A. Risinger, D. Zhu, O. Katz, S. Chattopadhyay, M. Cetina, C. Monroe, E. Demler, and D. Sels, Digital quantum simulation of NMR experiments, Sci. Adv. 9, 1 (2023).
[30] A. Burov, O. Nagl, and C. Javerzac-Galy, Towards quantum utility for NMR quantum simulation on a NISQ computer, arXiv:2404.17548 (2024).
[31] C. Gidney and M. Ekerå, How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits, Quantum 5, 433 (2021).
[32] H. J. Hogben, M. Krzystyniak, G. T. P. Charnock, P. J. Hore, and I. Kuprov, Spinach – A software library for simulation of spin dynamics in large systems, J. Magn. Reson. 208, 179 (2011).
[33] D. V. Savostyanov, S. V. Dolgov, J. M. Werner, and I. Kuprov, Exact NMR simulation of protein-size spin systems using tensor train formalism, Phys. Rev. B 90, 085139 (2014).
[34] M. C. Tayler and L. F. Gladden, Scalar relaxation of NMR transitions at ultralow magnetic field, J. Magn. Reson. 298, 101 (2019).
[35] Y. Zhou, E. M. Stoudenmire, and X. Waintal, What Limits the Simulation of Quantum Computers?, Phys. Rev. X 10, 041038 (2020).
[36] T. Ayral, T. Louvet, Y. Zhou, C. Lambert, E. M. Stoudenmire, and X. Waintal, Density-Matrix Renormalization Group Algorithm for Simulating Quantum Circuits with a Finite Fidelity, PRX Quantum 4, 020304 (2023).
[37] A. Wilzewski, S. Afach, J. W. Blanchard, and D. Budker, A method for measurement of spin-spin couplings with sub-mHz precision using zero- to ultralow-field nuclear magnetic resonance, J. Magn. Reson. 284, 66 (2017).
[38] J. P. King, T. F. Sjolander, and J. W. Blanchard, Antisymmetric Couplings Enable Direct Observation of Chirality in Nuclear Magnetic Resonance Spectroscopy, J. Phys. Chem. Lett. 8, 710 (2017).
[39] D. A. Barskiy, M. C. D. Tayler, I. Marco-Ruis, J. Kurhanewicz, D. B. Vigneron, S. Cikrikci, A. Aydogdu, M. Reh, A. N. Pravdivtsev, J.-B. Hövener, J. W. Blanchard, T. Wu, D. Budker, and A. Pines, Zero-field nuclear magnetic resonance of chemically exchanging systems, Nat. Commun. 10, 3002 (2019).
[40] J. Eills, R. Picazo-Frutos, O. Bondar, E. Cavallari, C. Carera, S. J. Barker, M. Utz, A. Herrero-Gómez, I. Marco-Ruis, M. C. D. Tayler, S. Aime, F. Reineri, D. Budker, and J. W. Blanchard, Enzymatic Reactions Observed with Zero- and Low-Field Nuclear Magnetic Resonance, Anal. Chem. 95, 17997 (2023).
[41] M. C. Tayler, J. Ward-Williams, and L. F. Gladden, NMR relaxation in porous materials at zero and ultralow magnetic fields, J. Magn. Reson. 279, 1 (2018).
[42] M. C. D. Tayler, J. Ward-Williams, and L. F. Gladden, Ultralow-field nuclear magnetic resonance of liquids confined in ferromagnetic and paramagnetic materials, Appl. Phys. Lett. 115, 072409 (2019).
[43] M. Jiang, T. Wu, J. W. Blanchard, G. Feng, X. Peng, and D. Budker, Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance, Sci. Adv. 4, eaar6327 (2018).
[44] M. Abobeih, J. Randall, C. Bradley, H. Bartling, M. Bakker, M. Degen, M. Markham, D. Twitchen, and T. Taminiau, Atomic-scale imaging of a 27-nuclear-spin cluster using a quantum sensor, Nature 576 (2019).
[45] Congressional Budget Office, Research and Development in the Pharmaceutical Industry, Tech. Rep. (2021).
[46] Drug Discovery Trends, https://www.drugdiscoverytrends.com/2023-pharma-50-largest-companies/ (2023).
[47] M. Christel, 2023 Pharm Exec Top 50 Companies, Pharmaceutical Executive 43, 16 (2023).
[48] J. P. Hughes, S. Rees, S. B. Kalindjian, and K. L. Philpott, Principles of early drug discovery, Br. J. Pharmacol. 162, 1239 (2011).
[49] D. A. Erlanson, S. W. Fesik, R. E. Hubbard, W. Jahnke, and H. Jhoti, Twenty years on: the impact of fragments on drug discovery, Nat. Rev. Drug Discov. 15, 605 (2016).
[50] D. J. Newman and G. M. Cragg, Natural Products as Sources of New Drugs over the Nearly Four Decades from 01/1981 to 09/2019, J. Nat. Prod. 83, 770 (2020).
[51] A. G. Atanasov, S. B. Zotchev, V. M. Dirsch, the International Natural Product Sciences Taskforce, and C. T. Supuran, Natural products in drug discovery: advances and opportunities, Nat. Rev. Drug Discov. 20, 200 (2021).
[52] W. Kemp, NMR in Chemistry: A Multinuclear Introduction (Macmillan Education, 1986).
[53] D. Cremer and J. Gräfenstein, Calculation and analysis of NMR spin-spin coupling constants, Phys. Chem. Chem. Phys. 9, 2791 (2007).
[54] T. Helgaker, M. Jaszunski, and P. Swider, Calculation of NMR Spin-Spin Coupling Constants in Strychnine, J. Org. Chem. 81, 11496 (2016).
[55] G. H. Low and I. L. Chuang, Optimal Hamiltonian Simulation by Quantum Signal Processing, Phys. Rev. Lett. 118, 010501 (2017).
[56] G. H. Low and I. L. Chuang, Hamiltonian Simulation by Qubitization, Quantum 3, 163 (2019).
[57] A. Gilyén, Y. Su, G. H. Low, and N. Wiebe, Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics, in Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC 2019), pp. 193–204.
[58] J. M. Martyn, Z. M. Rossi, A. K. Tan, and I. L. Chuang, Grand Unification of Quantum Algorithms, PRX Quantum 2, 040203 (2021).
[59] D. W. Berry, A. M. Childs, R. Cleve, R. Kothari, and R. D. Somma, Simulating Hamiltonian Dynamics with a Truncated Taylor Series, Phys. Rev. Lett. 114, 090502 (2015).
[60] J. M. Martyn, Y. Liu, Z. E. Chin, and I. L. Chuang, Efficient fully-coherent quantum signal processing algorithms for real-time dynamics simulation, J. Chem. Phys. 158, 024106 (2023).
[61] D. Motlagh and N. Wiebe, Generalized Quantum Signal Processing, arXiv:2309.01501.
[62] R. Rines, K. Obenland, and I. Chuang, Empirical determination of the simulation capacity of a near-term quantum computer, arXiv:1905.10724 (2019).
[63] Note: Our practical use of the GQSP formalism actually corresponds to a quantum eigenvalue transform; we retain the name GQSP to maintain consistency with the literature. It is currently unclear whether GQSP can be extended to the more general context of the quantum singular value transform (QSVT).
[64] Y. Dong, X. Meng, K. B. Whaley, and L. Lin, Efficient phase-factor evaluation in quantum signal processing, Phys. Rev. A 103, 042419 (2021).
[65] Quantum Algorithms Team, MIT Lincoln Laboratory, pyLIQTR: A Python library for fault-tolerant quantum algorithms, https://github.com/isi-usc-edu/pyLIQTR.
[66] Quantum Algorithms Team, MIT Lincoln Laboratory, pyLIQTR: A Python library for fault-tolerant quantum algorithms, arXiv:1234.56789 (2024).
[67] R. Babbush, C. Gidney, D. W. Berry, N. Wiebe, J. McClean, A. Paler, A. Fowler, and H. Neven, Encoding Electronic Spectra in Quantum Circuits with Linear T Complexity, Phys. Rev. X 8, 041015 (2018).
[68] Google Quantum AI, QUALTRAN: A quantum algorithms translator, https://github.com/quantumlib/Qualtran (2024).
[69] D. Grinko, J. Gacon, C. Zoufal, and S. Woerner, Iterative quantum amplitude estimation, npj Quantum Inf. 7, 52 (2021).
[70] D. Donoho, Compressed sensing, IEEE Trans. Inf. Theory 52, 1289 (2006).
[71] M. J. Bostock, D. J. Holland, and D. Nietlispach, Improving resolution in multidimensional NMR using random quadrature detection with compressed sensing reconstruction, J. Biomol. NMR 68, 67 (2017).
[72] M. Bostock and D. Nietlispach, Compressed sensing: Reconstruction of non-uniformly sampled multidimensional NMR data, Concepts Magn. Reson. A 46A, e21438 (2018).
[73] S. Robson, H. Arthanari, S. G. Hyberts, and G. Wagner, Nonuniform Sampling for NMR Spectroscopy, Methods Enzymol. 614, 263 (2019).
[74] F. Delaglio, G. S. Walker, K. A. Farley, R. Sharma, J. C. Hoch, L. W. Arbogast, R. G. Brinson, and J. P. Marino, Non-Uniform Sampling for All: More NMR Spectral Quality, Less Measurement Time, Am. Pharm. Rev. 20, 339681 (2017).
[75] R. A. E. Carr, M. Congreve, C. W. Murray, and D. C. Rees, Fragment-based lead discovery: leads by design, Drug Discov. Today 10, 987 (2005).
[76] N. C. Menicucci and C. M. Caves, Local realistic model for the dynamics of bulk-ensemble NMR information processing, Phys. Rev. Lett. 88, 167901 (2002).
[77] J. E. Elenewski and A. Kalev, forthcoming (2024).
[78] I. V. Zhukov, A. S. Kiryutin, F. Ferrage, G. Buntkowsky, A. V. Yurkovskaya, and K. L. Ivanov, Total Correlation Spectroscopy across All NMR-Active Nuclei by Mixing at Zero Field, J. Phys. Chem. Lett. 11, 7291 (2020).
[79] A. S. Kiryutin, I. V. Zhukov, F. Ferrage, G. Bodenhausen, A. V. Yurkovskaya, and K. L. Ivanov, Sequential assignment of NMR spectra of peptides at natural isotopic abundance with zero- and ultra-low-field total correlation spectroscopy, Phys. Chem. Chem. Phys. 23, 9715 (2021).
[80] Y.-H. Chen, A. Kalev, and I. Hen, Quantum Algorithm for Time-Dependent Hamiltonian Simulation by Permutation Expansion, PRX Quantum 2, 030342 (2021).
[81] D. An, D. Fang, and L. Lin, Time-dependent Hamiltonian Simulation of Highly Oscillatory Dynamics and Superconvergence for Schrödinger Equation, Quantum 6, 690 (2022).
[82] J. Tindall, M. Fishman, E. M. Stoudenmire, and D. Sels, Efficient Tensor Network Simulation of IBM's Eagle Kicked Ising Experiment, PRX Quantum 5, 010308 (2024).
[83] T. A. Halgren, Merck molecular force field. I. Basis, form, scope, parameterization, and performance of MMFF94, J. Comput. Chem. 17, 490 (1996).
[84] N. M. O'Boyle, M. Banck, C. A. James, C. Morley, T. Vandermeersch, and G. R. Hutchison, Open Babel: An open chemical toolbox, J. Cheminform. 3, 33 (2011).
http://arxiv.org/abs/2406.07945v1
20240612070659
Making peace with random phases: Ab initio conical intersection dynamics in random gauges
[ "Xiaotong Zhu", "Bing Gu" ]
physics.chem-ph
[ "physics.chem-ph" ]
http://arxiv.org/abs/2406.09099v1
20240613132858
Towards a Function-as-a-Service Choreographic Programming Language: Examples and Applications
[ "Giuseppe De Palma", "Saverio Giallorenzo", "Jacopo Mauro", "Matteo Trentin", "Gianluigi Zavattaro" ]
cs.PL
[ "cs.PL", "cs.DC" ]
Università di Bologna, Italy; OLAS team, INRIA, France; University of Southern Denmark, Denmark § INTRODUCTION Choreographic Programming (CP) is a language paradigm whereby software artefacts, called choreographies, specify the behaviour of communicating participants. Choreographic programming is famous for its correctness-by-construction approach to the development of concurrent, distributed systems. In this paper, we illustrate , a proposal for a CP language tailored for the case of serverless Function-as-a-Service (FaaS). In FaaS, developers define a distributed architecture as a collection of stateless functions, leaving to the serverless platform the management of deployment and scaling <cit.>. We provide a first account of a CP language tailored for the FaaS case via examples that present some of its relevant features, including projection. In addition, we showcase a novel application of CP. We use the choreography as a source to extract information on the infrastructural relations among functions so that we can synthesise policies that strive to minimise their latency while guaranteeing the respect of user-defined constraints. § BACKGROUND ON SERVERLESS We start with a brief overview of serverless computing and the platforms that support it. Developers make a serverless application out of software units called functions, which run in short-lived environments triggered by different kinds of events. When an event such as an HTTP request, database change, file upload or scheduled trigger occurs, the FaaS platform runs an instance of the function(s) linked to that event. The platform runs the code after initialising an execution environment, which is a secure and isolated context that provides all the resources needed for the function lifecycle, typically implemented with virtual machines and containers. We use <Ref>, which depicts a typical serverless platform architecture, to briefly introduce the components and processes behind the execution of serverless functions, useful to contextualise our contribution. The main components of a serverless platform, as shown in <Ref>, are the Controller and the Workers. The Controller can receive requests to execute functions from various media, e.g., an HTTP gateway or a publish-subscribe messaging service like the Simple Notification Service (SNS) by AWS. These media expose endpoints which entities—like Web applications, IoT devices, and databases, as well as running serverless functions—can trigger to invoke the execution of a function. The Controller handles the allocation of functions on Workers based on the latter's status (given a set of metrics like CPU and memory usage, collected by the Controller). The Controller also handles the storage/retrieval of the invocation/result from/to the caller. In particular, the scheduler determines which Worker should execute an invoked function based on factors such as current load, function requirements, and resource availability. 
Once it receives the request to execute a function, the Worker creates a new instance of that function, handling its execution environment lifecycle, including provisioning, scaling, and teardown. § BY EXAMPLE We introduce with the example shown in <ref>, where a function orchestrates (a simplified version of) the training of an image AI model. The whole routine is started by a user, who defines the queries to extract from some databases the labels and images used in the training. The user sends to a serverless function, called f, the queries, which f uses to access two separate databases and obtain the images and labels (ordered and paired). The function then launches the training of each pair image-label in a separate function, called g, which finally triggers a third function, called h, that acts as merger/integrator of the trained weights of the model into a third database. The training process is asynchronous, i.e., the user receives a response from the orchestrator function as soon as it terminates the launching of the training of all image-label pairs. In the choreography, we distinguish three main kinds of entities, found in the preamble at lines 1–3. We start commenting on them from line 3 upwards. The first kind is that of services, which are passive entitites that interact in the choreography providing labelled inbound request-response operations—enumerated within curly brackets in the preamble. For example, in <ref>, the service DB1 offers a request-response operation labelled getData. In general, the caller can discard the response in a request-response interaction, consuming the operation in a one-way fashion. Going up, we find FaaS stateless functions which: a) must be triggered/started by some other active entity via their media endpoints, declared within brackets (e.g., Gateway, SNS and the trigger endpoint annotations), b) provide a request-response triggering behaviour (which the triggerer can invoke in a one-way fashion, discarding their response), c) after their triggering, they cannot receive other messages (but they can send outbound requests, in both one-way and request-response fashion). The last kind of entity is that of stateful participants, which are traditional active processes (no triggering) that can interact with the other entities. In the choreography, we can import operations (e.g., from libraries), as showcased at lines 4–6, where the stateless functions f, g, and h resp. import the operations zip from the Collections library, fit and int(egrate) from the Model one. Since the import instruction has a target function (e.g., f), that functionality is available/imported only at/by that function. The statement we find at line 7 (closing at line 16) is a request-response from the user to the stateless function f, of the form [mathescape=true,numbers=none] exp1role1 <- MEDIUM -> role2 do [| opt_varrole2 |] ... end [ with exp2role2 ] From left to right, we evaluate the expression exp at role1 and send its value via the MEDIUM the function is available at (e.g., the Gateway at line 7 of <ref>) to trigger the execution of function role2. This function can optionally bind the data sent from role1 to a local variable (opt_var) and execute the code within the block until its closure (end). 
The function sends back a response, which is empty unless specified through the suffix of the closure with the clause with followed by an expression at that function (exprole2) evaluated to return a response—considering the body of the triggering block and the initial binding within its scope. Within the body of the block, we first find the request-response invocations (<->) to the respective operations getData of DB1 and DB2 to retrieve the data (resp. labels and images) by f.[The operation getData in the example is blocking and, thus, the second call to DB2 waits until the completion of the previous call (which might take a long time) to proceed. To increase efficiency, a simple extension of the language can include a parallel operator, like the one found in AIOCJ <cit.>, to send the two getData calls in parallel, realising a join pattern.] To bind to a variable the value received by f as the response to the request to the databases (both for DB1 and DB2), we use the forward operator . The idea behind , inspired by Choral <cit.>, is to naturally support a left-to-right reading of the interactions in a choreography. Without , one would need to write an assignment like varb = dataa <- MEDIUM -> b, forcing the user to first parse the expression on the right[Left-to-right: take the data from a, send it via MEDIUM, instantiate b and return the data sent by a to b as the expression's result.] and then go back to the assignment of the resulting value to the |var|iable on the left. Like in Choral, users can call unary functions with in a point-free style—i.e., exp f1 f2 is syntactic sugar for f2(f1(exp))—which we extend to also work as a variable assignment operator. At lines 11–15, after the retrieval of the labels and images, f zips them together and, for each pair, it triggers a new instance of the function g, sending to it the pair. Note that the triggering of g is one-way, (represented by the communication -MEDIUM-> ), which allows us to adopt the lightweight notation found, e.g., at line 12, instead of the more complex one for request-responses we commented for f, above. At triggering/reception, g performs the training (via the fit operation) and then triggers function h to integrate the data into DB3 (invoking storeData as a one-way operation). A notable characteristic of is that there is no need for coordination in constructs such as loops and conditionals when the interaction concerns only stateless functions. Indeed, choreographic languages where processes are stateful (and usually engage in a kind of session-oriented interaction) need either the enforcement of knowledge of choice or amendments such as auxiliary communications to ensure the causality/connectedness of the actions among the processes <cit.>. When conditionals/loops concern only stateless functions (which, once triggered, cannot engage in further synchronisations except for outbound request-responses) these issues do not arise. As a consequence of this triggering behaviour for stateless functions, at each loop at lines 11–15, we bind the identifiers g and h to resp. new function instances, i.e., at each loop f requests the instantiation of new copies of said functions. To further clarify the relationship between knowledge of choice and stateless functions, consider the example below if expf then f -SNS-> g else f -SNS-> h If g and h were stateful processes, we would need to inform them both on which direction the choreography shall proceed, according to the choice taken by f (resulting from the evaluation of exp). 
Lacking this piece of coordination, depending on the choice made by f, either g or h would wait for f's call indefinitely (since either of them does not know that f selected the other branch), exposing the program to deadlocks. On the contrary, since all roles are stateless functions, there is no need to inform, e.g., g that f is choosing the else branch, because g is not running (it is triggered by f's call) and has no risk of ending up in a deadlock state. § PROJECTION We show one of the typical applications of choreographic programming, which is the generation of local code that implements the semantics of the source choreography. In particular, this section aims to provide code examples that FaaS developers can use to get a better grasp of the semantics of the example in <ref>. In the following, we use pseudocode inspired by Ruby and Python and annotate the code to indicate to which entity it corresponds and other information useful for deployment, e.g., the name that the FaaS platform shall bind to the function and how it shall expose the function for consumption (its MEDIUM). We recall that services are passive entities which the other participants use as always-available operations; thus, the produced local code does not include the sources for DB1, DB2, and DB3. Without going too much into the details of the pseudocode, we notice the most salient features linked to serverless function programming. First, in the code of functions, note the presence of a main procedure, which is the one canonically invoked by the platform to execute the behaviour of the function. Second, we find the automatic injection of FaaS platform auxiliary functionalities (one can make these functionalities platform-agnostic by providing different implementations of the same API parametrised w.r.t. specific deployments) provided to trigger the functions, e.g., Gateway.invoke and triggerFn resp. found in the user's and functions' code. To keep the example lightweight, we did not introduce distinct syntaxes for one-ways and request-responses—e.g., one can provide the same API parametrised to either send a request and wait for a response to return it or return immediately. §.§ On Choreography Extraction Before moving on to discuss the interaction between  and APP, we note that their interaction is orthogonal to projection since the necessary artefact is the choreography and not its projection. Indeed, provided the usage of an amenable model for representing/implementing the local code that makes up a FaaS architecture, one can use techniques from the line of work on choreography extraction <cit.> to develop a tool able to synthesise the code corresponding to said architecture. Of course, the top-down approach has the benefit of providing the traditional correct-by-construction approach of choreographic programming <cit.>, besides being a much less complex process than choreography extraction, in computational terms. § APPLICATIONS: THE CASE OF FUNCTION SCHEDULING POLICIES The scheduling of functions, i.e., the allocation of functions over the available workers, can substantially influence their performance. 
Indeed, effects like code locality <cit.>—due to latencies in loading function code and runtimes—or session locality <cit.>—due to the need to authenticate and open new sessions to interact with other services—can substantially increase the run time of functions. Usually, serverless platforms implement opinionated policies that favour some performance principle tailored for one or more of these locality principles. Besides performance, functions can have functional requirements that the scheduler shall consider. For example, users might want to ward off allocating their functions alongside “untrusted” ones—common threat vectors in serverless are limited function isolation and the ability of functions to (surreptitiously) gather weaponisable information on the runtime, the infrastructure, and the other tenants. Although one can mix different principles to expand the profile coverage of a given platform-wide scheduler policy, the latter hardly suits all kinds of scenarios. This shortcoming motivated De Palma et al. <cit.> to introduce a YAML-like declarative language used to specify scheduling policies to govern the allocation of serverless functions on the nodes that make up a cluster, called Allocation Priority Policies (APP). Thanks to APP, the same platform can support different scheduling policies, each tailored to meet the specific needs of a set of related functions. As an example of an application of , we introduce a variant of APP. We extract locality principles that emerge from the choreography—e.g., the loop where f spawns many gs and hs presents a locality linked to the time it takes f to contact SNS and issue the call. Then, given a description of the infrastructure topology and possible user-defined constraints on the allocation of functions, we synthesise an APP script that strives to orient the scheduling of functions to minimise their latency of execution, while guaranteeing the respect of the constraints imposed by the user. §.§ The APP Language To define function-specific policies, APP assumes the association of each function with a tag. In our examples, we directly use the function's reference name as the tag, but the relation can be one-to-many to specify a policy shared among a set of functions. Then, APP associates a tag to a policy, so that, at runtime, the scheduler of the platform can pair each function with its APP policy and follow the latter's scheduling logic. In the APP variant we showcase, we assume to have the nodes of the cluster associated with a label, i.e., several nodes can share the same label, e.g., group1. In an APP script, users can specify a sequence of blocks (each identified by YAML's list unit -) associated with a tag. Each block indicates on which nodes the scheduler can allocate the function. At function invocation, the scheduler tries to allocate the function following the logic in the first block, passing to the next only if none of the machines specified in that block can host the function, and so on (exhausting all blocks causes the invocation's failure). In APP, these nodes take the name of workers, which is also the keyword used in the scripts to specify the label of the nodes for that block. Besides workers, APP lets users specify the strategy the scheduler shall use to select among the indicated workers (e.g., choose at random, for load-balancing) and when a worker becomes invalid (e.g., setting a maximal threshold of concurrent functions running on it). 
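To make the block-by-block semantics just described concrete, the following Python sketch shows how a scheduler could resolve an invocation against an APP-like policy. It is only an illustration of the behaviour described above, not the actual APP implementation: the Worker structure, the dictionary encoding of policies, and the default strategy and invalidation rules are assumptions made for the example.

import random
from dataclasses import dataclass, field

@dataclass
class Worker:
    label: str                                     # group label, e.g. "group1"
    capacity: int = 4                              # used by the invalidation rule below
    running: list = field(default_factory=list)    # tags of functions hosted right now

def schedule(tag, policy, workers):
    """Return a worker for `tag`, or None if every block of its policy is exhausted."""
    for block in policy[tag]:
        # 1. restrict to the workers named by the block ("*" means any label)
        pool = [w for w in workers
                if block["workers"] == "*" or w.label == block["workers"]]
        # 2. drop invalid workers (default rule here: a concurrency threshold)
        pool = [w for w in pool if len(w.running) < w.capacity]
        if pool:
            # 3. pick one according to the block's strategy (default here: random)
            chosen = random.choice(pool)
            chosen.running.append(tag)
            return chosen
    return None                                    # exhausting all blocks fails the invocation

# A toy policy for f: try the group1 machines first, then fall back to any worker.
policy = {"f": [{"workers": "group1"}, {"workers": "*"}]}
workers = [Worker("group1"), Worker("group2")]
print(schedule("f", policy, workers).label)        # prints "group1"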
The variant we present does not use these options—but it is valid APP code nonetheless, since, when omitted, APP uses default strategy and invalidation rules. The only additional element in our variant's syntax is that of affinity. This option accepts a list of tags, where each tag can be prefixed by a !. For instance, if we have a tag a with affinity: b, !c, it means that function a is affine with b-tagged functions and anti-affine with c-tagged ones. Schedule-wise, “anti-affine” means that we cannot allocate the function we want to schedule on a worker that contains instances of any of its anti-affine functions—from the example, we cannot allocate an instance of a on workers hosting instances of c. Complementarily, “affine” means that we can allocate the function under scheduling only on workers that host at least one instance of each of its affine functions—from the example, we can allocate an instance of a only on workers which have at least one instance of b running on them. These (anti-)affinity constraints are useful to specify e.g., security concerns (like separating the execution of trusted from untrusted functions to avoid possible security risks) and performance (like the allocation of functions on the same worker to let them reuse a pool of connections to a database). In APP, affinity constraints are not symmetric, i.e., if we set f affine with g it does not imply that g is affine with f (but one can symmetrise the relation by adding the complementary constraint). §.§ Extraction of Locality Principles and Generation of APP Scripts Briefly, we define the extraction of locality principles from a choreography by attributing data locality—the principle that the closer the function is to the data the lower its latency, proportional to faster access to the data repository—to all functions that access a database (we omitted this information from <ref>, but it is simple to annotate services accordingly); call locality comes from interactions among functions, in particular the repeated ones, which can benefit from running on machines with faster access to the medium that accepts/delivers the call; code locality comes from the re-use of loaded code in a worker's memory (i.e., avoiding fetching and loading times). On the right of <ref>, we find an example of the extracted localities from <ref>. The last ingredient is the infrastructure topology and the constraints that users might want to impose on the functions. We report on the right of <ref> an example of such a schema. In the example, the writing ( a, b ): N indicates that a and b have a connection speed (symmetric) of N (we can abstract away the unit of measure, as long as all items use the same), e.g., ( DB1, group1 ): 100 means that the machines in group1 have a (fast) connection of 100 with DB1. Note that the absence of infrastructural pairs are as important as the present ones, e.g., the fact that there is no couple ( DB3, group1 ) in the schema means that no machine in group1 can reach DB3. For compactness, we understand the specification of user-defined (anti-)affinity constraints of the schema as symmetric, e.g., if (a, b) are anti-affine, we read this constraint as “neither a can run on a worker where a b is running nor b can run on a worker where an a is running”. Above, we set f and g anti-affine to avoid running f on a worker loaded with g (which performs heavy computations to train the model) and vice versa. 
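As a small follow-up to the scheduling sketch above, the (anti-)affinity filter described in this section can be expressed as one more predicate applied to the candidate pool; the list encoding of affinity constraints (e.g. ["b", "!c"]) is again an assumption made for illustration.

def respects_affinity(affinity, worker):
    """True iff the worker satisfies every (anti-)affinity entry, e.g. ["b", "!c"]."""
    hosted = set(worker.running)
    for entry in affinity:
        if entry.startswith("!"):
            if entry[1:] in hosted:      # anti-affine function already on this worker
                return False
        elif entry not in hosted:        # required affine function not present
            return False
    return True

# Plugged into the sketch above, between steps 1 and 2:
# pool = [w for w in pool if respects_affinity(block.get("affinity", []), w)]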
Similarly, we avoid placing more instances of the function g on the same worker and placing the functions g and h together. Given the ingredients from <ref>, we can obtain the APP script reported in <ref>. In the script, we have two subsequent blocks for the allocation of function f. In both blocks, we try to allocate f on group1 because this is the only group of machines that can access both DB1 and DB2. Considering affinities, in the first block, we try to allocate f with other instances of the same function to exploit connection pooling to DB1 and DB2. Following the user-defined constraints, we set a negative affinity with function g (written !g). In the second block, f can run on a worker without another instance of the same function running on it (this item avoids the problem of self-affinity, which would prevent the allocation of an initial f), yet we preserve the anti-affinity with g. Since g has no “favourite” group in the infrastructure (both group1 and group2 reach the same speed w.r.t. the only infrastructural locality it presents, SNS), the policy for g allocates the function on any available worker *. Following the anti-affinity constraints specified by the user, we mark g anti-affine with f (as per the symmetric interpretation of the anti-affinity constraints above), itself, and h (similarly to the anti-affinity with f). Finally, we specify that function h can only run on machines of group2 since these are the only ones that can reach DB3. Following the user-defined constraints, we have h anti-affine with g and h affine with itself to exploit connection pooling. Note that the synthesis of the APP script does not include all extracted localities (cf. <ref>). For instance, call locality did not influence the script, due to the fact that group1 and group2 have the same speed w.r.t. SNS. Another example is the code locality between g and h, which share the same code dependency (Model, cf. <ref>) but which we could not indicate as affine in the APP script due to the user-defined anti-affinity constraints (which have priority over the extracted localities). § CONCLUSION We illustrate , a language proposal for exploring the design space of applying CP to FaaS programming. Besides showcasing relevant features of a CP language for FaaS, we provide application examples that target projection and the management of the scheduling of functions. In the future, we plan to conduct a formal investigation into the expressiveness and limitations of , considering other use cases taken from realistic FaaS architectures and deepening the analysis of the interplay between services, stateful participants, and stateless functions. Another interesting direction is to formally analyse the processes of extraction of locality principles from choreographies and the mechanisation of the synthesis of APP scripts. ACM-Reference-Format
http://arxiv.org/abs/2406.08743v1
20240613020322
Generalizable Implicit Neural Representation As a Universal Spatiotemporal Traffic Data Learner
[ "Tong Nie", "Guoyang Qin", "Wei Ma", "Jian Sun" ]
cs.LG
[ "cs.LG" ]
^a Department of Traffic Engineering, Tongji University, Shanghai, China nietong@tongji.edu.cn, 2015qgy@tongji.edu.cn, sunjian@tongji.edu.cn ^b Department of Civil and Environmental Engineering, The Hong Kong Polytechnic University, Hong Kong SAR, China wei.w.ma@polyu.edu.hk ^* Corresponding authors Extended abstract accepted for presentation at the Conference in Emerging Technologies in Transportation Systems (TRC-30) September 2-3, 2024, Crete, Greece Generalizable Implicit Neural Representation As a Universal Spatiotemporal Traffic Data Learner Tong Nie^a, b, Guoyang Qin^a, Wei Ma^b, * and Jian Sun^a, * June 17, 2024 Keywords: Implicit neural representations, Traffic data learning, Spatiotemporal traffic data, Traffic dynamics, Meta-learning § INTRODUCTION The unpredictable elements involved in a vehicular traffic system, such as human behavior, weather conditions, energy supply and social economics, lead to a complex and high-dimensional dynamical transportation system. To better understand this system, Spatiotemporal Traffic Data (STTD) is often collected to describe its evolution over space and time. This data includes various sources such as vehicle trajectories, sensor-based time series, and dynamic mobility flow. The primary aim of STTD learning is to develop data-centric models that accurately depict traffic dynamics and can predict complex system behaviors. Despite its complexity, recent advances in STTD learning have found that the dynamics of the system evolve with some dominating patterns and can be captured by some low-dimensional structures. Notably, low-rankness is a widely studied pattern, and models based on it assist in reconstructing sparse data, detecting anomalies, revealing patterns, and predicting unknown system states. However, these models have two primary limitations: 1) they often require a grid-based input with fixed spatiotemporal dimensions, restricting them from accommodating varying spatial resolutions or temporal lengths; 2) the low-rank pattern modeling, fixed on one data source, may not generalize to different data sources. For instance, patterns identified in one data type, such as vehicle trajectories, may not be applicable to differently structured data, such as OD demand. These constraints mean that current STTD learning depends on data structures and sources. This limits the potential for a unified representation and emphasizes the need for a universally applicable method to link various types of STTD learning. To address these limitations, we employ a novel technique called implicit neural representations (INRs) to learn the underlying dynamics of STTD. INRs use deep neural networks to discern patterns from continuous input <cit.>. They function in a continuous space and take domain coordinates as input, predicting the corresponding quantity at queried coordinates. INRs learn patterns in implicit manifolds and fit processes that generate target data with functional representation. This differentiates them from low-rank models that depend on explicit patterns, enhancing their expressivity, and enabling them to learn dynamics implicitly. 
Consequently, they eliminate the need for fixed data dimensions and can adjust to traffic data of any scale or resolution, allowing us to model various STTD with a unified input. In this work, we exploit the advances of INRs and tailor them to incorporate the characteristics of STTD, resulting in a novel method that serves as a universal traffic data learner (refer to Fig. <ref>). Our proof-of-concept has shown promising results through extensive testing using real-world data. The method is versatile, working across different scales - from corridor-level to network-level applications. It can also be generalized to various input dimensions, data domains, output resolutions, and network topologies. This study offers novel perspectives on STTD modeling and provides an extensive analysis of practical applications, contributing to the state-of-the-art. To our knowledge, this is the first time that INRs have been applied to STTD learning and have demonstrated effectiveness in a variety of real-world tasks. We anticipate this could form the basis for developing foundational models for STTD. The unpredictable elements involved in a vehicular traffic system, such as human behavior, weather conditions, energy supply, and social economics, lead to a very complex and high-dimensional dynamical system. Spatiotemporal traffic data (STTD) is one of the measurable quantities to describe the evolution of this dynamical system in space and time, such as vehicle trajectories, sensor time series, and mobility flow. To better understand this system, a primary objective of studying STTD is to use noisy measurements to construct data-centric models that can predict the behaviors of traffic system. Despite its complexity, the dynamics of the system is supposed to evolve with some dominating patterns that can be captured by some low-dimensional structures. One of the representative low-dimensional methods is the low-rank model. There is particular interest in using low-rank models to reconstruct sparse data, detect anomalies, discover interpretable patterns, and estimate unobserved system states. While great progress has been made on STTD modeling, limitations persist. They have focused either on structural priors that are applicable to specific data types or have only demonstrated state-of-the-art results with task-dependent parameters that cannot be generalized to different scenarios. For instance, low-rank patterns may vary across different data scales. Additionally, the fitted matrix cannot generalize beyond the current resolutions. Therefore, the development of a universally applicable method for general STTD analysis remains a formidable challenge and a significant gap in contemporary research. To narrow this gap, we attempt to design a general traffic data representation model from the perspective of learning the underlying dynamics of STTD. However, it seems impossible to derive authentic governing equations or explicit regularities of these complex traffic dynamics in all scenarios. This difficulty prompts us to develop data-driven implicit techniques. Recently, there has been a rise in the prominence of implicitly defined, continuous, and expressive data representation models parameterized by deep neural networks <cit.>. The so-called implicit neural presentations (INRs) are defined in the continuous function space, input the coordinate of the definition domain, and predict the quantity of interest at given coordinates. 
Additionally, due to the continuous nature of the representation, INRs are resolution-agnostic and thereby adaptable for STTD ordered in arbitrary scales and dimensions, allowing it to model a variety of STTD even beyond grid resolution. To this end, we leverage the advancements of INRs and customize them to incorporate the characteristics of STTD. This results in a novel method that can serve as a versatile traffic data learner (see Fig. <ref>). As a proof-of-concept, we show the effectiveness of the approach in various well-designed benchmarks, covering scales ranging from corridor to network. Furthermore, it is demonstrated how it can be generalized to different input conditions, data domains, output resolutions, and network topology. In relation to existing work, this study contributes to the state-of-the-art with novel perspectives on STTD modeling and an extensive study of its practical applications. To our knowledge, this is the first modeling paradigm that integrates essential properties for universal STTD learning and is generalizable to a variety of real-world tasks. We hope this will lay the groundwork for developing foundational models for STTD. § METHODOLOGY To formalize a universal data learner, we let MLPs be the parameterization θ. Concretely, the function representation is expressed as a continuous mapping from the input domain to the traffic state of interest: Φ_θ(x,t):𝒳×𝒯↦𝒴, where 𝒳⊆ℝ^N is the spatial domain, 𝒯⊆ℝ^+ is the temporal domain, and 𝒴⊆ℝ is the output domain. Φ_θ is a coordinate-based MLP (Fig. <ref> (b)). §.§ Encoding high-frequency components in function representation High-frequency components can encode complex details about STTD. To alleviate the spectral bias of neural network towards low-frequency patterns, we adopt two advanced techniques to enable Φ_θ to learn high-frequency components. Given the spatial-temporal input coordinate 𝐯=(x,t)⊆ℝ×ℝ^+, the frequency-enhanced MLP can be formulated as: 𝐡^(1) = (𝐖^(0)γ(𝐯)+𝐛^(0)),  𝐡^(ℓ+1) = sin(ω_0·𝐖^(ℓ)𝐡^(ℓ)+𝐛^(ℓ)),  Φ(𝐯) = 𝐖^(L)𝐡^(L)+𝐛^(L), where 𝐖^(ℓ)∈ℝ^d_(ℓ)× d_(ℓ+1),𝐛^(ℓ)∈ℝ^d_(ℓ+1) are layerwise parameters, and Φ(𝐯)∈ℝ^d_out is the predicted value. sin(·) is the periodic activation function with frequency factor ω_0 <cit.>. γ(𝐯) is the concatenated random Fourier features (CRF) <cit.> with different Fourier basis frequencies 𝐁_k∈ℝ^d/2× c_in sampled from the Gaussian 𝒩(0,σ_k^2): γ(𝐯)=[sin(2π𝐁_1𝐯),cos(2π𝐁_1𝐯),…,sin(2π𝐁_N_f𝐯),cos(2π𝐁_N_f𝐯)]^𝖳∈ℝ^dN_f. By setting a large number of frequency features N_f and a series of scale parameters {σ^2_k}, we can sample a variety of frequency patterns in the input domain. The combination of these two strategies achieves high-frequency, low-dimensional regression, empowering the coordinate-based MLPs to learn complex details with high resolution. §.§ Factorizing spatial-temporal variability Using a single Φ_θ to model entangled spatiotemporal interactions can be challenging. Therefore, we decompose the spatiotemporal process into two dimensions using variable separation: Φ(𝐯)=Φ_x(v_x)Φ_t(v_t)^𝖳, Φ_x:𝒳↦ℝ,  v_x↦Φ_x(v_x)∈ℝ^d_x,  Φ_t:𝒯↦ℝ,  v_t↦Φ_t(v_t)∈ℝ^d_t, where Φ_x and Φ_t are defined by Eq. (<ref>). Eq. (<ref>) is an implicit representation of matrix factorization model. But it can process data or functions that exist beyond the regular mesh grid of matrices. To further align the two components, we adopt a middle transform matrix 𝐌_xt∈ℝ^d_x× d_t to model their interactions in the hidden manifold, which yields: Φ(𝐯) = Φ_x(v_x)𝐌_xtΦ_t(v_t)^𝖳. 
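A minimal PyTorch-style sketch may help fix ideas about the representation built in this section: concatenated random Fourier features feed sine-activated layers for each axis, and a learnable middle matrix couples the spatial and temporal factors as in Φ(v) = Φ_x(v_x) M_xt Φ_t(v_t)^T. All sizes, the scales σ_k and the ω_0 value below are illustrative assumptions, not the authors' settings.

import torch
import torch.nn as nn

class CRF(nn.Module):
    """Concatenated random Fourier features for a scalar coordinate."""
    def __init__(self, n_feats=16, scales=(1.0, 10.0)):
        super().__init__()
        B = torch.cat([s * torch.randn(n_feats, 1) for s in scales], dim=0)
        self.register_buffer("B", B)               # fixed (non-trainable) frequencies
        self.out_dim = 2 * B.shape[0]
    def forward(self, v):                           # v: (batch, 1)
        proj = 2 * torch.pi * v @ self.B.T
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class SirenBranch(nn.Module):
    """CRF encoding followed by a sine-activated hidden layer."""
    def __init__(self, hidden=64, out=32, omega0=30.0):
        super().__init__()
        self.crf = CRF()
        self.fc1 = nn.Linear(self.crf.out_dim, hidden)
        self.fc2 = nn.Linear(hidden, out)
        self.omega0 = omega0
    def forward(self, v):
        h = torch.sin(self.omega0 * self.fc1(self.crf(v)))
        return self.fc2(h)

class FactorizedINR(nn.Module):
    """Phi(x, t) = Phi_x(x) M Phi_t(t)^T, with a learnable middle transform M."""
    def __init__(self, dx=32, dt=32):
        super().__init__()
        self.phi_x, self.phi_t = SirenBranch(out=dx), SirenBranch(out=dt)
        self.M = nn.Parameter(torch.randn(dx, dt) / dx**0.5)
    def forward(self, x, t):                        # x, t: (batch, 1) coordinates
        hx, ht = self.phi_x(x), self.phi_t(t)       # (batch, dx), (batch, dt)
        return torch.einsum("bi,ij,bj->b", hx, self.M, ht)  # scalar traffic state

model = FactorizedINR()
x, t = torch.rand(8, 1), torch.rand(8, 1)
print(model(x, t).shape)                            # torch.Size([8])

Because the two branches accept arbitrary real coordinates, the output remains defined away from any fixed grid, which is the resolution-agnostic behaviour emphasised above.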
§.§ Generalizable representation with meta-learning Given a STTD instance, we can sample a set containing M data pairs 𝐱={(𝐯_i,𝐲_i)}_i=1^M where 𝐯_i∈ℝ^c_in is the input coordinate and 𝐲_i∈ℝ^c_out is the traffic state value. Then we can learn an INR using gradient descent over the loss min_θℒ(θ;𝐱)=1/M∑_i=1^M‖𝐲_i-Φ_θ(𝐯_i) ‖_2^2. As can be seen, a single INR encodes a single data domain, but the learned INR cannot be generalized to represent other data instances and requires per-sample retraining. Given a series of data instances 𝒳={𝐱^(n)}_n=1^N, we set a series of latent codes for each instance {ϕ^(n)∈ℝ^d_latent}_n=1^N to account for the instance-specific data pattern and make Φ_θ a base network conditional on the latent code ϕ <cit.>. We then perform per-sample modulations to the middle INR layers: 𝐡^(ℓ+1) = sin(ω_0·𝐖^(ℓ)𝐡^(ℓ)+𝐛^(ℓ)+𝐬^(n)), 𝐬^(n) = h_ω^(ℓ)(ϕ^(n))=𝐖^(ℓ)_sϕ^(n)+𝐛^(ℓ)_s, where 𝐬^(n)∈ℝ^d_(ℓ) is the shift modulation of instance n at layer ℓ, and h_ω^(ℓ)(·|ω∈Θ):ℝ^d_latent↦ℝ^d_(ℓ) is a shared linear hypernetwork layer to map the latent code to layerwise modulations. Then, the loss function of the generalizable implicit neural representations (GINRs) is given as: min_θ,ϕℒ(θ,{ϕ^(n)}_n=1^N;𝒳)=𝔼_𝐱∼𝒳[ℒ(θ,ϕ^(n);𝐱^(n)]=1/NM∑_n=1^N∑_i=1^M‖𝐲^(n)_i-Φ_θ,h_ω(ϕ)(𝐯_i^(n);ϕ^(n)) ‖_2^2. To learn all codes, we adopt the meta-learning strategy to achieve efficient adaptation and stable optimization. Since conditional modulations 𝐬 are processed as functions of ϕ, and each ϕ represents an individual instance, we can implicitly obtain these codes using an auto-decoding mechanism. For data n, this is achieved by an iterative gradient descent process: ϕ^(n)←ϕ^(n)-α∇_ϕ^(n)ℒ(Φ_θ,h_ω(ϕ),{(𝐯_i^(n),𝐲_i^(n))}_i∈ M), where α is the learning rate, and the above process is repeated in several steps. To integrate the auto-decoding into the meta-learning procedure, inner-loop and outer-loop iterations are considered to alternatively update Φ_θ, and ϕ. § RESULTS We conduct extensive experiments on real-world STTD covering scales from corridor to network, specifically including: (a) Corridor-level application: Highway traffic state estimation; (b-c) Grid-level application: Urban mesh-based flow estimation; and (d-f) Network-level application: Highway and urban network state estimation. We compare our model with SOTA low-rank models and evaluate its generalizability in different scenarios, such as different input domains, multiple resolutions, and distinct topologies. We also find that the encoding of high-frequency components is crucial for learning complex patterns (g-h). Fig. <ref> briefly summarizes our results. § SUMMARY We have developed a new method for learning spatiotemporal traffic data (STTD) using implicit neural representations. This involves parameterizing STTD as deep neural networks, with INRs trained to map coordinates directly to traffic states. The versatility of this representation allows it to model various STTD types, including vehicle trajectories, origin-destination flows, grid flows, highway networks, and urban networks. Thanks to the meta-learning paradigm, this approach can be generalized to a range of data instances. Experimental results from various real-world benchmarks show that our model consistently surpasses conventional low-rank models. It also demonstrates potential for generalization across different data structures and problem contexts. We present a new spatiotemporal traffic data (STTD) representation method based on implicit neural representations (INRs). 
By parameterizing STTD as deep neural networks, we train INRs to directly map coordinates to traffic states. Due to the generality of this representation, it can be exploited to model a variety of STTD, such as vehicle trajectory, origin-destination flows, grid flows, highway sensor networks, and urban networks. By virtue of the meta-learning paradigm, it is generalizable to a series of data instances. Experimental results on various real-world benchmarks indicate that our model consistently outperforms traditional low-rank models. It also has the potential to generalize across different data structures and problem settings.
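To complement the meta-learning description in the methodology section (per-instance latent codes adapted by a few inner-loop gradient steps, with the shared weights updated in the outer loop), here is a compact Python sketch of the auto-decoding procedure. The tiny modulated network and all step counts and learning rates are assumptions made for the example, not the authors' architecture or settings.

import torch
import torch.nn as nn

class ModulatedMLP(nn.Module):
    """Base network whose hidden units are shifted by an instance latent code."""
    def __init__(self, latent_dim=16, hidden=64):
        super().__init__()
        self.inp = nn.Linear(2, hidden)            # input: an (x, t) coordinate pair
        self.mod = nn.Linear(latent_dim, hidden)   # hypernetwork: code -> shift modulation
        self.out = nn.Linear(hidden, 1)
    def forward(self, v, code):
        h = torch.sin(self.inp(v) + self.mod(code))
        return self.out(h).squeeze(-1)

def inner_adapt(model, code, v, y, steps=3, alpha=1e-2):
    """Adapt one instance's latent code by a few gradient steps (inner loop)."""
    code = code.clone().requires_grad_(True)
    for _ in range(steps):
        loss = torch.mean((model(v, code) - y) ** 2)
        (grad,) = torch.autograd.grad(loss, code, create_graph=True)
        code = code - alpha * grad
    return code

model = ModulatedMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# One outer step over a toy batch of two "instances" (random data, for shapes only).
batch = [(torch.rand(32, 2), torch.rand(32)) for _ in range(2)]
opt.zero_grad()
total = sum(torch.mean((model(v, inner_adapt(model, torch.zeros(16), v, y)) - y) ** 2)
            for v, y in batch)
total.backward()                                   # outer loop: update the shared weights
opt.step()
print(float(total))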
http://arxiv.org/abs/2406.09171v1
20240613143141
Schur Quantization and Complex Chern-Simons theory
[ "Davide Gaiotto", "Jörg Teschner" ]
hep-th
[ "hep-th", "math.QA" ]
§ INTRODUCTION Schur correlation functions are a special class of protected quantities in four-dimensional N=2 Supersymmetric Quantum Field Theories which have attracted considerable attention in the last few years <cit.>. A main goal of this paper is to employ Schur correlation functions to define an interesting collection of quantum mechanical systems whose properties are determined by the parent 4d SQFTs. The procedure is closely related to a previous construction of quantum mechanical systems whose properties are determined by 3d N=4 SCFTs <cit.>. The operator algebra for the quantum mechanical system associated to a four-dimensional N=2 supersymmetric quantum field theory T is the *-algebra double 𝔄[T] ≡ A[T] × A[T]^op, with a *-structure defined below. Here A^op denotes the algebra with the same elements and addition as A but opposite multiplication and A[T] is the “quantum K-theoretic Coulomb branch algebra”[We caution the reader that some mathematical papers incorrectly refer to the “K-theoretic Coulomb branch algebra of a 3d N=4 SQFT” when describing the algebra associated to a 4d theory with the same field content.] <cit.>, which describes the fusion of (K-theory classes of) half-BPS line defects in T. This algebra inherits many remarkable properties from the webs of dualities typical of these SQFTs. Our goal is to identify a natural choice of Hilbert spaces ℋ[T] on which 𝔄[T] is represented unitarily. The quantum mechanical system defined in this way will reflect important properties of the parent SQFT. Schur correlation functions can be defined as Witten indices of certain spaces of local operators, as reviewed below, or equivalently as supersymmetric partition functions on an S^3 × S^1 geometry, with line defects wrapping the S^1 factor. Intuitively, the relation with a quantum mechanical system arises from an unusual factorization of the 4d geometry along a supersymmetric S^2 × S^1 slice including line defect insertions <cit.>, leading to a representation of correlation functions as expectation values of elements of 𝔄[T] between states produced by the path integral over half of the geometry. This intuition is motivated by some formal properties enjoyed both by the explicit localization formulae which compute the Schur index of Lagrangian gauge theories and by the conjectural IR localization formulae which compute the Schur index of any SQFT with a Seiberg-Witten effective description. 
In both cases, the correlation function takes the form of an expectation value of certain operators acting on an auxiliary Hilbert space _aux. The relations between the auxiliary Hilbert spaces associated to dual presentations of the same theory can be far from obvious in general, although one may identify unitary operators which intertwine different presentations in a few specific cases. One would like to argue that these auxiliary Hilbert spaces carry different presentations of a structure which is canonically associated to the SQFT itself.[It is a bit challenging to make this intuition precise. The natural way to give a physical construction of an Hilbert space equipped with a positive-definite inner product is to consider a unitary supersymmetric quantum mechanical system and project to its ground states. The original S^3 × S^1 geometry is not equipped with an isometry which could play the role of an Hamiltonian for the S^2 × S^1 slice. Presumably, one may seek a family of rigid supergravity backgrounds which interpolates between the original S^3 × S^1 geometry and a situation where the required SQM setup can be defined at least locally around the S^2 × S^1 slice. To the best of our knowledge, the required tools have not yet been developed. We will not attempt to do so.] We will produce a candidate via a GNS-like construction which only employs the Schur correlation functions and an expected positivity property which has been verified in great generality. The candidate is defined as the closure of A under a certain positive-definite inner product, equipped with the natural left- and right- action of A on itself. In particular, it is equipped with a dense collection of states |a⟩ labelled by elements a∈A and generated from a “spherical” vector |1⟩ associated to the identity element. Intuitively, |1⟩ represents the path integral over half of the S^3 × S^1 geometry and |a⟩ a path integral with an extra line defect insertion. The |a⟩ states are not orthonormal, but have inner products explicitly given by the Schur correlation functions. The pair (,) reflects various important properties of the parent SQFT. The formal factorization of localization formulae can be recast as the existence of an isometry mapping to the corresponding auxiliary Hilbert spaces _aux. One may then investigate if the isometry may actually define an isomorphism, intertwining conjectural equivalences between different auxiliary descriptions.[Analogously, if an alternative physical Hilbert space _phys with the desired properties can be defined as in the previous footnote, it will necessarily include analogous vectors |1⟩_phys and |a⟩_phys with the same inner products as |a⟩ and thus will include an isometric image of .] An important application this approach is to give an uniform characterization of the quantum mechanical systems associated to theories of class <cit.>. Recall that theories of class are labelled by the data of an ADE Lie algebra and a Riemann surface C, possibly decorated in a manner we will not review here <cit.>. This data is used to define a supersymmetric compactification on C of the six-dimensional (2,0)-SCFT labelled by , leading to a 4d N=2 theory T[,C]. Remarkably, the corresponding K-theoretic Coulomb branch algebra A[,C] has a geometric description in terms of skeins on C labelled by finite-dimensional representations <cit.>. We will derive a dual description of the Schur correlation functions as C × D^2 correlation functions in the four-dimensional Kapustin-Witten theory <cit.>. 
Factorization along a diameter allows us to identify as the Hilbert space of a Chern-Simons theory with complex gauge group <cit.>. The algebra maps to the algebra of space-like skeins of Wilson line operators in Chern-Simons theory and the spherical vector to the boundary state for a very special topological boundary condition. The construction is somewhat analogous to the quantum double construction of conventional 3d TFTs <cit.>. When =𝔰𝔩_2, we expect the construction to be related to a Lorentzian de Sitter variant of the Ponzano-Regge model <cit.>. There are strong similarities with 3d loop quantum gravity constructions <cit.> but the unitary structure appears to be novel. We will verify this proposal for several four-dimensional N=2 supersymmetric quantum field theories which have a simple class description with = 𝔰𝔩_2, and compare it with a more conventional approach to the quantization of complex Chern-Simons theory in a typical example. For reason of space, we will focus on UV localization formulae in this paper and make connections to IR formulae in a companion paper <cit.>. We will also give a general comparison between our “Schur quantization” approach to complex Chern-Simons theory and previous approaches <cit.>. Some of the existing approaches to complex Chern-Simons theory are topological in nature. Constructions based on the 3d-3d correspondence <cit.> also implicitly or explicitly employ the relation to the 6d (2,0)-SCFTs and are obviously closely related to this work. Comparison with these approaches will be mostly be postponed to our companion paper <cit.>, as IR formulae play a crucial role. Another approach is based on a quantum deformation of the Lorentz group SL(2,) <cit.>. Remarkably, we will find that the quantum theory defined by Schur correlation functions is related to a quantum deformation of SL(2,) that is different from the one used in <cit.>. Both appear to fit into a larger family associated to Schur correlation functions decorated by surface defects <cit.>, but we expect the construction we propose to be special within this larger class of options: the surface defects will generically not be canonical nor invariant under dualities. Another approach to the quantization of complex Chern-Simons theory uses the splitting of flat connections in to (1,0) and (0,1)-parts defined by a complex structure on C <cit.>. As discussed in the companion paper <cit.>, one is thereby led to a quantization scheme related to the non-compact WZW model with target G_/G_ c, with G_c the compact real form of G_, and to a one-parameter deformation of the analytic Langlands correspondence. The relation to complex Chern-Simons theory suggests that the complex-structure dependent quantization is equivalent to the topological quantization. The relations with class theories furthermore predict an equivalence with the quantum theories defined by the Schur correlation functions. The rest of the introduction will draw a somewhat more detailed picture. §.§ Schur indices The Schur indices <cit.> of four-dimensional N=2 supersymmetric quantum field theories decorated by half-BPS line defects <cit.> represent the physical basis of our proposal. References <cit.> review of some of the properties of half-BPS line defects, and <cit.> introduces the holomorphic-topological twist as a tool to study them. A mathematical definition of a monoidal, ^*-equivariant category expected to capture the properties of half-BPS line defects in Lagrangian gauge theories has more recently been given in <cit.>. 
We expect that an analogous category 𝖫𝗂𝗇𝖾𝗌[] exists for any 4d N=2 SQFT T. Decorated Schur indices only depend on ^*-equivariant K-theory classes of line defects, which define the algebra A_[] ≡ K_^*(𝖫𝗂𝗇𝖾𝗌[]) over [, ^-1], where is the ^*-equivariant parameter but also plays the role of the spin fugacity in the Schur index context. From now on, whenever we mention a line defect, we usually refer to its K-theory class. Given two half-BPS line defects L_a and L_b, one may consider the space of local operators which may appear at a junction between L_a and L_b, i.e. the space of line defect-changing local operators. The line defect Schur index I_a,b() can be defined as the Witten index of this space of local operators, graded by Spin(2) rotation quantum numbers with fugacity <cit.>.[If thus gives the equivariant character of morphisms in 𝖫𝗂𝗇𝖾𝗌[], with being the equivariant parameter for the ^* action on the category.] The Schur indices often admit an interpretation as a partition function of superconformal N=2 supersymmetric quantum field theories on the euclidean four-manifold S^1 × S^3. Schur indices can also be defined for theories which are not super-conformal and are expected to still admit an S^1 × S^3 interpretation for some rigid supergravity background. To the best of our knowledge, such a background has not yet been described in detail yet, though its existence follows from general considerations about the holomorphic-topological twist <cit.> of the theory.[Indeed, the Schur index can be computed in the HT twist of the theory placed on a quotient of ^2× by a dilatation which acts on by a factor of .] The Schur indices I_a,b() give a pairing on A_. Our main conjecture is that I_a,a()>0 for all a∈ A_, a≠ 0. (Positivity) for 0<^2<1. This conjecture will be checked in many examples later in this paper. Conjecture positivity implies that the hermitian form on the complexification of A_ defined by ⟨ a|b⟩=I_a,b() is positive definite, and therefore defines a scalar product on A_. The L^2 closure of A_ under such pairing defines the Hilbert space _ of interest here: L^2-normalizable linear combinations of the vectors |a⟩ associated to the line defects L_a. The representation of A_ on _ has remarkable properties. The space _ contains a distinguished vector |1⟩∈_ associated to the unit element of A_. There are two natural actions of A_ on _, associated to left- and right multiplication in A_, W_a|b⟩=|ab⟩, W_a|b⟩=|ba⟩, respectively. It is clear that |1⟩ is cyclic with respect to these actions, in the sense that the space spanned by the vectors W_a |1⟩ is dense in _. From Wa-tildeWa it follows that W_a|1⟩=W_a|1⟩ . Vectors |1⟩ satisfying sphdef will be called spherical. General properties of the Schur index also predict the Hermiticity properties of the inner products: there exist an automorphism ρ:A_→ A_, defined over [, ^-1] and naturally extended to be anti-linear over , such that W_a^†=W_ρ(a) , and thus W_a^†=W_ρ^-1(a). We will discuss the physical interpretation of ρ in the main text. This makes the representation of _≡ A_× A_^ on _ unitary with respect to the *-algebra structure defined by a^∗=ρ(a), using the notation a for the element of A^ corresponding to a∈ A. The spherical condition implies that the expectation values a ≡ I_1,a() = ⟨ 1|W_a|1⟩ , define a twisted trace a b = ρ^2(b) a . The positivity condition can be written as ρ(a) a > 0 . 
We will later argue that there is a one-to-one correspondence between positive traces on algebras A_ and unitary representations of _ containing a spherical vector |1⟩. Both descriptions involve the automorphism ρ as a characteristic piece of data.[We will see in the main text that the construction of _ can be modified by the insertion of surface defects in the Schur index. This can lead to positive traces on A_ twisted by automorphisms ρ' distinct from ρ. They lead to spherical unitary representations of the corresponding *-algebra doubles '_. We will discuss in the main text the relation between the *-algebras _ and '_ and their unitary representations. ] Mathematically, one can identify a linear space of possible twisted traces on the algebra A_ for any given automorphisms ρ. Characterizing the convex cone of positive traces is an interesting mathematical problem. The mathematical problem to classify positive traces of potential relevance for Abelian gauge theories has been studied in <cit.>. The choice of ρ from Schur quantization appears to be distinguished by two properties: a positive ρ^2-twisted trace exists and is unique. It would be very interesting to find generalizations of this result. We expect that the supergravity backgrounds representing the Schur indices as partition functions on S^1 × S^3 are reflection positive, implying positivity on general grounds. However, as this has not been demonstrated yet, we will later verify positivity in many examples by direct computations based on Lagrangian descriptions of the theories . We should also observe that positivity is built into the conjectural IR formulae for the Schur indices <cit.>. It should be noted that the theories may admit several Lagrangian descriptions, leading to different formulae for the Schur indices of one and the same theory . The fact that the Schur indices do not depend on the couplings suggests that all these different formulae represent the same function of . This is a highly non-trivial property which is challenging to prove even in simple examples. §.§ Schur quantization of K-theoretic Coulomb branches The quantum system abstractly defined by the above construction has an intimate connection with the K-theoretic Coulomb branch [T], i.e. the moduli space of Coulomb vacua of the four-dimensional N=2 supersymmetric quantum field theories compactified on a circle while preserving all supercharges. The moduli space [T] is a hyper-Kähler manifold which is a complex integrable system in one of the complex structures <cit.>. Half-BPS line defects wrapping the circle provide a basis of the commutative algebra A_cl of holomorphic functions on [T] <cit.> in a different (generic) complex structure. The algebra A_cl is isomorphic to the classical limit → 1 of A_.[There are actually two classical limits →± 1 and two closely related versions _±[T] of the K-theoretic Coulomb branch <cit.>, depending on the circle-compactification being twisted by the fermion number or by the center of the SU(2)_R symmetry of the theory.] A precise mathematical definition of the K-theoretic Coulomb branches of quiver gauge theories has been given in <cit.>, leading to powerful techniques for the computation of difference operator realisations of A_ <cit.> compatible with localization formulae for the Schur indices. The quantum system (_,_) defined from Schur indices defines a quantization of the complex symplectic space space [] as a real phase space, with = e^- ħ for real ħ, henceforth called Schur quantisation. 
The *-algebra quantizes the classical Poisson algebra generated by holomorphic and anti-holomorphic functions on [T].[The classical definition of the automorphism ρ which appears in the *-structure is subtle and interesting. The moduli space [T] is hyper-Kähler, with a circle worth of complex structures which give essentially the same complex manifold. An holomorphic function a on [T] can be “hyper-Kähler rotated” along this circle and mapped to an holomorphic function in the opposite complex structure. Complex conjugation maps it back to an holomorphic function ρ(a). ] Schur quantization inherits extra structures from a larger collection of protected Schur correlation functions. In particular, Schur “half-indices” which count protected local operators supported on half-BPS boundary conditions or interfaces for T can be interpreted as distributional states or kernels in _. The physical interplay between lines and boundaries/interfaces equips these states/kernels with a specific action of _. For example, certain interfaces implement unitary equivalences associated to dualities or RG flows of T <cit.>. Schur quantization can also be regarded as a four-dimensional uplift of the “sphere quantization” introduced in <cit.> for the Coulomb branch of three-dimensional N=4 SCFTs. It is furthermore related to brane quantization <cit.>. §.§ Class examples Explicit descriptions of the algebras A_ are also known whenever the four-dimensional N=2 supersymmetric quantum field theories are in class <cit.>. Such theories can, by definition, be described as compactifications of the (2,0)-supersymmetric six-dimension­al theory on Riemann surfaces C. This description implies a description of the K-theoretic Coulomb branches of the moduli spaces of vacua associated to such theories as moduli spaces (G,C) of flat complex G_-connections on C.[The reader may be confused by the jump from the ADE Lie algebra labelling T[,C] to the global form of a group G_ in (G,C). There are some subtleties concerning T[,C] being a relative theory <cit.> which we will neglect as much as possible in this paper.] The Poisson algebra Sk(C,G) of algebraic functions on (G,C) is generated by the W_a,cl trace functions W_a,cl≡_R Pexp∮_ℓ labelled by pairs a=(R,ℓ), with ℓ being a simple closed curve ℓ on C, and R being a finite-dimensional representation R of G, as well as functions labelled by more general networks a of holonomies along open paths on C contracted by intertwining maps. The Poisson bracket relations among the functions W_a,cl on (C,G) admit a simple diagrammatical description via skein manipulations. A lot is known about the quantization of such moduli spaces on the algebraic level. The quantization of the Poisson algebra Sk(C,G) is essentially canonical. It yields the skein algebra Sk_(C,G), a non-commutative algebra having generators W_a, satisfying explicitly known diagrammatic relations.[We will ignore here some interesting subtleties about (G,C) being related to the → 1 or → -1 classical limits.] The representation theory of the algebra Sk_(C,G) is highly non-trivial. It depends heavily on the allowed range of values of the parameter . We are here interested in the case 0<^2<1 and in unitary representations of A_ where the generators of Skein_(C,G) will get represented by normal operators on a Hilbert space _. Schur quantization of theories of class gives us precisely such a quantization which is conjecturally canonical, i.e. it only depends on C and G. 
The representations of interest in the context of Schur quantisation are distinguished from previously studied representations by the existence of a cyclic spherical vector. Later in the paper, we will discuss in a typical example a more conventional approach to the quantisation of (C,G), and show how a spherical vector can be constructed in this approach. Once a spherical vector is found, expectation values ⟨ 1|W_a|1⟩ give a positive twisted trace. We will show that ⟨ 1|W_a|1⟩ coincides with Schur indices I_1,a() derived using Lagrangian descriptions of the associated theory of class .[The check is relatively straightforward, as the coordinate system traditionally used to quantize (G,C) happens to be compatible with the localization procedure employed in the calculation of the Schur index. It is nevertheless instructive.] Observe that a mathematical proof of the uniqueness of positive twisted traces on Sk_(C,G) with the correct ρ would allow one to streamline the quantization of (G,C), making many of the properties suggested by the connections to theories of class and their Schur indices manifest. §.§ Lift to Kapustin-Witten theory and a dictionary to Schur quantization There is a relation between Schur quantization and complex Chern-Simons theory which can be motivated by a chain of dualities involving six-dimensional maximally-supersymmetric SCFTs, as discussed in more detail in our companion paper <cit.>. The first half of the duality chain maps the Schur index of a class theory to a partition function of the Kapustin-Witten twist <cit.> of N=4 Supersymmetric Yang Mills gauge theory with gauge group G, which is placed on the product of C with a disk D^2 having Neumann boundary conditions. The original half-BPS line defects in the Schur index map to Wilson lines wrapping skeins in C, placed at the boundary of the disk[We are working in the generic KW twist, which does not admit bulk line defects.] in the same order as in the trace.[A disk geometry is a very natural way to define a trace of boundary local operators in a 2d TFT. In general, there is a whole collection of possible traces labelled by insertions of one bulk operators in the middle of the disk. Here that would necessarily be some 4d bulk local operator placed at points in C or a bulk surface defect wrapping C. Back along the duality chain this would map to the insertion of a surface defect in the Schur index, transverse to the plane supporting the line defects. The insertion of surface defects appear to modify ρ. Positivity properties may still hold, see <cit.> for some Abelian examples, but a physical explanation is more challenging.] The second part of the duality chain cuts the disk along a segment. The space of states which the KW theory associates to the segment appears in a natural embedding of complex Chern-Simons theory into the KW twist <cit.>. This is similar to the duality chains previously considered in <cit.> for the case of partition functions on deformed S^4, leading to segment compactifications of KW theory with suitable choices of boundary conditions. A related approach had previously been discussed in <cit.>.[The 4d geometry can also be seen as a 4d uplift of a 2d qYM construction <cit.> and it would be interesting to formulate Schur quantization (and in particular positivity) directly in that language.] The KW path integral on each half disk is then predicted to produce a specific state |1⟩ in complex CS theory, so that a Schur correlation function maps to an expectation value: I_1,a() = ⟨ 1|a|1⟩ . 
The justification for this statement is somewhat non-trivial, involving the deformation of the half-disk to a quotient × [1,-1] by a _2 reflection of both factors. §.§ Relation with complex Chern-Simons theory In this way one arrives at a conjectural representation of the Schur indices in terms of complex Chern-Simons (CS) theory. One may recall that the classical equations of motion of Chern-Simons theory require the complex connection to be flat. On a compact two-dimensional surface C, the theory has a finite-dimensional phase space, the moduli space (C,G) of flat G_ connections on C, equipped with a symplectic form proportional to i ∫_C [δ∧δ - δ∧δ] . Finite-dimensional descriptions of (C,G) can offer a convenient starting point to the quantization using some convenient coordinate systems, but establishing independence on the choices of coordinates may require additional work. Topological invariance of the Chern-Simons functional suggests that the complex CS theory should associate a Hilbert space _ CS(C,G) to any surface C, with _ CS(C,G) depending only the topological type of C. The algebra of observables should coincide with Sk_(C,G)×Sk_(C,G)^ op, with the first factor generated by the quantized holomorphic trace functions W_a (aka space-like Wilson lines for ) and the second factor generated by the quantized anti-holomorphic trace functions W_a (aka space-like Wilson lines for ). The path integral over three-manifolds M_3 having boundary C is expected to define states |M_3⟩∈_ CS(C,G). One may also consider path integrals over three-manifolds of the form ^+ × C, with boundary conditions B imposed at 0× C, in order to define distributions |B⟩. It was argued in <cit.> there should exist a distinguished boundary condition B_c characterized by the condition that the holonomy of , restricted to the boundary C, is unitary. It should define a state |1⟩∈_ CS(C,G) which satisfies W_a|1⟩=W_ a|1⟩. Here we assume having chosen labelling conventions in such a way that we have W_a,cl=W_a,cl when the connection is unitary. This corresponds to a specific Hermiticity condition W_a = W_ρ(a)^†.[In the absence of irregular singularities, we have ρ^2=1. For =𝔰𝔩_2, ρ=1. Irregular singularities on C will complicate the story. Based on the properties of Schur indices and of class theories, we expect ρ to act on line defects ending on irregular singularities by shifting the endpoint from one Stokes sector to the next one around the puncture. i.e. a “pop” in the notation of <cit.>. In the class theory, this corresponds to an anomalous U(1)_r rotation by π, which leads to θ-angle shifts and Witten effect on dyonic lines <cit.>. It would be nice to have a clearer understanding of this point. We will continue the discussion in <cit.>.] Furthermore, it was argued that B_c arises in the chain of dualities mentioned above as the path integral of KW theory on an half-disk. As a consequence, expectation values ⟨ 1|W_a|1⟩ are predicted to match Schur indices I_1,a(), giving an isometry _→_ CS(C,G) which is compatible with the action of Sk_(C,G)×Sk_(C,G)_ op. Analogous arguments predict that the isometry should be compatible with * The action of the mapping class group of C. Indeed, the mapping class group is simply the duality group of T[,C] <cit.>. * The collection of states |M_3⟩ labelled by three-manifolds <cit.>. 
* A richer collection of TFT structure based on the factorization properties of quantum group representations (see <cit.> for a brief review and further references), which can be expressed in terms of physical operations on theories of class <cit.>. We conjecture that the isometry is an isomorphism and thus Schur quantization of theories of class provides a consistent quantization of complex Chern-Simons theory. A crucial aspect of this conjecture is that it requires the states W_a|1⟩ created from B_c decorated by boundary skeins to be dense in the Hilbert space of the theory. One should, of course, compare this approach to previous approaches to the quantisation of complex Chern-Simons theory. We will briefly review the comparison to 2d CFT-based methods, previously discussed in <cit.>, later in this paper. We also refer to <cit.> for a review of cluster algebra-based quantization strategies and to our upcoming work <cit.> for a comparison based on the IR description of Schur indices. In both cases, the comparison proceeds by identifying canonical analogues of the spherical vector |1⟩ to build an isometry from _. §.§ Relations to quantum groups Relations to quantum group theory have played an important, in many cases a basic role in most of the previous studies of quantum CS theories associated to compact or non-compact groups. Quantum group representation theory in particular represents the foundation of the approach to quantum CS theory pioneered by Reshetikhin-Turaev <cit.>. Quantum groups furthermore represent the quantisation of the residual gauge symmetries in the Hamiltonian quantisation of Alekseev-Grosse-Schomerus <cit.>. Deeply related connections to quantum group representation theory have been observed in quantum Teichmüller theory in <cit.>. Quantum Teichmüller theory is related to a sector of the PSL(2,) CS-theory. The modular double of U_q(𝔰𝔩_2) can serve as a crucial link between quantum cluster variables associated to triangulations, and the modular functor structure associated to pants decompositions in this context <cit.>. A generalisation to higher Teichmüller theory has been developed in <cit.>. The factorization algebra approach reviewed in <cit.> unifies and streamlines many of these conceptual threads and connects them directly to KW theory along the lines of <cit.>: the category of representations of quantum groups can be used to describe the theory algebraically, as a generalized Crane-Yetter theory <cit.>. Quantisation of complex Chern-Simons theory has previously been studied in the regime ∈ of our interest in particular in <cit.>. The approach taken in <cit.> follows the strategy of Alekseev-Grosse-Schomerus, using the quantum group U_q(SL(2,)_) constructed in <cit.> and further studied in <cit.> instead to U_q(SU(2)). We are here going to present evidence that Schur quantisation defines a quantisation of complex Chern-Simons theory related to a quantum deformation of the group SL(2,). However, we will see that the quantum group relevant in this context is different from the quantum group used in <cit.> to construct a quantisation of complex Chern-Simons theory. The variant of U_q(SL(2,)_) coming from Schur quantisation deserves further study. It should, in particular, help to develop the quantisation of complex Chern-Simons theory in close analogy to the quantum Teichmüller theory. 
§.§ Relation with conformal field theory An alternative strategy to quantize complex Chern-Simons theory is to pick a complex structure on C and use it to polarize the phase space, treating the (0,1) part of the connection as coordinates and the (1,0) part as momenta <cit.>. Essentially, one focusses on a family of distributional states ⟨𝐱| associated to certain boundary conditions B_WZW for the 3d theory, which fix the gauge equivalence class of _z̅, or equivalently a holomorphic bundle on C. We are using 𝐱 as the notation for a collection of parameters labelling a family of holomorphic bundles on C. States |ψ⟩∈_ CS can thereby be represented by wave-functions ψ(𝐱)=⟨𝐱|ψ⟩. One may naturally consider the space of L^2-normalizable twisted half-densities, ^dR_s(C,G):= L^2(_G, |Ω|^1 + i s/2κ_c), on the space/stack _G of G-bundles on C <cit.>.[The Hilbert space itself can be defined in terms of twisted half-densities on some convenient non-singular open patch in _G. The intricacies of _G, though, can affect the definition of a rigged Hilbert space and of distributional states.] In order to see that this is a natural scalar product one may first note that variations in the complex structure of C are represented by the projectively flat KZB connection <cit.>. One may furthermore check that the parallel transport defined by the KZB connection is formally unitary in L^2(_G, |Ω|^1 + i s/2κ_c). This suggests, in particular, that the KZ connection can be integrated to a unitary representation of the mapping class group. As discussed in more detail in <cit.>, one may then consider the wave-functions 𝒵(𝐱)=⟨𝐱|1⟩, or, more generally 𝒵_a(𝐱)=⟨𝐱|W_a|1⟩. One of the main objectives of <cit.> is to propose a definition of the wave-functions 𝒵_a(𝐱) based on conformal field theory. We conjecture, in particular, that the wave-functions 𝒵(𝐱) can be identified with the partition functions of the WZW models with target G_/G <cit.>. This CFT has a partition function 𝒵_ WZW which can be represented by a twisted half-density on _G satisfying the KZB equations <cit.>. 𝒵_ WZW should in particular be invariant under the mapping class group of C. If the WZW level κ satisfies κ-κ_c ∈ i, we expect that the partition functions 𝒵_ WZW represent elements of ^dR_s(C,G), though normalizability is not obvious. As furthermore discussed in <cit.>, it is natural to modify the partition functions 𝒵_ WZW(𝐱) by the insertion of Verlinde line operators. Representing 𝒵_ WZW(𝐱) as an integral over products of holomorphic and anti-holomorphic contributions allows us to define two types of Verlinde line operators, labelled by the same data a as used to label trace functions, defining modified partition functions (_a𝒵_ WZW)(𝐱) and (_a𝒵_ WZW)(𝐱), respectively. The main proposal made in <cit.> is the correspondence (_a𝒵_ WZW)(𝐱)=⟨𝐱|W_a|1⟩_ CS, (_a𝒵_ WZW)(𝐱)=⟨𝐱|W_a|1⟩_ CS. The crucial consistency condition (_a𝒵_ WZW)(𝐱)=(_a𝒵_ WZW)(𝐱) can be verified with the help of CFT technology. In order to round off the discussion let us note that the physics background outlined above predicts that ⟨ 1|W_a |1⟩_Schur = ⟨𝒵_ WZW, _a𝒵_ WZW⟩_ dR, using the notation ⟨ .,.⟩_ dR for the scalar product in ^dR_s(C,G). This is a rather non-trivial prediction. It would be nice to check it directly. Further support is provided by the relation between the WZW model and Liouville theory <cit.>. 
When G_ = SL(2,), the space (SL(2,),C) admits a second presentation as a twisted cotangent bundle to the space M_g,n of complex structures on C: a choice of complex structure τ together with a choice of oper for that complex structure. Correspondingly, there are oper boundary conditions <cit.> for the 3d CS theory labelled by a choice of complex structure. We expect the corresponding distributional states τ to satisfy ⟨τ|1⟩ = 𝒵_Liouville(τ): the partition function of Liouville theory with central charge c_s = 13 + 6 i (s - s^-1).[It would be interesting to explore the dS/CFT interpretation of this statement <cit.>.] The relation ⟨ 1|W_a |1⟩_Schur = ⟨𝒵_Liouville, _a𝒵_Liouville⟩_M_g,n, involving an integral on M_g,n gives two non-trivial predictions: the right hand side is finite and is computable as a power series in by Schur indices. §.§ Structure of the paper In Section <ref> we discuss Schur quantization in greater detail. In Section <ref> we present a series of examples of increasing complexity where the rank of the gauge group is 1. In Section <ref> we discuss in greater detail the occurrence of complex quantum groups in Schur quantization. In Section <ref> we discuss a relevant example of quantization of complex character varieties based on Fenchel-Nielsen coordinates. Section <ref> discusses the relation to complex Chern-Simons theory. Section <ref> presents a tentative “real” generalization of Schur quantization, with algebra of observable = A_ equipped with some *-structure τ. It should be applicable to a quantization of complex Chern-Simons theory on surfaces with boundaries or cross-caps. We conclude with two Appendices containing some useful formulae for gauge theories with U(N) gauge groups. § SCHUR QUANTIZATION OF K-THEORETIC COULOMB BRANCHES For the sake of clarity, we begin by briefly reviewing a crucial relation between two mathematical structures which can be associated to an algebra A defined[We could relax the condition to A being defined over and ρ being anti-linear with respect to scalar multiplication. The definitions below can be adjusted accordingly.] over and equipped with an invertible automorphism ρ: A → A: * Positive twisted traces, i.e. linear maps : A → which satisfy a b = ρ^2(b)a ρ(a) a >0 . * Spherical unitary representations of the *-algebra[A *-algebra 𝔇 is an algebra equipped with a star-structure. A star-structure is an involutive antilinear map ∗:𝔇→𝔇, ∗(a)=:a^∗, satisfying, (ab)^∗=b^∗ a^∗. Unitary representations of a star-algebra 𝔇 are representations of 𝔇 on an Hilbert space ℋ by operators W_a such that W_a^†=W_a^∗. ] “double” defined as 𝔇 = A ⊗ A^ with star structure a^∗_=ρ(a), using the notation a for the element of A^ corresponding to a∈ A and with ρ being an automorphism of A.[In defining the *-algebra double , we take the underlying vector space of A and A^ to be literally the same. With this choice, ρ is intrinsic to the definition and the spherical condition below is natural. If one forgets the choice of isomorphism of the underlying vector spaces, the *-algebras associated to the same A and different ρ's are equivalent and the choice of ρ only affects the definition of spherical vectors.] Denoting the normal operators representing a∈ A and a∈ A^ by W_a and W_a, respectively, unitarity requires W_a^†=W_ρ(a). The term “spherical” refers to the existence of a spherical vector, a cyclic[I.e. a vector |1⟩ such that |1⟩ is dense in .] vector |1⟩∈ satisfying W_a |1⟩ = W_a |1⟩. We will use the notation |a⟩=W_a|1⟩, a∈ A. 
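A minimal, purely illustrative example of this pair of structures (a degenerate commutative one, not drawn from any specific gauge theory) may help fix ideas. Take $A=\mathbb{R}[v,v^{-1}]$ with $\rho(v)=v^{-1}$, and let $\mu(\zeta)$ be any positive density on the unit circle with $\mu(\zeta^{-1})=\mu(\zeta)$. Then
\[
\mathrm{Tr}\, v^{n} \;=\; \oint_{|\zeta|=1}\frac{d\zeta}{2\pi i \zeta}\,\mu(\zeta)\,\zeta^{n}
\qquad\text{satisfies}\qquad
\mathrm{Tr}\,\rho(a)\, a \;=\; \oint_{|\zeta|=1}\frac{d\zeta}{2\pi i \zeta}\,\mu(\zeta)\,\big|a(\zeta)\big|^{2} \;>\;0
\]
for $a\neq 0$, since $\rho(a)(\zeta)=a(\zeta^{-1})=\overline{a(\zeta)}$ on $|\zeta|=1$, while the twisted trace condition is trivial because $\rho^2=1$ and $A$ is commutative. The associated Hilbert space is the closure of the Laurent polynomials in $L^{2}(S^{1},\mu)$, both $W_{a}$ and $\widetilde W_{a}$ act by multiplication by $a(\zeta)$, and the constant function is a cyclic spherical vector. Commutativity makes the two copies of $A$ act identically, so this only illustrates the trace/representation dictionary itself; the traces produced by Schur quantization below are genuinely noncommutative versions of this picture, with Wilson lines playing the role of the $v^{n}$.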
It is useful to observe that spherical-def relates the representation of A^ on to the right action of A on itself, W_b |a⟩= W_bW_a |1⟩=W_a W_b |1⟩=W_aW_b|1⟩=W_ab|1⟩=|ab⟩. It is straightforward to see how spherical unitary representations define positive traces: Tr a= ⟨ 1|W_a|1⟩ defines a positive twisted trace. Positivity follows immediately from ρ(a) b = ⟨ 1|W_ρ(a) W_b|1⟩= ⟨ 1|W_a^† W_b|1⟩ =⟨ a|b⟩, and the twisted trace condition is also straightforward: Tr ρ^2(b) a=⟨ρ(b)|a⟩=⟨ 1|W_ρ(b)^†|a⟩=⟨ 1| W_b |a⟩ =⟨ 1|ab⟩=Tr ab. We would also like to argue that positive traces canonically define spherical unitary representations. The first step is to make the underlying vector space of A into a module for A ⊗ A^. In order to avoid confusion, we denote as |a⟩ the element of the module corresponding to the element a ∈ A and thus as |1⟩ the element corresponding to the identity. We will use the canonical left and right actions of A on itself in order to introduce the structure as a A⊗ A^-module, using the notations W_a W_c |b⟩ := |a b c⟩. Obviously, the vector |1⟩ is cyclic for the module and satisfies spherical-def. The key step is to define the positive-definite inner product ⟨ a|b⟩≡ ρ(a) b. We may then define an Hilbert space as the L^2 closure of A under the inner product. The algebra A ⊗ A^ acts on by densely-defined operators. We may observe that ⟨ a|W_ρ(b)|c⟩ = ρ(a)ρ(b) c = ρ(ab) c = ⟨ ab|c⟩ =⟨ a | W_b^† |c⟩, indicating that the hermitian conjugation defined by the scalar product def-scalar makes the representation of A ⊗ A^ on into a spherical unitary representation of . One should note, however, that the operators W_a and W_a defined in WwtW-def will be unbounded, in general. We will not attempt to determine under which conditions W_a and W_a admit extensions defining normal operators on . A classical example of this construction is the definition of spherical principal series representations of complex reductive Lie algebras _ starting from the unique traces on the central quotients of U(). This example and many more occur in the context of sphere quantization <cit.>: the positive twisted traces are provided by protected correlation functions of 3d N=4 SCFTs and are studied mathematically in the context of “short star products” <cit.>. Schur quantization similarly produce candidate positive twisted traces on many algebras of interest, including central quotients of U_q() with q=^2. It includes trigonometric deformations of the classical representation theory results found in 3d N=4 SCFTs and much more. We will sometimes use the notation [A,ρ] to denote the *-algebra double of a given algebra A with automorphism ρ. §.§ Schur correlation functions as a twisted trace The Schur index I() was originally introduced as a specialization of the superconformal index of four-dimensional N=2 SCFTs <cit.>. It can either be interpreted as a supersymmetric partition function on a “S^1 ×_^2 S^3” geometry,[For real 0<<1, this denotes a geometry where the radius of S^1 is -log ||^2 times the radius of the sphere, decorated by some extra complexified R-symmetry backgrounds to preserve a specific amount of supersymmetry.] or as a graded Witten index of the space of local operators. Compared with the reference <cit.>, we define = q^1/2 to avoid square roots in our formulae. The Schur index can be generalized to a family of line defect Schur indices I_a,b(), graded Witten indices of the space of local operators intertwining between supersymmetric line defects L_a and L_b. 
In terms of partition functions, this matches a correlation function with two line defect insertions in S^1 ×_^2 S^3: the defect L_b is inserted at a specific point in the sphere and wraps S^1, while L_a is inserted at an antipodal point on the sphere and wraps S^1 in the opposite direction. The line defect Schur indices can be generalized further to a collection of “Schur correlation functions” I_a_1 ⋯ a_n(), with insertions of L_a_i line defects wrapping S^1 at a cyclic sequence of points along a great circle of S^3 <cit.>. These can also be understood as graded Witten indices for spaces of local operators sitting at the junction of multiple line defects. The notation reflects the fact that Schur correlation functions only depend on the relative order of the insertion points along the great circle, up to an important subtlety we discuss next. Supersymmetric line defects break the the U(1)_r R-symmetry of the SCFT and thus occur in one-dimensional families L^ϑ_a rotated into each other by U(1)_r rotations. Different members of the same family preserve different linear combinations of the bulk super-charges. The Schur correlation functions are defined by placing L^ϑ_i_a_i at the locations ϑ_i on the great circle, so that a line defect will move along the family as the location of the line defect insertion is transported along the great circle of S^3. A full circuit along the great circle implements a U(1)_r rotation by 2 π. In Lagrangian SCFTs, the U(1)_r charges which occur in the theory are integral, so that a 2π rotation is trivial. Accordingly, the 2 π rotation brings the line defect back to itself. In other SCFTs, such as Argyres-Douglas theories, a 2 π U(1)_r rotation is non-trivial and gives a defect which preserves the same SUSY as the original one but may be different. We denote the effect of the 2π rotation as a map a →ρ^2(a), so that L^ϑ_a ≡ L^ϑ+ 2 π_ρ^2(a). Then cyclic invariance is twisted as I_a_1 ⋯ a_n() = I_ρ^2(a_n) a_1 ⋯ a_n-1() . The line defect Schur indices are special cases of Schur correlation functions. The precise relation requires accounting for the opposite orientation of a line defect along the S^1 factor. A U(1)_r rotation by π applied to a line defect wrapping S^1 in the opposite direction gives a map a →ρ(a), so that I_a,b() coincides with a Schur correlation function of L_ρ(a) and L_b: I_a,b() = I_ρ(a) b() = I_bρ^-1(a)() . We will now introduce a notation which anticipates another property of the Schur correlation functions: parallel line defects can be fused and the correlation functions are compatible with the fusion operation. A proper definition of the notion of fusion of line defects requires some care <cit.>. We will review some salient aspects momentarily. For now, we recall that one can define a “quantized K-theoretic Coulomb branch” algebra A_ with coefficients in [,^-1], i.e. Laurent polynomials in with integral coefficients, and that wrapped supersymmetric line defects L_a map to elements in A_, which we will denote with the same symbol a and refer to as the K-theory class of L_a. Then all correlation functions are encoded in 1pt functions I_a() via the algebra relations: I_a_1 ⋯ a_n() = I_(a_1 ⋯ a_n)() . and ρ is an algebra automorphism. We will thus define a twisted trace on A_ simply as a ≡ I_a() The trace is twisted by ρ^2: a b = ρ^2(b) a , We are now ready to make a non-trivial claim, supported by the known explicit UV and IR formulae for line defect Schur indices: the pairing ⟨ a|b ⟩≡ ρ(a) b = I_a,b() is positive definite if ∈ [-1,1]. 
This claim should follow from reflection positivity of the associated Schur two-point functions. With a slight abuse of notation, we will also denote as A_ the algebra over the real numbers obtained from ⊗ A_ by specializing the variable to a real number between -1 and 1. According to our initial discussion, we immediately gain a spherical unitary representation of the algebra double _ = A_× A_^ on a real Hilbert space _ defined as the closure of A_ under this inner product. §.§ Non-conformal examples and holomorphic-topological twist The formulae employed to compute the Schur index and correlation functions apply equally well to non-conformal SQFTs and satisfy the properties described above, with an appropriate choice of ρ. This may be surprising, as the original superconformal index only makes sense in the conformal case. Intuitively, this happens because the Schur index does not make use of U(1)_r, which is broken for general SQFTs, but only of the Cartan subgroup of the SU(2)_R R-symmetry, which is generically unbroken in the vacuum. A sharper justification employs the Holomorphic-Topological (HT) twist of 4d N=2 SQFTs <cit.>. A reader interested only in algebraic aspects of our construction can safely skip this discussion and simply keep in mind that the Schur correlation functions technology applies to non-conformal theories as well. The HT twist is a canonical modification of the physical theory which treats a specific nilpotent supercharge as a BRST charge. Accordingly, three out of four translation generators become gauge symmetries and the twisted theory treats two directions as topological and the remaining two as holomorphic. The Schur index “counts” local operators in the HT-twisted theory and is thus defined for generic SQFTs as long as the Cartan sub-algebra of the SU(2)_R R-symmetry is unbroken. Although the HT twist is the natural setting for discussing many properties of the Schur index, a full discussion of this interesting topic goes well beyond the scope of this paper. We will briefly discuss here some expected properties of the HT twist, leaving a full discussion to future work. Even if the original physical theory is not conformal, the HT-twisted theory still enjoys a scale symmetry. Indeed, denoting the holomorphic coordinate as z, the only non-trivial part of a scale transformation is the re-scaling of z, which is implemented by the same z∂_z generator which implements rotations of the holomorphic plane. The rotation generator in the twisted theory is the combination of the physical rotation generator and of the Cartan R of the SU(2)_R R-symmetry . It is also useful to employ conventions where the ghost number grading/homological degree/fermion number is shifted by the R, so that the role of “fermion number” in indices is played by (-1)^R. In these conventions, the Schur index is the Euler character of the complex of local operators in the HT twist of the physical theory <cit.>. The partition function interpretation should also be available within the HT twist: the quotient of ^2_t ×_z by (t,z,z̅) → (||^2 t, ^2 z , ^2 z̅) endows S^1 ×_^2 S^3 with an HT structure. Supersymmetric line defects map to topological line defects wrapping lines in the topological plane in the HT twisted theory. We will always consider line defects supported at the origin in the complex plane and keep track of the ^* rotation symmetry of the complex plane. 
Schur correlation functions can be defined as counting local operators at junctions of such topological defects or as correlation functions of circle-wrapped topological defects in S^1 ×_^2 S^3. Recall that topological line defects generically form a category, with morphisms consisting of defect-changing local operators. Essentially by definition, circle-wrapped line defects only remember the K-theory class of the corresponding objects and so Schur correlation functions take as inputs K-theory classes of line defects in an HT twist of the physical theory. We identify the K-theory as A_ and identify ^2 as the equivariant parameter for rotations of the complex plane. In the presence of a transverse topological direction, as is the case here, the category has a monoidal structure controlling the fusion of parallel line defects. It is also possible to rotate the support of a line in the topological plane and define a dualization functor ρ which maps a line to a line rotated by π. A topological theory may be framed, in which case a rotation ρ^2 of 2 π fails to be the identity. Accordingly, A_ is an algebra over [, ^-1] endowed with an endomorphism ρ. A full mathematical treatment of the category of line defects for the HT twist of Lagrangian gauge theories and of the associated Schur indices can be found in a series of papers <cit.>. This construction accounts nicely for all the expected properties of the Schur index for conformal or non-conformal theories, except for the crucial positivity property: reflection positivity is a property of the physical theory but not necessarily of the twisted theory. A proof for non-conformal theories would thus require one to write an explicit supergravity background defining the supersymmetric S^1 ×_^2 S^3 partition function for the physical theory and verify reflection positivity. We leave this to future work. Experimentally, positivity holds for non-conformal examples as well, with a ρ discussed below. §.§ Schur quantization as a quantization of the K-theoretic Coulomb branch(es) The algebra A_ has classical limits →± 1. In these limits, it reproduces the Poisson algebra of holomorphic functions on two versions _± of the moduli space of 3d Coulomb vacua for supersymmetric circle compactifications of the 4d theory. The two versions differ by the choice of spin structure and central SU(2)_R holonomy placed on the circle <cit.>. We will usually disregard this subtlety. The moduli space is a complex symplectic manifold. Keeping track of the star structure, the classical limit of _ reproduces the combined Poisson algebra of holomorphic and anti-holomorphic functions on . Keeping track of the leading non-commutativity, we see that the holomorphic and anti-holomorphic Poisson brackets have opposite signs, i.e. they arise from the Poisson bracket defined in terms of the imaginary part of the complex symplectic form on .[Notice that the real part of the complex symplectic form is not exact and thus would not give rise to a continuous family of quantizations.] We conclude that the quantum system (_, _) provides a quantization of as a real phase space equipped with the imaginary part of the complex symplectic form. Following considerations similar to these we employ for theories of class later in the paper, one may argue that the Schur correlation functions of a 4d theory can be recast as disk correlation functions in an A-twist of the 2d supersymmetric sigma model with target . 
Ultimately, this presents Schur quantization as a computable example of brane quantization <cit.>. §.§ Lagrangian building blocks The Schur index for Lagrangian SQFTs receives contributions from hypermultiplets and vectormultiplets. It can be readily understood as counting gauge-invariant local operators built from BPS letters in the physical theory <cit.>, or superfields of the HT theory <cit.>. An hypermultiplet contributes holomorphic derivatives ∂^n X and ∂^n Y of the complex scalar fields, with twisted spin n+1/2 and (-1)^R =-1, while vectormultiplets contribute two sets of fermionic generators ∂^n U and ∂^n V, with twisted spin n+1 and (-1)^R =-1 as well. Putting it all together, we get I(;μ) = 1/|W_G|∮_|ζ_i|=1∏_i (^2)^2_∞ dζ_i/2 π i ζ_iΔ(ζ) ∏_α (^2ζ^α;^2)^2/∏_w,w_f (-ζ^w μ^w_f;^2) , where ζ_i are valued in the Cartan torus of the gauge group and the products run over roots α and gauge and flavour weights (w,w_f) of hypermultiplet scalars. The factor Δ(ζ) is the appropriate Vandermonde determinant Δ(ζ) ≡∏_α>0(ζ^α/2 - ζ^-α/2)^2 for projecting on the character of gauge-invariant operators. We included flavour fugacities μ for completeness. In the following we will assume |μ|=1.[This condition seems sufficient, but not strictly necessary for positivity of the Schur indices I_ab. Other reality conditions may also work. We will not explore this phenomenon.] As a first step towards discussing positivity, observe that both roots and non-zero weights occur in opposite pairs, so that the integrand is positive definite when |ζ_i|=1 and |μ|=1. The simplest class of supersymmetric line defects are Wilson lines, labelled by a unitary representation R of G. The insertion of a wrapped Wilson line w_R in a Schur index results in the insertion in the integral (<ref>) of the character χ_R(ζ) of the corresponding representation. The Wilson lines satisfy ρ(w_R) = w_R^∨ and χ_R^∨(ζ) = χ_R(ζ) on the |ζ_i|=1 integration locus. As a consequence, ρ(w_R) w_R = ∮_|ζ_i|=1∏_i (^2)^2_∞ dζ_i/2 π i ζ_iΔ(ζ) ∏_α (^2ζ^α;^2)^2/∏_w,w_f (-ζ^w μ^w_f;^2)|χ_R(ζ)|^2 , is positive and more generally ρ(a) a >0 manifestly for any linear combination a of wrapped Wilson lines. The most general class of supersymmetric line defects in a Lagrangian gauge theory are 't Hooft-Wilson lines ℓ_λ_m, λ_e. Naively, these are labelled by a pair (λ_m, λ_e) of magnetic and electric weights modulo the action of the Weyl group. In practice, monopole bubbling makes the definition subtle.[And so does the option to introduce more elaborate couplings between the gauge fields in the neighbourhood of the defect and auxiliary degrees of freedom supported on the defect. The notion of “Koszul-perverse coherent sheaf” from <cit.> appears to satisfactorily handle these subtleties in the HT twist of the theory. Simple Koszul-perverse coherent sheaves are labelled by the pair (λ_m, λ_e) modulo Weyl.] The calculation of Schur correlation functions with insertions of non-zero magnetic weight is somewhat intricate and requires one to introduce some more formalism. K-theory classes of 't Hooft-Wilson lines and of the resulting K-theoretic quantum Coulomb branch algebra A_ can be handled via the BFN formalism <cit.>: elements of A_ are represented as equivariant K-theory classes on a variant of the affine Grassmannian and the product is defined through certain correspondences. In practice, the generators of A_ can be presented as multiplicative difference operators D_a acting on a collection of formal variables v_i <cit.>.
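To make the localization formula and the manifest positivity on the Wilson line sector concrete, the following minimal numerical sketch treats the Abelian case of a U(1) gauge theory with N_f charge-one hypermultiplets (the SQED_2 example of this kind is discussed further below). It assumes the standard $\mathfrak{q}$-shift in each hypermultiplet factor, i.e. $(-\mathfrak{q}\,\zeta^{w}\mu^{w_f};\mathfrak{q}^{2})_{\infty}$, sets all flavour fugacities to one, and checks that the Gram matrix of U(1) Wilson lines, $\langle w_{e}|w_{e'}\rangle$, is Hermitian and positive definite; the implementation, all names in it, and the numerical choices are ours, not part of the derivation above.
\begin{verbatim}
import numpy as np

def qpoch(x, q, nmax=80):
    """Truncated q-Pochhammer symbol (x; q)_infinity."""
    return np.prod(1.0 - x * q ** np.arange(nmax))

def density(zeta, fq, Nf):
    """Integrand of the U(1), N_f-flavour Schur index on |zeta| = 1,
    assuming hypermultiplet factors (-fq * zeta^{+-1}; fq^2)_infinity."""
    q = fq ** 2
    num = qpoch(q, q) ** 2
    den = (qpoch(-fq * zeta, q) * qpoch(-fq / zeta, q)) ** Nf
    return num / den

fq, Nf = 0.4, 2            # SQED_2 at a sample value of the fugacity
npts, emax = 2048, 6
zeta = np.exp(2j * np.pi * np.arange(npts) / npts)
dens = np.array([density(z, fq, Nf) for z in zeta])

I0 = dens.mean().real      # Schur index: zeroth Fourier mode of the density
E = np.arange(-emax, emax + 1)
gram = np.array([[(dens * zeta ** (e2 - e1)).mean() for e2 in E] for e1 in E])

print("Schur index I ~", I0)
print("Gram matrix Hermitian :", np.allclose(gram, gram.conj().T))
print("smallest eigenvalue   :", np.linalg.eigvalsh(gram).min())
\end{verbatim}
Since the integrand is a positive density on $|\zeta|=1$, the Gram matrix is a Toeplitz matrix of its Fourier modes and its positive definiteness is automatic; the sketch is only meant to illustrate the contour prescription and the Wilson-line part of the positivity claim, not the full statement involving 't Hooft lines.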
In the context of class S theories with a Lagrangian description, these difference operators match <cit.> the description of the Skein algebra A_ = Sk_ in terms of quantum Darboux coordinates (u_i, v_i) u_i u_j = u_j u_i u_i v_j = ^2 δ_ij v_j u_i v_i v_j = v_j v_i , of Fenchel-Nielsen type. Each difference operator D_a is a sum of terms of the form D^(n)_a(v) u^n, which shift v_i → v_i ^2 n_i <cit.>. In particular, the n=0 term D^(0)_a(v) is some rational function of the v_i. The prescription to compute a Schur correlation function with a single line defect insertion is then straightforward: insert D^(0)_a(ζ) in the integral (<ref>). Correlation functions of multiple lines can be computed by first composing the respective difference operators. We should remark that the original BFN construction requires the matter representation M for the hypermultiplet scalar fields to be of cotangent form, i.e. to be a direct sum T^*N of a representation N and its dual N^∨. The resulting K-theoretic Coulomb branch algebra is independent of this choice, but explicit “Abelianized” expressions as difference operators do depend on it. We will discuss momentarily how this dependence cancels out in index calculations. Matter of non-cotangent type can be handled by more refined means <cit.>. Cyclicity of the resulting twisted trace is far from obvious from this prescription. Based on examples, it should follow from contour deformations which are only unobstructed thanks to delicate cancellations between the poles and zeroes in the integrand and in the D_a's. It would be very nice to formulate an abstract proof in the BFN language.[The proof is perhaps already implicitly given by the combination of dualizability results in <cit.> and the relation to the Schur index in <cit.>.] Notice that the integrand in the Schur index is a ratio of θ functions: θ(x;^2) ≡ (- x;^2)_∞ (- x^-1;^2)_∞ , which transform well under shifts θ(^2 x;^2) ≡ (-^3 x;^2)_∞ (-^-1 x^-1;^2)_∞ = ^-1 x^-1θ(x;^2) , up to an overall factor. The overall factors of gauge fugacities accumulated under the shift from the numerators and denominators cancel out in a conformal theory. In a non-conformal theory, they combine to reproduce a non-trivial ρ^2 twist of the trace expected from the following gauge theory considerations. Namely, the conformal symmetry anomaly in a 4d N=2 gauge theory is closely associated to the anomaly in the U(1)_r conformal symmetry. The sort of 2 π U(1)_r rotation which would control the framing anomaly in the HT theory can be mapped to a shift in the θ angle of the theory. By the Witten effect, that results in a shift of the electric charge λ_e of a 't Hooft-Wilson line by an amount proportional to the magnetic charge λ_m and to the anomaly. The π rotation functor ρ flips the signs of both (λ_m, λ_e) and shifts the electric charge by a certain multiple of the magnetic charges, depending on the specific value of the mixed U(1)_r-gauge anomaly coefficients and on precise labelling conventions for the line defects. We refer the reader to concrete examples in the next section. Another feature which we see in concrete examples is that positivity of ρ(a) a can also be demonstrated by a contour deformation to a contour where the measure is manifestly positive. This suggests that a combinatorial proof of positivity in the BFN language may be possible as well.
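As a small worked check of the quasi-periodicity invoked above (we make the standard $\mathfrak{q}$-shifted arguments of the theta function explicit, which is our reading of the convention behind the displayed definition), one has
\[
\theta(x;\mathfrak{q}^{2})\equiv(-\mathfrak{q}\,x;\mathfrak{q}^{2})_{\infty}\,(-\mathfrak{q}\,x^{-1};\mathfrak{q}^{2})_{\infty}
\quad\Longrightarrow\quad
\theta(\mathfrak{q}^{2}x;\mathfrak{q}^{2})
=(-\mathfrak{q}^{3}x;\mathfrak{q}^{2})_{\infty}\,(-\mathfrak{q}^{-1}x^{-1};\mathfrak{q}^{2})_{\infty}
=\frac{1+\mathfrak{q}^{-1}x^{-1}}{1+\mathfrak{q}\,x}\,\theta(x;\mathfrak{q}^{2})
=\mathfrak{q}^{-1}x^{-1}\,\theta(x;\mathfrak{q}^{2})\,.
\]
In the simplest non-conformal Abelian example, a U(1) gauge theory with N charge-one hypermultiplets, the hypermultiplet factors of the integrand assemble into $\prod_{f}\theta(\zeta\mu_{f};\mathfrak{q}^{2})^{-1}$, so the shift $\zeta\to\mathfrak{q}^{2}\zeta$ multiplies the integrand by $(\mathfrak{q}\,\zeta)^{N}\prod_{f}\mu_{f}$ rather than leaving it invariant. Residual monomials of this kind are what the $\rho^{2}$ twist described above keeps track of; in a conformal theory the analogous factors coming from the vectormultiplet numerators cancel them.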
§.§ A useful isometry It is often the case that protected partition functions such as the Schur indices can be factored into pieces which correspond to a decomposition of the underlying geometry <cit.>. In particular, the Schur index can be factored into two “half-indices” II_B(ζ) associated to hemi-spheres with Dirichlet boundary conditions II_B(ζ) = δ_B,0∏_i (^2)_∞∏_α (^2ζ^α;^2)/∏^N_w,w_f (-ζ^w μ^w_f;^2) , glued together (as in a 3d superconformal index for a 3d G gauge theory) by a ζ contour integral and a sum over magnetic charges B on S^2: I(;μ) = 1/|W_G|∑_B ∈Λ∮_|ζ_i|=1∏_i dζ_i/2 π i ζ_iΔ_B(ζ) II_B(ζ^-1) II_B(ζ) . Here Λ is the lattice of magnetic weights of G. The product ∏^N in the definition of the half-index indicates that we have assumed matter of cotangent type T^*N, with N being a representation of G, and we only include the weights for the N half. In particular, II_B(ζ) is invariant under simultaneous Weyl reflection of ζ and B.[If matter is not of cotangent type, the gauge theory has a potential anomaly. If the anomaly cancels, the K-theoretic Coulomb branch and Schur indices are well-defined but there are no Dirichlet boundary conditions which preserve the full G symmetry, making the 3d gluing interpretation of the factorized formula unavailable. Nevertheless, the analysis below essentially goes through even if N is not a representation of G. The main difference is that Weyl reflections will be implemented via non-trivial transformations R_N,N' described below. We expect that difference operator realizations of K-theoretic Coulomb branch generators preserving this modified Weyl symmetry will be available.] The factor Δ_B(ζ) is a modification of the Vandermonde measure: Δ_B(ζ) ≡∏_α>0(v^α/2 - v^-α/2)( v^α/2 - v^-α/2) , where v= ^-Bζ and v = ^Bζ. This factorization resembles an inner product ⟨ II|II⟩ in an auxiliary Hilbert space ^aux_≡ L^2(T×Λ)^W_G , where T is the Cartan torus and Λ the magnetic weight lattice; we use the modified Vandermonde measure Δ_B(ζ) and consider Weyl-invariant wavefunctions only. Remarkably, such a factorization works well with the insertion of line defects. One can formally define multiplication and shift operators (u^m ψ)_B(ζ) =ψ_B-m(^m ζ) (v ψ)_B(ζ) = ^-Bζψ_B(ζ) ( u^m ψ)_B(ζ) = ψ_B-m(^-mζ) ( v ψ)_B(ζ) = ^Bζψ_B(ζ) , and specific expressions for D_a and D_a in terms of these operators such that a = ⟨ II|D_a|II⟩ . The specific expressions for D_a and D_a depend on the choice of N. Formally, the expected properties of the trace/Schur correlators should follow from a non-trivial interplay between the functional form of the half-index and the D_a's D_a|II⟩ = D_a |II⟩ as well as a formal adjointness property D(ρ(a))^† = D(a), which involves a non-trivial shift of the ζ integration contour. A concise way to express these relations is to say that the map A_→^aux_ π: a → D_a|II⟩ is an isometry π with respect to the inner product ⟨ a|b ⟩. Taking the closure of A_, this gives an isometry π: _→^aux_: |a⟩→ D_a|II⟩ Bubbling phenomena make it hard to give any more theory-independent detail about the D_a. The exception is the part of D_a which contains the largest Abelian magnetic charges, which we could denote as U_λ = F_λ(v) u^λ for a magnetic charge λ. Up to an overall monomial, F_λ precisely cancels all the factors in u^λ II_B(ζ) which would obstruct a contour deformation.
Namely, u^λ II_B(ζ) = δ_B,λ∏_i (^2)_∞∏_α (^2+λ·αζ^α;^2)/∏^N_w,w_f (-^1+λ· wζ^w μ^w_f;^2) and F_λ(v)= f_λ(v) ∏^N_w,w_f∏_n=λ· w+1^-1 (1+^1+2 n v^w μ^w_f)/∏_α∏_n=λ·α^-1(1-^2+ 2n v^α) where we only include factors with λ· w<0 or λ·α<0 and f_λ(v) is some monomial. Hence U_λ II_B(ζ) = f_λ(v) δ_B,λ∏_i (^2)_∞∏_α (^2+|λ·α|ζ^α;^2)/∏^N_w,w_f (-^1+|λ· w|ζ^w μ^w_f;^2) The monomials f_λ(v) satisfy some constraints described below, but express a potential ambiguity in deciding which dyonic line defects should be considered “bare” 't Hooft lines with no electric charge: a change in conventions would redefine f_λ(v) by a power of v. Powers of would similarly represent an ambiguity in defining rotation generators in the presence of the defect. The definition of U_λ has the same structure, so that the auxiliary condition U_λ |II⟩ = U_λ |II⟩ reduces to f_λ(^-λζ) = f_λ(^λζ) which can be satisfied by using the same monomials and adjusting the power of . On the other hand, when we check the adjointness properties we will compare U_λ^† and U_-λ. The former contains factors involving, say, ( v^†)^w= v^-w for positive λ· w, as tilde variables shift fugacities in the opposite manner. The latter contains factors involving v^w for negative -λ· w. This is not a problem, as (1+x^a v^b) = x^a v^b(1+x^-a v^-b), but the comparison generates an extra monomial for each factor; these monomials ultimately feed into the non-trivial definition of ρ. The Abelianized formulae for D_a are often described in the literature directly in terms of shift operators analogous to the U_λ's, constrained by U_λ U_λ' = F_λ(v) F_λ'(^2 λ v)/F_λ+λ'(v)U_λ+λ' If λ· w and λ' · w are both positive, the resulting factors do not enter the ratio. If they are both negative, the resulting factors cancel out in the ratio. The ratio thus only contains contributions from λ· w>0 and λ' · w<0 or vice versa. In particular, the auxiliary algebra formed by U_λ and v can be opposite to the algebra formed by U_λ and v, even though the F_λ factors have a structure analogous to that of F_-λ rather than F_λ. If λ and λ' are in the same alcove, so that λ· w and λ' · w have the same sign for all possible w, then it is natural to impose f_λ f_λ' = f_λ + λ', compatible with a convention where products of 't Hooft lines with no electric charge give back a 't Hooft line with no electric charge. Another reasonably natural requirement is to have Weyl-invariant expressions. We will see in examples that it may be useful to relax the latter requirement slightly in order to avoid unpleasant square roots of fugacities. We should also observe that multiplicative unitary transformations by factors such as ζ^w· B can be readily employed to redefine u's by powers of v's and thus f_λ(v)'s by some monomial to the power of λ, allowing for some irreducible freedom in choosing the f_λ's. §.§ Changing N We can briefly discuss the dependence of this construction on the choice of N. We need a bit of notation to compare different ways to split the matter contributions into two halves: II_B(ζ;N) = δ_B,0∏_i (^2)_∞∏_α (^2ζ^α;^2)/∏^N_w,w_f (-ζ^w μ^w_f;^2) We should also distinguish the representations D_a(N) and D_a(N) suitable for this choice and the corresponding isometry π_N. We then define a collection of “reflection” unitary transformations on ^aux_ which acts as multiplication by R_N,N'≡∏^N'_w,w_f (-ζ^w μ^w_f;^2)/∏^N_w,w_f (-ζ^w μ^w_f;^2) As each factor is either shared between numerator and denominator or appears with opposite fugacities in numerator and denominator, this is manifestly a phase.
It satisfies II_B(ζ;N) = R_N,N' II_B(ζ;N') but also intertwines the corresponding representations of the D_a's and D_a's as difference operators and thus the isometries π_N and π_N'. §.§ Wilson line spectral decomposition There is another, powerful perspective on this isometry. The Wilson lines w_R and w_R are a collection of commuting normal operators acting on _. The images χ_R(v) and χ_R( v) act diagonally on ^aux_, with one-dimensional distributional eigenspaces labelled by points in T×Λ/W_G , where T is the Cartan torus and Λ the lattice of magnetic weights, and common eigenvalues χ_R(^-Bζ) χ_R(^Bζ) We can attempt a direct diagonalization of the action of Wilson lines on . This is possible because we have a lot of information on the products of Wilson lines with more general line defects. We can start from the ring of Wilson lines, reproducing the representation ring: w_R w_R' = w_R⊗ R' The spectrum of this ring is the complexified Cartan torus modulo Weyl and we can easily write infinite formal linear combinations |0;ζ⟩ of |w_R⟩'s such that w_R |0;ζ⟩ = w_R |0;ζ⟩ = χ_R(ζ) |0;ζ⟩ Hermiticity imposes |ζ|=1. It is easy to see that the |0;ζ'⟩ states are delta-function normalizable: they literally map to multiples of δ-function distributions in ^aux,W_G_ supported at B=0 and ζ =ζ' and Weyl images of that. More generally, if we label line defects D_m,e by a magnetic weight m and an electric weight e, we have w_R D_m,e = ∑^R_λ^- m ·λ D_m,e+λ + ⋯ D_m,ew_R = ∑^R_λ^m ·λ D_m,e+λ + ⋯ where the sum is over weights in R and the ellipsis denote terms with smaller magnetic charge. We can use a triangularity argument to recursively build states |m;ζ⟩ as linear combinations of |D_m,e⟩, corrected by terms of lower magnetic charge, which are formal eigenvectors of w_R with eigenvalues χ_R(^-mζ). The triangularity of the relation between |D_m,e⟩ and |m;ζ⟩ strongly suggests that these states exhaust the spectrum and that the isometry _→^aux,W_G_ is really an isomorphism and gives the spectral decomposition of into one-dimensional distributional eigenspaces of the Wilson lines. §.§ Schur quantization and gauging We will now extend and generalize further the spectral decomposition statement. Consider now a generic theory with global symmetry G and a theory /G obtained by gauging G. A general feature of Coulomb branches is that line defects of are inherited by /G, except that Weyl-invariant combinations of flavour parameters for the G symmetry are promoted to the corresponding G Wilson lines. In order to express this fact, denote as A_[,G] the result of promoting the Weyl-invariant combinations of flavour parameters in A_[] to central elements. Then we have an algebra embedding A_[,G] → A_[/G]. We can also promote the Schur trace _ on A_[] to a family of traces ^μ_,G on A_[,G] which just maps the central elements back to specific values μ. Then the trace of inherited operators is simply _/G a = ∮dζ/2 π i ζΔ(ζ;) ^ζ_,G a a ∈ A_[,G] Here we denoted for brevity the full vectormultiplet contribution Δ(ζ;) ≡ (^2;^2)_∞^2 rk_GΔ(ζ) ∏_α (^2ζ^α;^2)^2 We will now attempt to give a general characterization of the Schur quantization for /G in terms of the Schur quantization of . We begin with the case of Abelian G. If G is Abelian, general operators in A_[/G] will carry quantized magnetic charge m so that they lie in A_[,G] if m=0 and in some Harish-Chandra-like bimodules M^(m)_[,G] otherwise. 
The G Wilson lines of charge e are multiplied by appropriate powers ^-2 m · e when brought across an operator of given magnetic charge. The trace will vanish unless the total magnetic charge vanishes and magnetic charge is additive under multiplication. If a has magnetic charge m and b has magnetic charge -m, we expect the comparison between a b and ρ^2 (b)a to require a contour integral shift of ζ→ζ^2m. When checking positivity for a of magnetic charge m, we expect that contour integral for the inner product can be shifted to an intermediate contour ζ→ζ^m so that _/Gρ(a) a = ∮∏_i (^2;^2)_∞^2 dζ_i/2 π i ζ_i^^-mζ_,Gρ(a) a = ∮(^2;^2)_∞^2 dζ_i/2 π i ζ_i^^mζ_,G a ρ^-1(a) has a positive integrand. Accordingly, we expect a positive-definite inner product ^^-Bμ_,Gρ(a) a = ⟨ a|b⟩_μ,B on M^(m)_[,G], leading to the definition of Hilbert spaces _[;G]_μ,B via L^2 completion.[As in the case of sphere quantization, a less hand-waving demonstration of positivity can likely be given by identifying elements of M^(m)_[,G] as K-theory classes of line defects which end a “vortex” surface defect and the inner product as a Schur correlation function decorated by the vortex defect.] In practice, we have re-written the inner product in _[/G] as a direct sum/integral ⟨ a|b⟩ = ∑_B ∮∏_i (^2;^2)_∞^2 dζ_i/2 π i ζ_i⟨ a^(B)|b^(B)⟩_ζ,B where the superscript denotes the part of magnetic charge B. This gives an explicit spectral decomposition of _[/G] in eigenspaces of G Wilson lines: _[/G] = ∮^⊕_(S^1 ×)^rk_G∏_i (^2;^2)_∞^2 dζ_i/2 π i ζ_i_[;G]_ζ,B and predicts again that the Wilson line spectrum should be supported on the sequence of circles (S^1 ×)^rk G, with w_R = χ_R(^Bμ) and w_R = χ_R(^-Bμ). If G is not Abelian, we still expect an Abelianized presentation of A_[/G] to be available, where operators are written as difference operators in v whose coefficients are some sort of meromorphic elements in M^(m)_[,H], with H being the Cartan subgroup of G. We also expect a spectral decomposition of _(/G) under the action of G Wilson lines, with a spectrum supported on T×Λ/W_G and eigenspaces built from states formally associated to Weyl-invariant combinations of u^m v^e Abelian operators. The contour integral computing _/G from ^μ_,G should be identified with the spectral decomposition of the inner product as a direct sum/integral of inner products in individual eigenspaces. §.§ Dualities and spectral problems Supersymmetric gauge theories often enjoy dualities, relating the same or different theories at different values of the couplings. Dualities typically reorganize the line defects of the theory, resulting in non-trivial algebra morphisms between the associated A_ algebras. The Schur index is independent of couplings and thus the identifications extend to identifications between traces and associated Hilbert spaces _. In particular, the Wilson lines of one theory will map to some collection of non-trivial commuting difference operators in the dual theory. Integrable systems which arise in such manner include the relativistic open Toda chain and the trigonometric quantum Ruijsenaars-Schneider model. As we know the spectrum of Wilson lines in one description, we immediately gain a prediction for the joint spectrum of the dual collection of commuting difference operators, thus completely solving the spectral problem for these complex quantum integrable systems. §.§ Boundary conditions and states The definition of Schur index and Schur correlators can be extended to a situation where an half-BPS boundary is present. 
The Schur “half-index” counts BPS boundary local operators and is associated to an S^1_^2× HS^3 partition function, where HS^3 is an hemisphere. Half-BPS line defects can be added at points on a half-great circle in HS^3 intersecting the boundary S^2 at the poles. Quarter-BPS boundary line defects can also be added at the poles of the boundary S^2. The boundary line defects for a given choice of boundary give a left module M_ and a right module M_ for A_. A Schur correlation function II_ m_0 a_1 ⋯ a_n m_n+1 will depend on a sequence of wrapped lines of the form m_0 a_1 ⋯ a_n m_n+1 which is consistent with the algebra and module operations. In other words, it gives some linear map M_⊗_A_ M_→[[]]. By definition, the II_ m a m correlation function gives a collection of distributional states ⟨ m; m| in A^∨_ such that the correlation function equals ⟨ m; m|a ⟩. Also by definition, ⟨ b m; m c|a ⟩ = ⟨ m; m|cab ⟩ = ⟨ m; m|c b|a ⟩ so this defines a collection of distributional “boundary states”, a map M_⊗ M_→ A^∨_ which commutes appropriately with the A_× A^_ action. In a Lagrangian gauge theory with a Lagrangian boundary condition, these Schur half-indices can be readily computed. For example, for Neumann boundary conditions half of the integrand of the usual Schur index is replaced by the 3d superconformal index of the boundary degrees of freedom. For theories of class S, interesting boundaries and interfaces can be associated to certain three-manifolds M_3 with boundary C, possibly decorated by skeins reproducing M_ as Skein modules Sk_(M_3) <cit.>. The above collection of states has the properties expected from the path integral of complex Chern-Simons theory on M_3, decorated with appropriate skeins m and m of holomorphic and anti-holomorphic Wilson lines. §.§ 3d limits The 3d Coulomb branch algebra A_ for 4d Lagrangian gauge theories is a “trigonometric” version of the Coulomb branch for 3d N=4 gauge theories with the same gauge group and matter content. In practice, the difference operators which represent the Coulomb branch of the 3d theory can be obtained by a specific → -1 limit from those for the 4d theory. If we write = - e^- π R , v_a= e^-2 π R V_a and take an R → 0 limit at constant V_a, factors such as (1- (-)^n v_a) become 2 π R (V_a + n/2) and the difference operators which would multiply v_a by ^n effectively shift V_a by n/2. The “trigonometric” D_a difference operators are thus mapped to “rational” versions D^3d_a. In the BFN language, this is the limit taking equivariant K-theory classes to equivariant cohomology classes. These define the quantized Coulomb branch algebra A^3d_ħ = 2 π R for the 3d theory. In computing the R → 0 limit of the Schur index, it is important to observe that the integrand can be expressed as a ratio of products of θ functions θ_4(iR z,τ = i R) or θ_1(i R z,τ = i R) and η(τ = i R), where ζ = e^-2 π R z is a product of ζ_a and μ's. These functions behave well under modular transformations τ→ - τ^-1, z → z τ^-1, so that the integrand can be re-written in terms of θ_2(z,i R^-1) or θ_1(z,i R^-1) and η(i R^-1). These functions, in turn, have a simple R → 0 behaviour at finite z: up to an overall 2^m exp (2 π n/R) prefactor which we can drop, they go to cosπ z or sinπ z and 1. These are the building blocks for a “Coulomb branch” protected sphere correlation function of the 3d theory.
As long as the Lagrangian gauge theory has “enough matter”, so that the 3d limit is not “bad” in the sense of <cit.>, the integrand is exponentially small along the |ζ|=1 integration contour outside the range of finite z, so that the Schur correlation functions limit to the protected sphere correlation function of the 3d theory, which provide a positive twisted trace on A^3d_2 π R. The positive trace on A^3d_2 π R can be used to define a “Coulomb branch” sphere quantization associated to the 3d theory, with an Hilbert space ^3d_2 π R defined as the closure of A^3d_2 π R under the inner product given by the trace <cit.>. We conclude that the → 0 limit in this situation maps the Schur quantization to Coulomb branch sphere quantization, in such a way that the spherical vector and the |a⟩ dense basis go to the corresponding dense basis of ^3d_2 π R. It is often the case that a 3d N=4 gauge theory admits a “mirror” description, with the Coulomb branch mapped to the “Higgs branch” of the mirror theory. Correspondingly, the Coulomb branch sphere quantization associated to the original theory maps to an “Higgs branch” sphere quantization in the mirror theory, which can be described geometrically. The Coulomb and Higgs presentations of the algebra A^3d_2 π R and the Hilbert space ^3d_2 π R are typically very different. In particular, the Higgs branch presentation can take a geometric form, with an algebra of holomorphic differential operators acting on L^2-normalizable half-densities on some auxiliary space. Among the examples discussed in the next section, the cases of SQED_1, SQED_2 and SU(2) with N_f=4 are particularly instructive in a 3d limit: * The SQED_1 sphere quantization leads to a Weyl algebra A^3d_2 π R acting as holomorphic polynomial differential operators on L^2(). The spherical vector becomes a Gaussian wavefunction e^-|x|^2. * The SQED_2 sphere quantization leads to an algebra A^3d_2 π R which is the central quotient B_m of U(𝔰𝔩_2), with quadratic Casimir -1/4(1+m^2). The Hilbert space gives the corresponding irreducible spherical principal series representation of SL(2, ), possibly realized as L^2( P^1,|K|^1+ i m). The “spherical vector” is the unique SU(2)-invariant wavefunction on P^1. (1+|x|^2)^-2-2 i m We will momentarily employ the SL(2, )-twisted spherical vector ψ_m(x;a b c d) ≡ (d + c x + b x̅ + a |x|^2)^-2-2 i m depending on a point a b c d∈SL(2, )/SU(2)≡ H_3^+. * The case of SU(2) with N_f=4 is particularly rich. The algebra is the SL(2) quantum Hamiltonian reduction of the product B_m_1+ m_2× B_m_1- m_2× B_m_3+ m_4× B_m_3- m_4 with elementary generators identified with products J_i · J_j of 𝔰𝔩_2 generators from different factors. Correspondingly, Hilbert space consists of twisted half-densities on the moduli space of four points on P^1 modulo SL(2, ). The spherical vector is given as an average over H^+_3: |1⟩_3d = ∫_H_3^+ dVol_h ψ_m_1+m_2(x_1;h) ψ_m_1-m_2(x_2;h) ψ_m_3+m_4(x_3;h) ψ_m_3-m_4(x_4;h) An analogous formula holds for all theories of class A_1 associated to a sphere with regular punctures. Crucially, this coincides with the large s “minisuperspace approximation” of the WZW partition function <cit.>, which is the candidate spherical vector |1⟩_Hol. This verifies our conjectural identification of Schur and Holomorphic quantizations in the s →∞ limit. §.§ Surface defects and alternative twists. 
The Schur correlation functions could be further modified by the insertion of surface defects along a circle which links the great circle where the line defects are supported. In the HT twist picture, these would wrap the holomorphic plane at the origin of the topological plane. Localization formulae in gauge theories are modified in a minimal way by the insertion of the elliptic genus Θ(ζ;^2) of the extra 2d dof. If the 2d dof are compact, such as a collection of charged fermions, Θ(ζ;^2) will not have poles as a function of ζ. It will also be quasi-periodic under shifts ζ→^2 ζ. Such a surface defect insertion will thus almost preserve the trace condition but modify ρ^2 to some other (ρ')^2 in a manner similar to what extra 4d matter fields would accomplish. There is no obvious reason for a surface defect insertion to preserve positivity. A necessary condition is likely that Θ(ζ;^2) is positive on the unit circle. We do not know a sufficient condition, even in physical terms. In concrete Abelian examples, positivity can be proven rigorously for certain choices of ρ' and Θ(ζ;^2) <cit.>. We will not explore the matter in depth here, but it will appear in some examples and in a comparison to the literature on complex quantum groups. § EXAMPLES OF SCHUR QUANTIZATION This section contains several examples of K-theoretic Coulomb branch algebras and Schur quantization. Further examples can be found in the Appendices. The first sequence of examples have U(1) gauge group and a variable number of charged hypermultiplets. They illustrate the role of the quantum torus algebra and the effect of matter on ρ. The last example, SQED_2, has the remarkable property that A_ = U_(𝔰𝔩_2) and thus will provide us with an interesting family of spherical unitary representations of a real form of the *-algebra double 𝔇_ S≡ U_(𝔰𝔩_2) × U_(𝔰𝔩_2)^ . defined via a specific choice of ρ. This theory (and analogues for other Lie algebras) helps explain the well-known appearance of quantum groups in the quantization of character varieties and Chern-Simons theories. We will discuss the quantum groups relevant for complex quantization here and in Section <ref>. The second sequence of examples have SU(2) gauge group. It includes class theories associated to the four-punctured sphere and one-punctured torus, which are the crucial examples in the quantization of character varieties. See also Section <ref>. We also discuss gauging some extra U(1) symmetries to give interesting quantum group representations. §.§ Example: Pure U(1) Gauge Theory This is a somewhat trivial example, but it introduces the quantum torus algebra Q_, which is a building block for all UV and IR constructions. All fields are gauge-neutral, so the Schur index is just I() = (^2)^2_∞. The K-theoretic Coulomb branch of the theory is ^* ×^*, parameterized by the classical vevs u and v of BPS 't Hooft and Wilson line defects. The complex symplectic form is d log u ∧ d log v. The imaginary part of the complex symplectic form d log u ∧ d log v- d logu̅∧ d logv̅ = d ( log |u|^2 d logv/|v| - log |v|^2 d logu/|u|) . presents ^* ×^* as the cotangent bundle T^*T^2. Our circle of ideas is completed by identifying = (GL(1),T^2) as the space of ^* flat connections on a two-torus C=T^2, aka the phase space of complex Chern-Simons theory with gauge group ^* compactified on C=T^2. This identification also matches the Lagrangian submanifold _c(GL(1),T^2) of flat U(1) connections with the base |u|=|v|=1 of T^*T^2. 
The natural quantization of is the space _ = L^2(T^2) of L^2-normalizable wavefunctions on T^2, with log |u|^2 and log |v|^2 acting as derivatives and _c(U(1),T^2) quantized as the constant wavefunction on T^2. Schur quantization will give an equivalent answer in a Fourier-transformed presentation _ = L^2(^2). Indeed, K-theory classes x_m,e≡ [L_m,e] of BPS 't Hooft-Wilson line defects in the theory are labelled by an electric charge e and a magnetic charge m, both integral. The resulting algebra A_ = Q_[^2] is the quantum torus algebra x_a,b x_c,d= ^a d - b c x_a+c,b+d We can also introduce generators u = x_1,0 v = x_0,1 which satisfy u v = ^2 v u , and x_a,b= ^- a b u^a v^b = ^a b v^b u^a Following our prescription, we get x_a,b=δ_a,0 (^2)^2_∞∮d ζ/2 π i ζζ^b = (^2)^2_∞δ_a,0δ_b,0 with ρ(x_a,b) = x_-a,-b and thus ρ^2=1. The corresponding inner product becomes ⟨ a,b|c,d⟩ =(^2)^2_∞δ_a,cδ_b,d We thus recognize _ = L^2(^2) with a constant measure (^2)^2_∞. The dense image of Q_ in _ consists of compactly-supported wavefunctions in L^2(^2). In particular, the spherical vector |1⟩ = |0,0⟩ is supported at the origin. The unitary action of the *-algebra double [Q_,ρ] is written explicitly as x_a,b |m,e⟩ = ^a e- b m |m+a,e+b⟩ x_a,b |m,e⟩ = ^-a e+ b m |m+a,e+b⟩ via normal operators which satisfy x_a,b = x_-a,-b^†. In particular, u |m,e⟩ = x_1,0 |m,e⟩ = ^e |m+1,e⟩ v |m,e⟩ = x_0,1 |m,e⟩ = ^-m |m,e+1⟩ u |m,e⟩ = x_1,0 |m,e⟩ = ^-e |m+1,e⟩ v |m,e⟩ = x_0,1 |m,e⟩ = ^m |m,e+1⟩ In this basis, the spherical condition is clearly solved by |1⟩ only. A full Fourier transform L^2(^2) → L^2(T^2) reproduces the natural quantization of T^* T^2 and maps the spherical vector to the constant wave-function on T^2. It is perhaps useful to point out that the natural domain of definition of u and v becomes more subtle in that description and involves functions on T^2 which can be analytically continued to a certain domain in ^* ×^*. Electric-magnetic duality is an important symmetry of Abelian gauge theories. Here it acts as an SL(2,) transformation on the (m,e) charge vector of the BPS line defects. It is a manifest symmetry of the quantum torus algebra and acts on _ = L^2(^2) unitarily. Via Fourier transform, it is mapped to a unitary mapping-class group action on T^2. It preserves the spherical vector. Indeed, the spherical vector is the only SL(2,)-invariant normalizable state. Other basis vectors belong to orbits labelled by the mcd of (m,e). It can also be useful to do a partial Fourier-transform L^2(^2) → L^2(S^1 ×), mapping states to wavefunctions ψ_B(ζ): |m,e⟩→ζ^e δ_B,m The spherical vector now maps to a wave-function δ_B,0 and A_ to wave-functions which are compactly supported on and Laurent polynomials on S^1. The elementary operators act as u ψ_B(ζ) =ψ_B-1(ζ) v ψ_B(ζ) =^-Bζψ_B(ζ) u ψ_B(ζ) = ψ_B-1(^-1ζ) v ψ_B(ζ) = ^Bζψ_B(ζ) This representation of Q_ via difference operators and multi-variable generalizations thereof are the basic building blocks of many constructions below. §.§.§ Spaces of positive traces It is also instructive to characterize the trace algebraically. In the absence of twist, i.e. ρ^2=1, the trace condition x_a,b x_c,d= ^2a d - 2b c x_c,d x_a,b =^2a d - 2b c x_a,b x_c,d immediately implies that x_a,b≃δ_a,0δ_b,0, so the trace is essentially unique and it happens to be positive if the overall coefficient is positive. It is instructive to see what happens if we modify the choice of ρ. For example, consider ρ(x_a,b) = x_-a, -n a -b for some integer n, so that ρ^2(x_a,b) = x_a, b+ 2 n a. 
Then it is easy to see that x_a,b≃δ_a,0 t_b for some t_b. Furthermore, t_b = ^-b' x_1,b x_-1,0 = ^-b' x_-1,- 2 n x_1,b = ^2 n-2b t_b-2 n . This is solved by t_b = t'_b ^-b^2/2n where t'_b = t'_b-2n. Notice that the behaviour of the coefficients for large b is sharply different in the n>0 and n<0 cases. The corresponding inner product is ' ρ(x_a,b) x_c,d =δ_a,c' x_-a, -n a -b x_c,d = δ_a,c^n a^2/2-(d-b)^2/2nt'_d-b-n a We can restrict our attention to ' ρ(x_0,2 n r) x_0,2 n s =^-2n (s-r)^2t'_0 Computing some determinants of sub-matrices easily show that this inner product fails to be positive definite if ^-2n≥ 1, i.e. n>0. For n<0, we can Fourier-transform the answer to write the inner product as an integral involving a theta function: x_a,b=δ_a,0 (^2)^2_∞∮d ζ/2 π i ζζ^b Θ(ζ;) and express the positive-definiteness condition in terms of the location of the zeroes of Θ(ζ;) <cit.>, with families of solutions. This integral expression can be given an interpretation in terms of a Schur index decorated by a surface defect. We will not pursue this point further in this example, but it illustrates how the standard Schur trace is a unique edge case in the space of positive twisted traces. §.§ Example: SQED_1. This example illustrates how matter fields modify the properties of wrapped 't Hooft lines. The contribution to the Schur index of a single hypermultiplet is I_hyper(ζ;) = 1/(-ζ;^2)_∞ (-ζ^-1;^2)_∞ = 1/∏_n=0^∞ (1+ ^2n+1ζ)(1+ ^2n+1ζ^-1) The Schur index itself evaluates to I_ = ∮_|ζ|=1dζ/2 π i ζ (^2)^2_∞ I_hyper(ζ;) = 1 - ^2 + ^6 - ^12 + ^20-^30 + ⋯ i.e. [This formula and the one below is related to bosonization of a βγ system.] I_ = ∑_n=0^∞ (-1)^n ^n(n+1) We find the expectation value of a single Wilson line w_k of charge k by inserting ζ^k in the integral: w_k = ∑_n=|k|^∞ (-1)^n ^n(n+1)-k^2 = (-)^|k|∑_n=0^∞ (-1)^n ^n(n+2|k| +1) We anticipate that ρ maps Wilson lines to Wilson lines of the opposite charge. The matrix ρ(w_i) w_j = w_j-i is positive definite by construction, as it controls integrals of the form I_ = ∮_|ζ|=1dζ/2 π i ζ |f(ζ)|^2 (^2)^2_∞ I_hyper(ζ;) where f(ζ) is a Laurent polynomial in ζ and the integration measure is manifestly positive. §.§.§ The algebra A_[SQED_1]. In order to describe the insertion of 't Hooft defects, we need an explicit description of the K-theoretic Coulomb branch algebra A_. We denote as u_± = [L_± 1,0] the K-theory classes of elementary 't Hooft operators of magnetic charge ± 1 and as v=[L_0,1] the K-theory class of an elementary Wilson line with electric charge 1. Then w_n = [L_0,n] = v^n and we have relations: u_± v = ^± 2 v u_± u_+ u_- = 1 + v u_- u_+ = 1 + ^-1 v We will also use the following relations, which follow from a repeated application of the basic ones: u^k_+ u^k_- = (1 + ^2k-1 v) ⋯ (1 + v) u^k_- u^k_+ = (1 + ^-2k+1 v)⋯ 1 + ^-1 v These relations are enough to reduce any polynomial in u_± and v^± 1 to a -dependent linear combination of D_a,b≡^- a b u_+ ^a v^b D_-a,b≡^a b u_-^a v^b a ≥ 0 . We identify these with K-theory classes of generic 't Hooft-Wilson lines L_a,b, giving a linear basis for A_. We will describe ρ momentarily. §.§.§ The norm of 't Hooft operators The trace defined by the Schur index is only non-vanishing if the total magnetic charge vanishes. We can compute D_a,b D_-a,c = ^a c+ a b (1 + ^2a-1 v) ⋯ (1 + v) v^b +c D_-a,c D_a,b = ^-a c- a b (1 + ^-2a+1 v) ⋯ (1 + ^-1 v) v^b +c =^-a c- a b- a^2 (1 + ^2a-1 v^-1) ⋯ (1 + v^-1) v^b +c+a , for a≥ 0. 
When we insert these expressions in the trace, these factors cancel factors in the denominator, and allow a shift the integration contours by a factor of ^± a <cit.> to D_a,b D_-a,c = ∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞/∏_n=0^∞ (1+ ^2n+a+1ζ)(1+ ^2n+a+1ζ^-1)ζ^b +c D_-a,c D_a,b = ∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞/∏_n=0^∞ (1+ ^2n+a+1ζ)(1+ ^2n+a+1ζ^-1)ζ^b +c+a These formulae are fully compatible with positivity if we take ρ(D_-a,-b) = D_a,b ρ(D_a,b) = D_-a,-a-b a ≥ 0 . Then ρ^2(D_a,b) = D_a,a+b , as expected from the U(1)_r anomaly and Witten effect. The choice of ρ^2 is also compatible with cyclicity: D_a,b D_-a,c = ρ^2(D_-a,c) D_a,b = D_-a,-a+c D_a,b In a situation like this, where ρ^2 is not the identity, we cannot interprete the spherical vector as the quantization of an actual Lagrangian submanifold of phase space: the classical constraints u_± = u_±, v = v and the reality conditions v = v̅^-1 u_+ = v̅^-1u̅_- u_- = u̅_+ do not define a Lagrangian sub-manifold of .[This is not uncommon: for example, the state e^- x^2/ħ in quantum mechanics on the real line satisfies complexified equations p = i x which do not define an actual Lagrangian submanifold of phase space.] This theory has a (somewhat subtle) class description where C is a plane with an irregular singularity of rank 2 at infinity. The non-trivial action of ρ has a specific geometric meaning in that context, rotating the Stokes sectors at the irregular singularity by one step. §.§.§ Two useful isometries Presenting _ as the closure of A_ is a bit cumbersome, as the natural linear basis in A_ is not orthogonal under the inner product. The integral expressions above and our general discussion suggest defining first an isometry A_→ L^2(× S^1) by |D_a,b⟩ = δ_B,aζ^b (^2)_∞/∏_n=0^∞ (1+ ^2n+|a|+1ζ) These vectors are related by an invertible triangular change of basis to the orthogonal basis δ_B,aζ^b in L^2(× S^1) and thus should give an identification of _ with L^2(× S^1). This isometry maps the spherical vector to |1⟩ = δ_B,0(^2)_∞/∏_n=0^∞ (1+ ^2n+1ζ) We can now introduce the same operators u,v and u, v acting on (a dense domain in) L^2(× S^1) which we introduced in pure U(1) gauge theory. It is easy to see that the isometry intertwines the action of “v” in _ and L^2(× S^1). We would like to relate the actions of u_± in _ and u^± 1 in L^2(× S^1). This is straightforward. If a >0, we have u |D_a,b⟩ = δ_B,a+1^b ζ^b (^2)_∞/∏_n=0^∞ (1+ ^2n+a+2ζ) = ^b |D_a+1,b⟩ u |D_-a,b⟩ = δ_B,-a+1^b ζ^b (^2)_∞/∏_n=0^∞ (1+ ^2n+a+2ζ) = ^b (1+ v) |D_-a+1,b⟩ i.e. u |D_a,b⟩ = |u_+ D_a,b⟩ , for all a and b. On the other hand, |u_- D_a,b⟩ = (1+ ^-1 v) u^-1 |D_a,b⟩ , Similarly, |D_a,bu_- ⟩ = u^-1 |D_a,b⟩ |D_a,bu_+ ⟩ =(1+ ^-1 v) u |D_a,b⟩ We have thus mapped the action of A_⊗ A_^ on to an action via difference operators on L^2(× S^1): u_+ = u u_- = (1+ ^-1 v) u^-1 u_+ = (1+ ^-1 v) u u_- = u^-1 These are two natural Abelianized BFN presentations of the K-theoretic Coulomb branch algebra. The natural domain of definition of these operators is the space of finite linear combinations of the |D_a,b⟩. It would be interesting to compare this with natural choices of domain which could arise in a direct attempt at quantizing . Observe that these expressions can be interpreted as a morphism of *-algebras _→𝔔_≡[Q_,ρ] composed with the unitary action of 𝔔_ on L^2(× S^1). Perhaps confusingly, this is expressed as two distinct algebra morphisms A_→ Q_ and A^_→ Q^_. This is essentially unavoidable. This construction is a simple example of the IR formalism discussed in our companion paper <cit.>. 
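As a numerical sanity check on this measure (my own spot check, with an arbitrary sample value of q, an arbitrary number of quadrature points, and truncated infinite products), the SQED_1 Schur index quoted earlier in this subsection can be evaluated directly as an integral over the unit circle and compared with the closed-form q-series.

```python
# A rough numerical sketch (not code from the paper): evaluate the SQED_1 index
#   I = \oint dz/(2 pi i z) (q^2;q^2)_inf^2 / [ (-q z; q^2)_inf (-q z^{-1}; q^2)_inf ]
# over |z|=1 and compare with the closed form  sum_n (-1)^n q^{n(n+1)}.
import numpy as np

q = 0.3                                     # arbitrary sample value
theta = 2 * np.pi * np.arange(4096) / 4096
z = np.exp(1j * theta)

def qpoch(x, Q, nmax=200):
    """Truncated q-Pochhammer symbol (x; Q)_infinity."""
    x = np.asarray(x, dtype=complex)
    out = np.ones_like(x)
    for k in range(nmax):
        out = out * (1 - x * Q**k)
    return out

Q = q**2
integrand = qpoch(Q, Q)**2 / (qpoch(-q * z, Q) * qpoch(-q / z, Q))
index = np.mean(integrand).real             # dz/(2 pi i z) --> average over the circle
closed = sum((-1)**n * q**(n * (n + 1)) for n in range(80))
assert abs(index - closed) < 1e-8
print(index, closed)
```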
Here we discussed one of two natural isometries → L^2(× S^1). There is a second isometry given by |D_a,b;-⟩= δ_B,aζ^b+max(a,0)(^2)_∞/∏_n=0^∞ (1+ ^2n+|a|+1ζ^-1) which instead satisfies u^-1 |D_a,b;-⟩ = |u_- D_a,b;- ⟩ (1+ v) u |D_a,b;-⟩ = |u_+ D_a,b;- ⟩ (1+^-1 v^-1) u^-1|D_a,b;-⟩ = |D_a,bu_- ;- ⟩^-1 v u |D_a,b;-⟩ = |D_a,bu_+ ;- ⟩ . The manifestly unitary transformation on L^2(× S^1) defined by the complex quantum dilogarithm multiplication kernel Φ_B(ζ) = ζ^max(B,0)∏_n=0^∞1+ ^2n+|B|+1ζ/1+ ^2n+|B|+1ζ^-1 = ∏_n=0^∞1+ ^2n-B+1ζ/1+ ^2n-B+1ζ^-1 intertwines the two isometries. §.§.§ Other positive traces There is a general theory of positive traces for Abelian K-theoretic Coulomb branch algebras. Consider a modification of the integral formula for the Schur correlation function where we insert a theta function Θ[ζ;] in the measure. This modification changes sightly the behaviour of Hermitean conjugation on shift operators and thus gives rise to a new automorphism ρ' and (ρ')^2. In particular, this gives (ρ')^2(D_a,b) = λ^a D_a,b- n a , for non-negative integer n and appropriate constant λ. For example, the insertion of θ(μζ;^2) = ∏_n=0^∞ (1+ ^2n+1μζ)(1+ ^2n+1μ^-1ζ^-1) gives a trace with n=0 and non-trivial λ. This extra measure factor is positive either for |μ|=1 or for real μ. Identifying a range of values which gives a positive trace requires more work. §.§.§ The q-deformed Weyl algebra and q-deformed metaplectic representation. The quantized Coulomb branch algebra for the 3d version of SQED_1 is the Weyl algebra. This is a key example of 3d mirror symmetry. Sphere quantization presents L^2() as a spherical representation for a *-algebra double of the Weyl algebra <cit.>. The Weyl algebra contains a specific central quotient of U(𝔰𝔩_2) as the sub-algebra fixed by a reflection of the generators. Sphere quantization thus also provides a spherical unitary representation of a *-algebra double [U(𝔰𝔩_2),ρ]≡ U(𝔰𝔩(2,)_), where ρ reflects the generators. This coincides with the representation-theoretic notion of a spherical unitary representation of 𝔰𝔩(2,), which contains a cyclic vector which is invariant under the compact SU(2) subgroup of SL(2,). All of these properties persist in a q-deformed manner in the Schur quantization of SQED_1, with q = ^2. The algebra A_ can be interpreted as a q-deformed version W_ of the Weyl algebra, albeit with an extra property usually not included in the definition. Indeed, u_± satisfy a q-deformed commutation relation: ^-1 u_+ u_- - u_- u_+ = ^-1 - and v can be reconstructed from the combination u_+ u_- -1: ^-1 u_+(u_+ u_- - 1) = u_+( u_- u_+ - )= (u_+ u_- - 1) u_+ The existence of an inverse v^-1 appears to extend the naive definition of W_ in a natural manner. For example, a typical representation of the q-deformed commutation relations involves the (Jackson) q-derivative: ∂_^2 f(x)≡f(^2 x) - f(x)/^2 x - x i.e. u_+ = (1-^2)∂_^2, u_-=x, and gives v f(x) = f(^2 x) which is invertible. The automorphism ρ(v) = v^-1 ρ(u_-) = u_+ ρ(u_+) = v^-1 u_- defining the *-algebra double [W_,ρ] explicitly uses v^-1. The action on _ is a q-deformation of the representation on L^2(). We can even find a q-deformation of the metaplectic representation: W_ includes U_q^2(sl_2) generators: E =u_- v^-1 u_-/^-2 - ^2 K = v F = u_+^2/^2 - ^-2 with fixed Casimir element E F + ^-2K + ^2 K^-1/(^-2 - ^2)^2 = - + ^-1/(^-2 - ^2)^2 and ρ(E) = - ^2 K F ρ(K) = K^-1ρ(F) = - ^2 K^-1 E , which defines a quantum group *-algebra double [U_q^2(sl_2),ρ]. 
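The q-deformed commutation relation of the Weyl algebra discussed above can be verified directly in the Jackson-derivative realization. The sympy sketch below is my own check, acting on test monomials x^k; the only input beyond the realization u_+ = (1-q^2)∂_{q^2}, u_- = x quoted above is the explicit placement of the q-factors in the relation.

```python
# A minimal sympy sketch (my own check of the Jackson-derivative realization):
#   u_+ = (1-q^2) d_{q^2},  u_- = x   satisfy the q-deformed Weyl relation
#   q^{-1} u_+ u_-  -  q u_- u_+  =  q^{-1} - q .
import sympy as sp

q, x = sp.symbols('q x', positive=True)

def u_plus(f):
    # (1 - q^2) times the Jackson q^2-derivative
    return sp.cancel((1 - q**2) * (f.subs(x, q**2 * x) - f) / (q**2 * x - x))

def u_minus(f):
    return x * f

for k in range(6):                          # test on monomials x^k
    f = x**k
    lhs = u_plus(u_minus(f)) / q - q * u_minus(u_plus(f))
    assert sp.simplify(lhs - (1/q - q) * f) == 0
print("q-deformed Weyl relation verified on monomials")
```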
Here we encounter for the first time the “Schur” version of a quantum group U_q^2(𝔰𝔩(2,)_) to be associated to SL(2,) Chern-Simons theory. The conditions satisfied by the spherical vector |1⟩ can be re-written as ( ^2 E (K^†)^-1 + F^†) |1⟩ =0 K K^† |1⟩ =|1⟩( ^2 F (K^†)+ E^†) |1⟩ =0 . We are going to show in Section <ref> that the combinations of generators appearing in qgrp-sph can be interpreted as quantum deformations of the generators of the compact subgroup in a SL(2,) representation. The appearance of the quantum group in this example is somewhat exceptional. Next, we consider an example which is instead instrumental to understand the relation between [U_q(sl_2),ρ] and SL(2, ) Chern-Simons theory. Notice the different power of q in the deformation parameter! §.§ Example: SQED_2. General Abelian gauge theories work in a very similar way as SQED_1. The next simplest example, U(1) gauge theory with two flavours, will allow us to discuss an example with flavour. It also has a neat relation to the theory of representations of quantum groups. The Schur index is I_(μ) = ∮_|ζ|=1dζ/2 π i ζ (^2)^2_∞ I_hyper(μζ;)I_hyper(μ^-1ζ;) §.§.§ Algebraic structure The algebra A_ is now expressed in terms of w_n = v^n and two difference operators u_+ and u_-, acting as u_± v = ^± 2 v u_± , which also satisfy u_+ u_- = (1 + μ v)(1 + μ^-1 v) u_- u_+ = (1 + ^-1μ v)(1 + ^-1μ^-1 v) We see here a factor for each hypermultiplet. This is an example of a general formula valid for all Abelian gauge theories. The u_± generators represent elementary 't Hooft lines of charge ± 1. The full set of 't Hooft-Wilson lines can be written as D_a,b≡^- a b u_+ ^a v^b D_-a,b≡^a b u_-^a v^b a ≥ 0 . This gives a linear basis for A_. We also have ρ(D_-a,-b) = D_a,b and ρ(D_a,b) = D_-a,-2 a-b. §.§.§ Schur correlators and _. All formulae for the Schur correlation functions are obvious variations of these for SQED_1. E.g. D_a,b D_-a,c = ∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞/∏_n=0^∞ (1+ ^2n+a+1μ^±ζ^±)ζ^b +c D_-a,c- 2 a D_a,b = ∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞/∏_n=0^∞ (1+ ^2n+a+1μ^±ζ^±)ζ^b +c where the ± notation in the denominators indicates a product over four factors with all possible signs. We can also define an isometry A_→ L^2(× S^1) by |D_a,b⟩ = δ_a,Bζ^b (^2)_∞/∏_n=0^∞ (1+ ^2n+|a|+1μζ) (1+ ^2n+|a|+1μ^-1ζ) such that u_+ maps to u and u_- to u^-1. More explicitly, u_+ = u u_- = (1+ ^-1μ v)(1+ ^-1μ^-1 v) u^-1 u_+ = (1+ ^-1μ v) (1+ ^-1μ^-1 v) u u_- = u^-1 Again, the triangular form of |D_a,b⟩ indicates that they will be dense in L^2(× S^1), identifying this auxiliary Hilbert space with _. There are actually four natural isometries to L^2(× S^1), intertwined by Φ_B(μ^± 1ζ). In each isometry, denominator factors capture half of the contribution of one hypermultiplet to the full integrand in the Schur index. §.§ Relation to quantum groups. The SQED_2 theory has an exceptional feature: A_ coincides with the central quotient of U_q(𝔰𝔩_2), with quadratic Casimir controlled by μ. In order to make this explicit, observe [ v^-1 u_-,u_+] =(^-1- )(v- v^-1) , so that we could define, say, E = v^-1 u_-/^-1- K = v F =u_+/-^-1 to get the standard quantum group generators. The remaining relation sets the Casimir to be proportional to μ + μ^-1. We have again ρ(E) =- K F ρ(K) = K^-1ρ(F) =- K^-1 E so the *-algebra double is the central quotient of [U_q(sl_2),ρ]. We have obtained a spherical unitary representation of [U_q(sl_2),ρ] on a _ which will be identified in Section <ref> as a quantum deformation of the spherical principal series representation of SL(2,). 
This is to be expected, as the latter arises from sphere correlation functions of the 3d version of SQED_2 <cit.>. The spherical vector |1⟩ in _ is annihilated by certain combinations of the U_q(𝔰𝔩_2) and U^_q(𝔰𝔩_2) generators, cf. qgrp-sph. It will furthermore be shown in Section <ref> that the combinations of generators annihilating the spherical vector in qgrp-sph generate a quantum deformation of the Lie-algebra of the compact sub-group of SL(2,). This will be shown to imply an algebraic structure of the representation on _ akin to the structure of principal series representation of SL(2,) as direct sum of finite-dimensional representations of a compact SU(2) subgroup. We can do more. We can gauge a U(1) global symmetry acting on one of the two hypermultiplets, mapping the system to two copies of SQED_1. Accordingly, we map A_[SQED_2]→ A_[SQED_1]× A_[SQED_1], with μ mapping to a Wilson line: μ v = v_1 μ^-1 v = v_2 u_+ = u_+,1 u_+,2 u_- = u_-,1 u_-,2 We can diagonalize the Wilson line v_1 v_2 acting _[SQED_1]×_[SQED_1], say in an auxiliary description as L^2(× S^1) × L^2(× S^1). We have eigenvalues μ^2 ^-M: _[SQED_1]×_[SQED_1] = ∑_M ∈∫_μ∈ S^1_M,μ Each summand _M,μ can be identified with a copy of L^2(× S^1) equipped with the action u_+ = u u_- = (1+ ^-1^- M/2μ v)(1+ ^-1^M/2μ^-1 v) u^-1 u_+ = (1+ ^-1^M/2μ v) (1+ ^-1^- M/2μ^-1 v) u u_- = u^-1 . The decomposition specdec1 is expected to be a q-analogue of the decomposition of L^2(^2) into principal series representations of SL(2,), each appearing twice. Each _M,μ gives an unitary representation of [U_q(sl_2),ρ], with with Casimirs built from ^∓ M/2μ. In our companion paper <cit.> we will describe their braided monoidal structure in analogy to <cit.>. As for the case of quantum Teichmüller theory, this will allow us to use quantum groups to describe the braided monoidal category of line defects in complex Chern-Simons theory. This theory is an elementary building block in an important construction. Consider any theory which contains an SU(2) gauge group coupled to both SQED_2 and to some other degrees of freedom, described by a theory with SU(2) global symmetry. The K-theoretic Coulomb branch algebra Â_ for will then contain both A_ and U_q(𝔰𝔩_2), with the mass parameters in A_ promoted to SU(2) Wilson lines and identified with the center of U_q(𝔰𝔩_2). The automorphism ρ for will act as the standard ρ on both sub-algebras. Schur quantization will thus provide a simultaneous unitary representation of both [U_q(sl_2),ρ] and [A_,ρ]. In a class S context, will typically be associated to a Riemann surface with an irregular singularity of rank 1 and to the same Riemann surface with the irregular singularity replaced by a regular singularity. The U_(𝔰𝔩_2) generators quantize the Stokes data at the puncture and the Casimir generator quantizes the holonomy around the puncture <cit.>. The statement generalizes to other ADE groups, leading to analogous consequences for the representation theory of complex quantum groups.[More precisely, one expects the existence of a family of theories T_4d[] which can play the same role for U_q(). They are only known for 𝔰𝔩_n as 4d lifts of T[SU(n)]. See <cit.> for some details and more citations.] This leads to a variety of constructions which give a physical interpretation to the relation between quantum groups and the quantization of character varieties and Chern-Simons theory, see e.g. <cit.>. Schur quantization leads to analogous statements about [U_q(sl_2),ρ] and complex Chern-Simons theory. 
The Hilbert space _[T̂] will have a spectral decomposition into eigenspaces of Wilson lines for the new SU(2) gauge group. We expect the spectral decomposition to take the form _[T̂] = ∫_S^1 ×/_2_M,μ⊗^M,μ_[T] with _M,μ being the above principal series representations of [U_q(sl_2),ρ] and ^M,μ_[T] defining a larger class of representations for [,ρ]. §.§ Pure SU(2) gauge theory In the Appendices we discuss the example of a pure U(N) gauge theory. For SU(2) or PSU(2) gauge group, one encounter subtleties related to the choice of global form of the gauge group and of a collection of mutually local line defects. We will ignore these subtleties, at the price of square roots of entering formulae and occasional negative signs appearing is unexpected places (but not spoiling positivity). We will assume >0 for simplicity, so that ^1/2 is real. Essentially, we consider an algebra A_ which has sub-algebras which correspond to the K-theoretic Coulomb branch algebras for either SU(2) or PSU(2) gauge theories. The algebra is very well-understood, allowing us to present an explicit full linear basis. Recall that this is a class example with Lie algebra 𝔰𝔩_2 and C being a cylinder with irregular singularities of “rank 1/2” at both ends. Wilson lines in the SU(2) gauge theory map to traces of holonomies around the cylinder, while 't Hooft lines map to regularized holonomies from one end to the other of the cylinder. See <cit.> for details. The Schur index is I_ = 1/2∮_|ζ|=1dζ/2 π i ζ (1-ζ^2)(1-ζ^-2)(^2)^2_∞ (^2 ζ^2;^2)^2_∞ (^2 ζ^-2;^2)^2_∞ = =1 + ^4 + ^12 + ^24 + ⋯ = ∑_n=0^∞^2 n(n+1) The insertion of Wilson lines w_n of spin n/2 adds a character ζ^n + ζ^n-2 +⋯ + ζ^-n to the integrand. E.g. w_1=0 and w_1^2 = 1/2∮_|ζ|=1dζ/2 π i ζ (ζ + ζ^-1)^2 (1-ζ^2)(1-ζ^-2)(^2)^2_∞ (^2 ζ^2;^2)^2_∞ (^2 ζ^-2;^2)^2_∞ = =2 I_ - ∑_n=-∞^∞^2 n^2 = ∑_n=1^∞^2 n(n-1)(1-^2n)^2 §.§.§ The algebra The “Abelianized” description of A_ involves auxiliary generators v^± 1, u_± such that Wilson lines map to characters w_n = v^n + v^n-2+⋯ + v^-n and the following relations hold: u_± v = ^± 1 v u_± u_+ u_- = 1/(v - v^-1)( v - ^-1 v^-1) u_- u_+ = 1/(v - v^-1)(^-1 v - v^-1) Notice the single factor of in the first relation. This is precisely due to the choice to include both “minimal” electric and magnetic charges. The algebras for SU(2) or SO(3) gauge theories will be obtained by dropping either 't Hooft operators of Wilson lines of odd charge. The 't Hooft-Wilson operators of minimal magnetic charge do not suffer of monopole bubbling effect and are simply written as H_a = ^a/2 v^a u_+ + ^a/2 v^-a u_- in terms of the auxiliary variables. The 't Hooft-Wilson line defects of higher magnetic charge have more complicated rational expressions, which can be recovered by from products of H_a's. We will come back to these momentarily. 
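As a quick consistency check (my own, using only the two series quoted above for the correlator of w_1^2, with arbitrary truncation orders), the expressions 2 I - Σ_{n∈ℤ} q^{2n²} and Σ_{n≥1} q^{2n(n-1)}(1-q^{2n})² can be compared order by order in q.

```python
# A short series sketch (my own check, not from the paper): the two q-series
# expressions obtained above for the Schur correlator of w_1^2 agree,
#   2 I - sum_{n in Z} q^{2 n^2}  =  sum_{n >= 1} q^{2n(n-1)} (1 - q^{2n})^2 .
import sympy as sp

q = sp.Symbol('q')
order, nmax = 60, 20
I_schur = sum(q**(2*n*(n + 1)) for n in range(nmax))
lhs = 2*I_schur - sum(q**(2*n**2) for n in range(-nmax, nmax + 1))
rhs = sum(q**(2*n*(n - 1)) * (1 - q**(2*n))**2 for n in range(1, nmax))
diff = sp.expand(lhs - rhs)
assert all(diff.coeff(q, k) == 0 for k in range(order))
print("the two series for the w_1^2 correlator agree through q^59")
```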
For now, we compute H_a H_b= ^a/2+3b/2 v^a+b u_+^2 + ^a/2-b/2v^b-a/(v - v^-1)(^-1 v - v^-1) +^a/2-b/2v^a-b/(v - v^-1)( v - ^-1 v^-1) + ^a/2+3 b/2 v^-a-b u_-^2 The two middle terms are inserted in the integral expression for H_a H_b, leading to two contributions: 1/2∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞^a/2-b/2+1ζ^b-a-2∏_n=0^∞ (1-^2nζ^2) (1-^2n+2ζ^-2) (1-^2n+2ζ^2) (1-^2n+4ζ^-2) 1/2∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞^a/2-b/2+1ζ^a-b+2∏_n=0^∞ (1-^2n+2ζ^2) (1-^2nζ^-2) (1-^2n+4ζ^2) (1-^2n+2ζ^-2) The integration contours can be shifted to 1/2∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞ζ^b-a-2∏_n=0^∞ (1-^2n+1ζ^2) (1-^2n+1ζ^-2) (1-^2n+3ζ^2) (1-^2n+3ζ^-2) 1/2∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞ζ^a-b+2∏_n=0^∞ (1-^2n+1ζ^2) (1-^2n+1ζ^-2) (1-^2n+3ζ^2) (1-^2n+3ζ^-2) and combined to find H_a-2 H_b = ∮_|ζ|=1dζ/2 π i ζ(^2)^2_∞1/2 (ζ^b-a+ ζ^a-b ) (ζ^2;^2)(ζ^-2;^2)(^3 ζ^2;^2)(^3 ζ^-2;^2) This is compatible with the expected ρ(H_a) = H_a-2 , which implies norms H_a-2 H_a will have a positive integrand. We can look more carefully at products of two H's to understand 't Hooft operators of non-minimal charge. We repeat here the crucial formula: ^a/2-b/2 H_a H_b= ^a+b v^a+b u_+^2 + ^a-bv^b-a+1- v^a-b-1/v - v^-1 +^-1v^a-b+1- v^b-a-1/v - v^-1/(^-1 v - v^-1)( v - ^-1 v^-1) + ^a+b v^-a-b u_-^2 If we specialize to a=b, we get an elementary 't Hooft operator H^(2)_2a of magnetic charge 2 and even electric charge: H^2_a= ^2a v^2a u_+^2 + +^-1/(^-1 v - v^-1)( v - ^-1 v^-1) + ^2a v^-2a u_-^2 . If we specialize to b=a + 1 we get an elementary 't Hooft operator H^(2)_2a+1 of magnetic charge 2 and odd electric charge: ^-1/2 H_a H_a+1= ^2a+1 v^2a+1 u_+^2 + (v+v^-1) /(^-1 v - v^-1)( v - ^-1 v^-1) + ^2a+1 v^-2a-1 u_-^2 = ^1/2 H_a+1 H_a . In both cases, if we were to directly compute these expressions we we would easily predict the first and last term while the middle term would require a careful analysis of bubbling contributions as a smooth monopole configuration screens the bare magnetic charge. Other H_a H_b products do not give anything new. For example, H_a H_a-2 = 1+ ^-1 H_a-1^2 More generally, if b ≥ a+2 we have ^a/2-b/2 H_a H_b- ^1+a/2-b/2 H_a+1 H_b-1= ^a-b+1 w_b-a-2 Conversely, if b≤ a-2, ^a/2-b/2 H_a H_b- ^a/2-b/2-1 H_a-1 H_b+1= ^a-b-1 w_a-b-2 The simple commutation relations between H_a and H_a+1 suggest considering combinations D_b+c;a(b+c)+c≡^- 1/2 bc H_a^b H_a+1^c ∼^a (b+c)^2 + c(b+c) v^a (b+c) + c u_+^b+c + ⋯ Although we employed three integers a,b,c in the definition, D_m,e has an unique realization for any m>0: we define c as e modulo m in the range 0,m, b=m-c and then a=(e-c)/m. The leading term in the expression identifies this with a (K-theory class of) a 't Hooft-Wilson loop of charge (m,e). Recall that UV line defects are labelled by a pair of a magnetic weight and a weight for the gauge group modulo the action of the Weyl group. Here we fixed the Weyl symmetry by setting m≥ 0. If m=0, we set D_0,e≡ w_e. This exhausts the space of expected charges. Accordingly, we expect D_m,e to be a linear basis for A_. In particular, it is easy to verify that the product of any number of H_a's and w_n's can be recursively reduced to a finite linear combination of D_m,e's: w_n H_a can be expanded in a linear combination of H_a+k with |k| ≤ n and any H_a H_b combination can be replaced by H_a+b/2^2 or H_a+b-1/2H_a+b+1/2 up to terms with lower magnetic charge. 
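These relations can be checked explicitly by realizing u_± as shift operators acting on rational functions of v. The sympy sketch below is my own illustration: the q^{1/2}-normalized realization used here is simply one convenient solution of the u_+u_- and u_-u_+ relations quoted above (it also reappears as the auxiliary-space realization in the next paragraphs), and the explicit q-powers in the verified identities spell out factors left implicit in the display.

```python
# A sympy sketch (my own check): realize  u_+ = q^(1/2) v^2/(v^2-1) u  and
# u_- = q^(1/2)/(v^2-1) u^(-1)  with  u v = q v u, which solves the abelianized
# relations of pure SU(2) quoted above, and verify the 't Hooft-Wilson relations.
# An operator is stored as {shift k: f(v)}, standing for f(v) u^k; composition
# uses  u^k g(v) = g(q^k v) u^k .
import sympy as sp

q, v = sp.symbols('q v', positive=True)
half = sp.Rational(1, 2)

def mul(A, B):
    out = {}
    for k1, f1 in A.items():
        for k2, f2 in B.items():
            out[k1 + k2] = sp.cancel(out.get(k1 + k2, 0) + f1 * f2.subs(v, q**k1 * v))
    return out

def add(A, B):
    out = dict(A)
    for k, f in B.items():
        out[k] = sp.cancel(out.get(k, 0) + f)
    return out

scal = lambda c, A: {k: sp.cancel(c * f) for k, f in A.items()}
is_zero = lambda A: all(sp.simplify(f) == 0 for f in A.values())

u_p = {+1: q**half * v**2 / (v**2 - 1)}
u_m = {-1: q**half / (v**2 - 1)}
one, w1 = {0: sp.Integer(1)}, {0: v + 1/v}

def H(a):
    return add(scal(q**sp.Rational(a, 2) * v**a, u_p),
               scal(q**sp.Rational(a, 2) * v**(-a), u_m))

# consistency with  u_+ u_- = 1/[(v - v^-1)(q v - q^-1 v^-1)]
assert is_zero(add(mul(u_p, u_m), scal(-1, {0: 1/((v - 1/v)*(q*v - 1/(q*v)))})))

a = 2
# w_1 H_a = q^(-1/2) H_{a+1} + q^(1/2) H_{a-1}
assert is_zero(add(mul(w1, H(a)),
                   scal(-1, add(scal(q**(-half), H(a + 1)), scal(q**half, H(a - 1))))))
# H_a H_{a-2} = 1 + q^(-1) H_{a-1}^2
assert is_zero(add(mul(H(a), H(a - 2)),
                   scal(-1, add(one, scal(1/q, mul(H(a - 1), H(a - 1)))))))
# q^(-1/2) H_a H_{a+1} = q^(1/2) H_{a+1} H_a
assert is_zero(add(scal(q**(-half), mul(H(a), H(a + 1))),
                   scal(-q**half, mul(H(a + 1), H(a)))))
print("pure SU(2) 't Hooft-Wilson line relations verified")
```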
We have ρ(D_m,e) = D_m,e-2m In conclusion, the algebra A_ and the double _ are defined by the relations w_1 H_a = ^-1/2 H_a+1 + ^1/2 H_a-1 H_a w_1 = ^1/2 H_a+1 + ^-1/2 H_a-1 H_a H_a+1 = H_a+1 H_a H_a+1 H_a-1 = 1 + ^-1 H_a^2 H_a-1 H_a+1 = 1 + H_a^2 D_n+m,an+am+m≡^-n m/2H_a^n H_a+1^m ρ(D_m,e) = D_m,e-2m . §.§.§ Norms and auxiliary Hilbert space As we compute the norm of |D_m,e⟩, we can attempt to systematically shift the integration contours as we did above to reach a manifestly positive expression. This is not difficult. For brevity, we integrate the analysis into the presentation of the isometry from A_ to an auxiliary Hilbert space L^2(× S^1)^_2. We use a magnetic Vandermonde measure (v^-1-v)( v^-1 - v) , in the definition of the auxiliary space and maps u_+ = ^1/2 v^2/v^2-1 u u_- = ^1/2/v^2-1 u^- 1 u_+ = ^1/2/1- v^2 u u_- = ^1/2 v^2/1- v^2 u^- 1 with half the usual normalization: u v = v u and expected ρ(u_+) = u_+^† = ^-1 v^2 u_- ρ(u_-) = u_-^† = ^-1 v^-2 u_+ The half-index/image of the spherical vector becomes II_B(ζ) = δ_B,0 (^2;^2)_∞ (^2 ζ^2;^2)_∞ (^2 ζ^-2;^2)_∞ which is Weyl symmetric. Here we encounter another manifestation of the SU(2)/SO(3) subtleties. If we define the _2 Weyl symmetry as B → -B, ζ→ζ^-1, the minimal 't Hooft operators are odd under the Weyl symmetry. A simple way around this obstruction is to include an extra multiplicative factor of (-1)^B in the definition of the _2 action, so that states of odd B are odd under B → -B, ζ→ζ^-1. Then the 't Hooft operators act within L^2(× S^1)^_2 and we obtain the desired isometry. [Notice that we cannot just change the relative sign in the definition of u_±: that would make H_a H_a-2 negative.] We compute u_+ II_B(ζ) = u_+ II_B(ζ) = ^1/2δ_B,1 (^2;^2)_∞ (^3 ζ^2;^2)_∞ (^3 ζ^-2;^2)_∞ u_- II_B(ζ) = u_- II_B(ζ)=- ^1/2δ_B,-1 (^2;^2)_∞ (^3 ζ^2;^2)_∞ (^3 ζ^-2;^2)_∞ which verifies the spherical condition: II_B(ζ) is the image of |1⟩ in the auxiliary Hilbert space. As discussed in the general case, the isometry diagonalizes the Wilson lines w_n and w_n, with eigenvalues χ_n(^-m/2ζ). Obviously, L^2(× S^1)^_2 includes a single eigenstate in each eigenspace, labelled by (m,ζ) modulo _2. §.§.§ Inverting the isometry This example is sufficiently simple that we can invert the isometry, by diagonalizing the action of Wilson lines directly in _. Diagonalizing the action of Wilson lines on Wilson lines is straightforward: |0;μ⟩ = ∑_n χ_n(μ) |w_n ⟩ have the same eigenvalue χ_1(μ) for w_1 and w_1. The charge 1 't Hooft operators can be reorganized as |1;μ⟩ = ∑_a μ^a |H_a ⟩ As w_1 H_a = ^-1/2 H_a+1 + ^1/2 H_a-1 H_a w_1= ^1/2 H_a+1 + ^-1/2 H_a-1 , this is a simultaneous w_1 eigenvector with eigenvalue μ^1/2 + μ^-1^-1/2 and w_1 eigenvector with eigenvalue μ^-1/2 + μ^-1^1/2. It is delta-function normalizable on the unit circle: ⟨ 1;μ|1;ν⟩ = ∑_a,bμ^-aν^b H_a-2 H_b = (μ^2;^2)(μ^-2;^2)(^3 μ^2;^2)(^3 μ^-2;^2) ∑_b (μ^-bν^b ) . At magnetic charge 2 we have a mixing with charge 0 w_1 H^(2)_2a = ^-1 H^(2)_2a+1 + H^(2)_2a-1 H^(2)_2a w_1= H^(2)_2a+1 + ^-1 H^(2)_2a-1 w_1 H^(2)_2a+1 = ^-1 H^(2)_2a+2 + H^(2)_2a +1 H^(2)_2a+1 w_1= H^(2)_2a+2 + ^-1 H^(2)_2a +1 w_1 w_n = w_n+1 + w_n-1 n>0 As for the charge 2 sector, we can effectively strip off the bubbling contributions by defining auxiliary states |2;2a⟩ ≡ | H^(2)_2a⟩ + | + ^-1/( + ^-1)^2 - w_1^2⟩ |2;2a+1⟩ ≡ | H^(2)_2a+1⟩ + |w_1/( + ^-1)^2 - w_1^2⟩ where the second terms are defined as sums over ( + ^-1)^-b-1 |w_1^b⟩. 
Then w_1 |2;a⟩ = ^-1|2;a+1⟩+ |2;a-1⟩ and then |2;μ⟩ = ∑_a μ^a |2;a ⟩ , which is an w_1 eigenvector with eigenvalue μ+ μ^-1 and w_1 eigenvector with eigenvalue μ + μ^-1. Following this route, we can build abstractly a spectral decomposition of over the expected (S^1 ×)/_2 spectrum, with one-dimensional distributional eigenspaces. §.§.§ Diagonalizing 't Hooft operators We can give another interesting alternative description of _ by simultaneously diagonalizing H_0 and H_1. Recall the definition of the complex quantum dilogarithm, aka tetrahedron index: Φ_B(ζ) = ∏_n=0^∞1+ ^2n+1 v/1+ ^2n+1 v^-1=∏_n=0^∞1+ ^2n-B/2+1ζ/1+ ^2n-B/2+1ζ^-1 We now define a second set of variables σ, S, s, t, etc. analogous to ζ, B, v, u, etc. and consider the kernel U_B,S(ζ,σ) = e^i π/2BΦ_B + S(σζ) Φ_-B + S(σζ^-1) We have (u_+ U)_B,S(ζ,σ) =i ^1/2 v(v+s)/v^2-1 (t U)_B,S(ζ,σ) (u_- U)_B,S(ζ,σ) = -i ^1/2(1+s v) /v^2-1 (t U)_B,S(ζ,σ) so that (H_0 U)_B,S(ζ,σ) = i ^1/2 (t U)_B,S(ζ,σ) and (H_-1 U)_B,S(ζ,σ) = - i (s t U)_B,S(ζ,σ) as well as ( H_1 U)_B,S(ζ,σ) = i ( s^-1 t^-1 U)_B,S(ζ,σ) and ( H_2 U)_B,S(ζ,σ) = - i ^3/2 ( t^-1 U)_B,S(ζ,σ) Clearly, if U is the kernel of an unitary transformation between L^2(× S^1)^_2 and the L^2(× S^1) space of wavefunctions in σ and S, these relations will give us the spectrum of 't Hooft operators. In order to prove such a statement, it is useful to avoid dealing with delta-function normalizability by diagonalizing operators with a discrete spectrum: | H_1|^2 = H_1 ρ(H_1) = H_-1 H_1 | H_2|^2 = H_2 ρ(H_2) = H_0 H_2 We have (| H_1|^2 U)_B,S(ζ,σ) = (|st|^2 U)_B,S(ζ,σ) (| H_2|^2 U)_B,S(ζ,σ) = ^2 (|t|^2 U)_B,S(ζ,σ) and thus a Fourier transform in σ gives tentative wave-functions with fixed eigenvalues for | H_1|^2 and | H_2|^2: U_B;S,T(ζ) ≡ e^i π/2B∮dσ/2π i σ^T+1Φ_B + S(σζ) Φ_-B + S(σζ^-1) The available range for the parameters S and T is constrained by the requirement that the integration contour can be deformed as needed to simplify the action of the 't Hooft operators. It would be nice to verify that both parameters are constrained to be integers and that this set of eigenfunctions is complete. We will continue the discussion in our companion paper <cit.>, as this is closely related to IR formulae for the Schur index. The distributional kernel employed above can be identified with the contribution to Schur correlation functions of an RG interface <cit.>. §.§ N=2^* SU(2) gauge theory. The next simplest example is the case of N=2^* SU(2) gauge theory. This is a theory of class with algebra 𝔰𝔩_2 for a one-punctured torus. The Schur index is I_(μ) = 1/2∮_|ζ|=1dζ/2 π i ζ (1-ζ^2)(1-ζ^-2)(^2)^2_∞ (^2 ζ^2;^2)^2_∞ (^2 ζ^-2;^2)^2_∞/(-μ^±;^2)_∞ (-μ^±ζ^2;^2)_∞ (-μ^±ζ^-2;^2)_∞ where for reason of space we condensed the denominator products as (x μ^±;^2)_∞ = (x μ;^2)_∞(x μ^-1;^2)_∞. §.§.§ The algebra The insertion of Wilson lines w_n = v^n + v^n-2+⋯ + v^-n is straightforward. In order to describe A_, we can introduce u_± v = ^± 1 v u_± , which also satisfy u_+ u_- = (1+μ v^2)(1+μ^-1 v^2)/(1-v^2)(1-^2 v^2) u_- u_+ = (1+μ^-1 v^2)(1+μ^-1^-1 v^2)/(1-v^2)(1-^-2 v^2) Again, we will enlarge the A_ algebra by including also 't Hooft operators of minimal charge, which would strictly speaking make sense only for an SO(3) gauge theory. The algebras for SU(2) or SO(3) gauge theories will be obtained by dropping either 't Hooft operators of Wilson lines of odd charge. 
The 't Hooft operators of minimal charge do not suffer of monopole bubbling effect and are simply written as H_a = ^a/2 v^a u_+ + ^a/2 v^-a u_- We can compute H_a H_b = ^a/2+3b/2 v^a+b u_+^2 + ^a/2-b/2 v^b-a(1+μ^-1 v^2)(1+μ^-1^-1 v^2)/(1-v^2)(1-^-2 v^2) + + ^a/2-b/2 v^a-b(1+μ v^2)(1+μ^-1 v^2)/(1-v^2)(1-^2 v^2) + ^a/2+3 b/2 v^-a-b u_-^2 The two terms appearing in H_a H_b are 1/2∮_|ζ|=1dζ/2 π i ζ^a/2-b/2ζ^b-a(^2)^2_∞ (ζ^2;^2)_∞ (^2ζ^-2;^2)_∞(^2 ζ^2;^2)_∞ (^4 ζ^-2;^2)_∞/(μ^±;^2)_∞ (μ^±ζ^2;^2)_∞ (^3 μ^±ζ^-2;^2)_∞ 1/2∮_|ζ|=1dζ/2 π i ζ^a/2-b/2ζ^a-b(^2)^2_∞ (^2ζ^2;^2)_∞ (ζ^-2;^2)_∞(^4 ζ^2;^2)_∞ (^2 ζ^-2;^2)_∞/(μ^±;^2)_∞ (^3 μ^±ζ^2;^2)_∞ (μ^±ζ^-2;^2)_∞ The integration contours can be shifted and the integrals combined H_a H_b =1/2∮_|ζ|=1dζ/2 π i ζ(ζ^b-a+ ζ^a-b ) (^2)^2_∞ (ζ^2;^2)_∞ (ζ^-2;^2)_∞(^3 ζ^2;^2)_∞ (^3 ζ^-2;^2)_∞/(μ^±;^2)_∞ (^2 μ^±ζ^2;^2)_∞ (^2 μ^±ζ^-2;^2)_∞ The automorphism ρ acts trivially here and this expression is fully compatible with positivity. We can write some relations: w_1 H_a = ^-1/2 H_a+1 + ^1/2 H_a-1 H_a w_1 = ^1/2 H_a+1 + ^-1/2 H_a-1 ^-1/2 H_a H_a+1 - ^1/2 H_a+1H_a = (^-1 - ) w_1 H_a-1 H_a+1 = H_a^2 + μ + μ^-1 + ^-1 w_1^2 - ^-1- H_a+1 H_a-1 = ^-1 H_a^2 + μ + μ^-1 + w_1^2 - ^-1- The algebra is expected to enjoy an SL(2,) S-duality symmetry generated by T: H_a → H_a+1 and S: H_0 ↔ w_1. We expect generators D_m,e=D_-m,-e with an obvious SL(2,) action, organized in orbits generated from w_n with n being the common divisor of m and e. We set D_0,1 = w_1 and D_1,0 = H_0. Then D_1,a = H_a. The relation D_1,0 D_0,1 = ^1/2 D_1,1 + ^-1/2 D_1,-1 predicts D_a,b D_c,d = ^1/2 D_a+c,b+d + ^-1/2 D_a-c,b-d a d - b c = 1 Analogously, D_1,1 D_1,-1 = ^-1 D_2,0 + D_0,2 + μ + μ^-1 predicts D_a+c,b+d D_a-c,b-d = ^-1 D_2a,2b + D_2c,2d + μ + μ^-1 a d - b c = 1 We can use these relations to both define D_m,e and test the SL(2,) symmetry expectations. For example, we can define D_2,2a = H_a^2 - 1 and D_2,2a+1 = ^-1/2 H_a H_a+1- ^-1 w_1 . Analogously, we can define D_3,3a = H_a^3 - 2 H_a and D_3,3a+1 = ^-1/2 H_a D_2,2a+1- ^-1 H_a+1 D_3,3a+2 = ^-1/2 D_2,2a+1H_a+1 - ^-1 H_a . Etcetera. This is a well-known quantization of the SL(2) character variety for a 1-punctured torus. §.§.§ The auxiliary Hilbert space In order to give an isometry to _^aux, we use a magnetic Vandermonde measure (v^-1-v)( v^-1 - v) , and identify with some work the expressions for the generators u_+ = v^2+^-1μ/v^2-1 u u_- = μ^-1+^-1v^2/v^2-1 u^-1 u_+ = 1+^-1μ v^2/1- v^2 u u_- = ^-1 + μ^-1 v^2/1- v^2 u^-1 compatible with ρ and a candidate spherical vector: II_B(ζ) = δ_B,0(^2;^2)_∞ (^2 ζ^2;^2)_∞ (^2 ζ^-2;^2)_∞/(-μ;^2)_∞ (-μζ^2;^2)_∞ (-μζ^-2;^2)_∞ . In order to have a naive action of Weyl symmetry, we would need to correct these expressions by powers of μ^1/2. Instead, we can include a factor of (-μ)^B in the definition of the _2 Weyl symmetry, in the same spirit (and including) the sign fix we used for pure SU(2). §.§.§ More on S-duality S-duality is a very non-trivial symmetry of Schur correlation functions. E.g. we can verify experimentally that H_a^2 = w_1^2. A full proof can be given with the help of S-duality interfaces <cit.>. It is worth discussing this explicitly. The S-duality kernel is a small variation of the one employed to diagonalize 't Hooft operators in pure SU(2) <cit.>. We define a second set of variables σ, S, s, t, etc. analogous to ζ, B, v, u, etc. 
and consider the kernel U_B,S(ζ,σ) = σ^-2Sμ^-S+Bζ^-2BΦ_B + S(σζ) Φ_-B + S(σζ^-1) Φ_-B - S(-μσ^-1ζ^-1) Φ_B - S(-μσ^-1ζ) Then μ^-1 s (1- μ s^-1 v)(1- μ s^-1 v^-1) t^-1 U_B,S(ζ,σ) = s^-1(1+ s v)(1+s v^-1) t U_B,S(ζ,σ) i.e. (v+v^-1) U_B,S(ζ,σ) = ( 1/t^-1 -ts (t + μ^-1 t^-1) + 1/t^-1 -t s^-1(t+μ t^-1) )U_B,S(ζ,σ) which essentially maps the Wilson line to a 't Hooft operator built from t and s. We also have (1+s v)t U_B,S(ζ,σ) = ( s v-μ) u^-1 U_B,S(ζ,σ) (1+s v^-1)t U_B,S(ζ,σ) = (-s v^-1μ^-1+1) u U_B,S(ζ,σ) e.g. s v (t-u^-1) U_B,S(ζ,σ) =(-μ u^-1-t) U_B,S(ζ,σ) s v^-1(t+μ^-1 u) U_B,S(ζ,σ) = (u-t) U_B,S(ζ,σ) which implies v (^-1 t-u^-1)(u-t) U_B,S(ζ,σ) =v^-1(^-1 t+μ^-1u) (-μ u^-1-t) U_B,S(ζ,σ) i.e. 1/v^2-1[(v^2 + μ^-1)u + ( v^2 + μ)u^-1]U_B,S(ζ,σ) = ( t^-1+ t) U_B,S(ζ,σ) which, up to a μ→μ^-1 convention change, maps the 't Hooft loop to a simple difference operator which is diagonalized by Fourier transform. A similar formula holds for the tilde variables. The distributional kernel employed above can be identified with the contribution to Schur correlation functions of a duality interface defined via T[SU(2)] <cit.>. §.§ Intermission: SU(2) vs U(2) SQCD The next natural set of examples would be SU(2) gauge theories with N_f = 1,2,3,4 flavours. These have a nice class S interpretation. An unpleasant challenge is that the minimal allowed charge for monopole operators is 2, requiring one to address directly bubbling. There is a trick to sidestep this computation: consider instead U(2) gauge theories, which admit 't Hooft operators of minimal charge. An important feature of gauge theories is that 't Hooft operators which are not charged under some factor of the gauge group have the same expression as difference operators as if the factor was not gauged. We can thus write down U(2) 't Hooft operators of minimal charges, combine them into 't Hooft operators with SU(2) charge only, and carry them over to SU(2) gauge theory. We refer to the Appendices for details. §.§ Abelianized algebras For N_f=1 we get. H_2a = ^2a v^2a u_+ + + ^-1 + μ v + μ v^-1/( v - ^-1v^-1)(^-1 v - v^-1) + ^2a v^-2au_- H_2a+1 = ^2a+1 v^2a+1 u_+ + ( + ^-1)μ + v + v^-1/( v - ^-1v^-1)(^-1 v - v^-1) + ^2a+1 v^-2a-1 u_- Here u_± v = ^± 2 v u_± , and u_+ u_- = (1+μ v)(1+μ^-1 v^-1)/(v-v^-1)( v - ^-1v^-1)^2(^2 v - ^-2v^-1) u_- u_+ = (1+μ^-1 v)(1+μ v^-1)/(v-v^-1)(^-1 v - v^-1)^2(^-2 v - ^2v^-1) More generally, for N_f flavours we need u_+ u_- = ∏_i (1+μ_i v)(1+μ_i ^-1 v^-1)/(v-v^-1)( v - ^-1v^-1)^2(^2 v - ^-2v^-1) u_- u_+ = ∏_i (1+μ_i ^-1 v)(1+μ_i v^-1)/(v-v^-1)(^-1 v - v^-1)^2(^-2 v - ^2v^-1) and the tentative numerator in H_2a becomes (^-1 v - v^-1) ∏_i (1+ μ_i v) + ( v - ^-1v^-1)∏_i (1+ μ_i v^-1)/v-v^-1 and in H_2a+1 becomes ^-1v^-1 (^-1 v - v^-1) ∏_i (1+ μ_i v) + v ( v - ^-1v^-1)∏_i (1+ μ_i v^-1)/v-v^-1 For specific N_f, we can simplify the expressions by subtracting some Wilson lines. 
For N_f=2 we get H_2a = ^2a v^2a u_+ + (+ ^-1)(1+μ_1 μ_2) + (μ_1+μ_2) (v + v^-1) /( v - ^-1v^-1)(^-1 v - v^-1) + ^2a v^-2au_- H_2a+1 = ^2a+1 v^2a+1 u_+ + (+ ^-1) (μ_1+μ_2)+ (1+μ_1 μ_2)(v + v^-1)/( v - ^-1v^-1)(^-1 v - v^-1) + ^2a+1 v^-2a-1 u_- For N_f=3 we get H_2a = ^2a v^2a u_+ + (+ ^-1)(1+μ_1 μ_2+μ_2 μ_3+μ_1 μ_3) + (μ_1+μ_2+ μ_3+ μ_1 μ_2 μ_3) (v + v^-1) /( v - ^-1v^-1)(^-1 v - v^-1) + + ^2a v^-2au_- H_2a+1 = ^2a+1 v^2a+1 u_+ +(+ ^-1) (μ_1+μ_2+ μ_3+ μ_1 μ_2 μ_3)+ (1+μ_1 μ_2+μ_2 μ_3+μ_1 μ_3)(v + v^-1)/( v - ^-1v^-1)(^-1 v - v^-1) + + ^2a+1 v^-2a-1 u_- Finally, for N_f=4 we get H_2a = ^2a v^2a u_+ + (+ ^-1)(1+∑_i<jμ_i μ_j+∏_i μ_i) + (∑_i μ_i+ ∑_i<j<kμ_i μ_j μ_k) (v + v^-1) /( v - ^-1v^-1)(^-1 v - v^-1) + + ^2a v^-2au_- H_2a+1 = ^2a+1 v^2a+1 u_+ + (+ ^-1) (∑_i μ_i+ ∑_i<j<kμ_i μ_j μ_k)+ (1+∑_i<jμ_i μ_j+∏_i μ_i)(v + v^-1)/( v - ^-1v^-1)(^-1 v - v^-1) + + ^2a+1 v^-2a-1 u_- These theories actually have an SO(2 N_f) global symmetry. This is not completely manifest from the above expressions, but can be restored by rescaling u_± and the H_a operators by ∏_i μ_i^1/2. E.g. the numerator factors are characters for the spinor representations of SO(2 N_f). The N_f=4 theory has a class interpretation with Lie algebra 𝔰𝔩_2 and C being the four-punctured sphere, with regular singularities of monodromy parameters μ_1 μ_2^± and μ_3 μ_4^±. We will discuss the auxiliary Hilbert space description at length in the next section, as the main example of quantization of a complex character variety. Here we can sketch the main formulae. The Schur index is I_(μ) = 1/2∮_|ζ|=1dζ/2 π i ζ (1-ζ^2)(1-ζ^-2)(^2)^2_∞ (^2 ζ^2;^2)^2_∞ (^2 ζ^-2;^2)^2_∞/∏_i (-μ_i^±ζ;^2)_∞ (-μ_i^±ζ^-1;^2)_∞ = =1+ χ_Adj(μ)^2 + ⋯ Rather non-trivially, only characters of triality-invariant representations of the SO(8) flavour group appear in the index. This is due to the fact that S-dualities for this SCFT act as triality on the flavour group. [For example, the Schur trace of a fundamental Wilson line starts with χ_8(μ), a character of the vector representation of SO(8). The spinor characters in H_a guarantee that the corresponding traces start with χ_8_s(μ) or χ_8_c(μ) for the spinor representations, compatibly with the fact that S-duality exchanges Wilson lines and 't Hooft lines while acting as a triality on SO(8). ] The candidate spherical vector is II_B(ζ) = δ_B,0(^2;^2)_∞ (^2 ζ^2;^2)_∞ (^2 ζ^-2;^2)_∞/∏_i (-μ_i ζ;^2)_∞ (-μ_i ζ^-1;^2)_∞ in an auxiliary Hilbert space defined with the usual magnetic Vandermonde measure (v^-1-v)( v^-1 - v) , as well as u v = ^2 v u and u_+ = ∏_i (1+^-1μ_i v^-1)/(1-^-2 v^-2)(1-v^-2) u u_- = ∏_i (μ_i^-1 +^-1 v)/(1-^-2 v^2)(1-v^2) u^- 1 u_+ = ∏_i (1+^-1μ_i v)/(1-^-2 v^2)(1- v^2) u u_- =∏_i (μ_i^-1 +^-1 v^-1)/(1-^-2 v^-2)(1- v^-2) u^- 1 The Weyl symmetry has to be adjusted by a factor of (-∏_i μ_i)^B. The remaining theories also have a similar class interpretation: N_f=3 has two regular punctures and one irregular of rank 1, N_f=2 has a realization with two regular punctures and one irregular of rank 1/2 and a realization with two irregular of rank 1, N_f=1 has a realization with an irregular of rank 1 and one of rank 1/2. In the remainder of this section, we will discuss some interesting applications of Schur quantization of these theories to the theory of quantum groups. §.§ Back to U(2) with N_f=1. In order to make contact with quantum groups, we can gauge the U(1) flavour symmetry of the SU(2) with two flavour theory. 
Then the two hypermultiplets together with the U(1) gauge fields give a copy of SQED_2, with the SU(2) flavour symmetry being gauged. This gives back U(2) with N_f=1. The operators inherited from SQED_2 include the 't Hooft operators with U(1) magnetic charge only and the U(1) Wilson line. They give a copy of the quantum group U_q(𝔰𝔩_2), with the Casimir coinciding with the fundamental Wilson line for SU(2). According to our general discussion, the Lagrangian formulation of Schur quantization for this theory thus presents the Hilbert space _ as a spectral decomposition into principal series representations _M;μ of [U_q(sl_2),ρ]. We expect this representation to be a fundamental ingredient of a quantum group description of an irregular singularity of rank 1/2 in complex Chern-Simons theory, akin to the Teichmüller construction of irregular conformal blocks <cit.>. §.§ A q-deformation of T^*SL(2,). Recall that one of the class descriptions of SU(2) N_f=2 involves a P^1 geometry with two irregular singularities of rank 1. If we gauge both U(1) subgroups of the flavour symmetry, this gives a theory such that A_ contains two commuting copies of U_q(𝔰𝔩_2). It is a q-deformation of the left and right actions of two copies of 𝔰𝔩_2 on T^*SL(2,). The Casimirs of the two U_q(𝔰𝔩_2) coincide with the fundamental Wilson line for SU(2). The Lagrangian formulation of Schur quantization thus presents the Hilbert space _ as a direct sum/integral of products _M;μ×_M;μ of two principal series representations of [U_q(sl_2),ρ]. This is a q-deformation of the Plancherel decomposition of L^2(SL(2,)). §.§ The coproduct for 𝔘_(𝔰𝔩_2). The class description of SU(2) N_f=3 only makes manifest an SO(2) × SO(4) subgroup of the flavour symmetry of the theory. The SO(2) factor is associated to an irregular singularity of rank 1, the SO(4) to two regular singularities. Gauging SO(2) gives a theory with a particularly important connection to the representation theory of quantum groups. A full discussion requires some cluster technology <cit.> and is better suited to our companion paper <cit.>. In essence, there is a copy of U_q(𝔰𝔩_2) in A_ but also a map A_→ U_q(𝔰𝔩_2) × U_q(𝔰𝔩_2) which is essentially an isomorphism, realizing the coproduct of U_(𝔰𝔩_2). As in the Teichmüller case <cit.>, we expect this setup to give a spectral decomposition of the tensor product _M_1;μ_1×_M_2;μ_2 of two principal series representations of [U_q(sl_2),ρ] into a direct sum/integral of _M;μ. An important difference is that here we can also ask (and answer using explicit Schur quantization formulae) how the spherical vector in _M_1;μ_1×_M_2;μ_2 decomposes into a direct integral of spherical vectors in _0;μ. See <cit.> for an analogous statement in sphere quantization. § QUANTUM GROUPS AND SCHUR QUANTISATION The SQED_2 example, both in isolation and as a building block for bigger theories, has provided us with a *-algebra [U_q(sl_2),ρ]≡ U_q(𝔰𝔩(2,)_)_ S which we expect to play an important role in SL(2,) Chern-Simons theory. In this section we will compare this proposal with previous definitions of quantum deformations of SL(2,), some of which have been used to define a quantization of SL(2,) CS theory. There are two important subtleties here, as the q-deformation of a *-algebra may involve both a deformation of the underlying algebra and a deformation of the *-structure.
For example, the standard U_q(𝔰𝔩_2) deformation of U(𝔰𝔩_2) admits distinct *-structures corresponding to real forms such as SU(1,1) and SL(2,) which are classically equivalent <cit.>.[It would be interesting to explore this statement in the context of real Schur quantization. We leave that to future work.] Furthermore, the quantum deformation of groups like the Lorentz group SL(2,), which are not semi-simple, is not unique <cit.>. The quantum deformation introduced in <cit.> is characterised by having a quantum deformation of the compact subgroup SU(2) inside of it. Another quantum deformation exhibits a deformed version of the Gauss decomposition <cit.>. It will turn out that not all such features can be made fully manifest in the quantum deformations at the same time, but may be realised in more subtle ways. It is therefore not a priori clear which of these quantum deformations is most relevant for the goal to define a quantization of the SL(2,) CS theory. It is therefore not surprising that the quantum group U_q(𝔰𝔩(2,)_)_ S emerging from Schur quantisation turns out to be different from the quantum deformation of U(𝔰𝔩(2,)_) previously studied in <cit.>, <cit.>, here denoted as U_q(𝔰𝔩(2,)_)_ PW. The quantum group U_q(𝔰𝔩(2,)_)_ PW has been used to develop a quantization of complex Chern-Simons theory in <cit.>. A comparison between the quantum Lorentz group U_q(𝔰𝔩(2,)_)_ PW used in <cit.> and U_q(𝔰𝔩(2,)_)_ S is a natural first step to compare the corresponding quantizations of complex Chern-Simons theory. In this Section we will exhibit some of the differences between the two approaches. §.§ The principal series of SL(2,) In order motivate some of the following discussions, and to facilitate the comparison with quantum group theory, we shall very briefly review a few basic facts about the principal series representations of SL(2,) as it arises in the closely related context of sphere quantisation <cit.>. A traditional presentation of the spherical principal series representations of SL(2,) involves the Hilbert space 𝒫_ϑ=L^2(ℙ^1,|K|^1-ϑ) ϑ∈ of twisted half-densities on ℙ^1. The holomorphic differential operators ℰ=∂_x, ℋ=-2x∂_x+J, ℱ=-x^2∂_x+2Jx. with J=-1/2+i ϑ generate a representation of the central quotient of U(𝔰𝔩_2), with quadratic Casimir J(J+1) as global conformal transformations of ℙ^1. The anti-holomorphic differential operators ℰ̅=∂_x̅, ℋ̅=-2x̅∂_x̅+J, ℱ̅=-x̅^2∂_x̅+2Jx̅, generate a second, commuting action. With some foresight, we can identify that as an action of U(𝔰𝔩_2)^ with generators ℱ:=-ℰ^†=ℰ̅, ℋ:=-ℋ^†=ℋ̅, ℰ:=-ℱ^†=ℱ̅, The definition is justified by the observation that the combinations e:=ℰ-ℰ, f:=ℱ-ℱ, h:=ℋ-ℋ. actually define the sub-algebra of rotations of ℙ^1 and exponentiates to an SU(2) action. There is an unique normalizable state Φ_0(x)=(1+|x|^2)^ϑ-1 in 𝒫_ϑ which is SU(2) invariant, i.e. spherical in the sense of representation theory. It is also cyclic: the action of U(𝔰𝔩_2) on Φ_0(x) generates a dense basis of 𝒫_ϑ consisting of the direct sum of all finite-dimensional SU(2) representations R_j of integral spin:[See e.g. <cit.> for a detailed discussion.] 𝒫_ϑ≃⊕_j∈_≥ 0R_j. At this point, a careful reader can probably guess an alternative, algebraic presentation of 𝒫_ϑ: 𝒫_ϑ is a spherical unitary representation of the * algebra double of the central quotient of U(𝔰𝔩_2), with ρ(F):=-E ρ(H) = -H ρ(E) = -F . It is associated to the unique trace on the central quotient of U(𝔰𝔩_2), which happens to be positive when J=-1/2+i. 
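As a small symbolic aside (my own check, not part of the original text): with the Cartan generator normalized as ℋ = -2x∂_x + 2J, which is the normalization required for the 𝔰𝔩_2 relations to close given ℰ and ℱ above, the three holomorphic differential operators indeed generate 𝔰𝔩_2.

```python
# A minimal sympy sketch (my own check): the holomorphic differential operators
#   E = d/dx,  H = -2 x d/dx + 2J,  F = -x^2 d/dx + 2 J x
# close into sl(2):  [H,E] = 2E,  [H,F] = -2F,  [E,F] = H .
import sympy as sp

x, J = sp.symbols('x J')
f = sp.Function('f')(x)

E = lambda g: sp.diff(g, x)
H = lambda g: -2*x*sp.diff(g, x) + 2*J*g
F = lambda g: -x**2*sp.diff(g, x) + 2*J*x*g

comm = lambda A, B, g: sp.expand(A(B(g)) - B(A(g)))

assert sp.simplify(comm(H, E, f) - 2*E(f)) == 0
assert sp.simplify(comm(H, F, f) + 2*F(f)) == 0
assert sp.simplify(comm(E, F, f) - H(f)) == 0
print("E, H, F realize sl(2) on functions of x")
```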
This trace is the starting point of sphere quantization. In order to facilitate the comparison with the Schur quantization, it is useful to recall an alternative auxiliary presentation of 𝒫_ϑ which arises from a Coulomb branch perspective. The presentation is essentially a spectral decomposition into one-dimensional distributional eigenspaces for H and is related to L^2(ℙ^1,|K|^1-ϑ) by a Mellin transform. In the Coulomb presentation, E and F are implemented by difference operators which are a → 1 limit of these which appear in Schur quantization. Schur quantization of SQED_1 provides a positive (twisted) trace on the central quotient of the quantum group algebra U_q(𝔰𝔩_2) which deforms the above structure into a spherical unitary representation of a *-algebra double [U_q(𝔰𝔩_2),ρ]. We will now review in some detail the definition of [U_q(𝔰𝔩_2),ρ] and then study the analogue of the SU(2) action on 𝒫_ϑ. §.§ Real forms of quantum groups from Schur quantisation Recall that the algebra U_q(𝔰𝔩_2) is defined by the relations KE=^2EK, KF=^-2FK, [ E,F ]=K-K^-1/-^-1. We introduced with ^2=q. There is a Casimir element E F + ^-1K + K^-1/(^-1 - )^2, and we will sometimes take a central quotient of the algebra fixing the Casimir to a specific value proportional to μ+μ^-1. The algebra U_q(𝔰𝔩_2) is a Hopf-algebra with co-product Δ(E)=E⊗ 1+K^-1⊗ E, Δ(F)=F⊗ K+1 ⊗ F, Δ(K)=K⊗ K. It will be important to note that there exist very similar, but non-isomorphic, quantum groups often denoted as U_q(𝔰𝔩_2) as well. One of these variants, in the following denoted Û_q(𝔰𝔩_2) uses generators e, f, and k, such that mapping E to e, F to f, and K to k^2 defines an embedding of U_q(𝔰𝔩_2) into Û_q(𝔰𝔩_2). Other variants have an additional generator H such that K=q^2H, with E, F and H satisfying the relations of 𝔰𝔩_2. In order to define a *-algebra double [U_q(𝔰𝔩_2),ρ] which deforms the *-algebra controlling unitary SL(2,) representations in the sense described above, we need to choose an automorphism ρ. It turns out that there are multiple possible choices with the same q → 1 limit. Schur quantization gives a distinguished choice ρ_ S: ρ_ S(E)=- KF, ρ_ S(F)=- K^-1E, ρ_ S(K)=K^-1. leading to the *-algebra double [U_q(𝔰𝔩_2),ρ]. We can contrast this with a naive q-deformation ρ_0(E)=-F, ρ_0(F)=-E, ρ_0(K)=K^-1. Other possibilities would include e.g. ρ_n(E)=-^n K^n F, ρ_0(F)=-^n K^-n E, ρ_n(K)=K^-1. We will see that ρ_ S has some particularly nice features. For example, an analysis based on <cit.> indicates that positive twisted traces only exist for n ≤ 1 and are not unique for n ≤ 0. Another nice feature is that the conditions for a spherical vector can be written as (E+ K^-1 F^† )|1⟩ =0, (F K^†+^-1 E^† )|1⟩=0 , K K^† |1⟩=0. We will see later that the combinations appearing in sph-coprod are related to the co-product of the U_q(𝔰𝔩_2) generators, and that they define a quantum deformed analog of the compact sub-algebra of 𝔰𝔩(2,). One may expect that the rest of the representation will decompose into a direct sum of finite-dimensional representations of this algebra, with each integral spin appearing once. We will demonstrate at the end of this Section that the actual story is slightly more complicated, but reduces to P-R-decomp when q→ 1. §.§ Quantum group representations from Schur quantisation We will now review and extend the discussion of the Schur quantization representation of [U_q(𝔰𝔩_2),ρ]. 
In Section <ref> we gave an auxiliary presentation of the representation by finite difference operators on the Hilbert space L^2(× S^1),[To simplify the notation we often do not distinguish the operators representing A_ from the generators of the abstract algebra A_.] π(E)= v^-1u_-/^-1-, π(K)=v, π(F)=u_+/-^-1, where u_± can be represented as u_+=(1+μ v)u, u_-=u^-1(1+μ^-1v), in terms of operators u, v satisfying the Weyl-algebra uv=^2vu defined as u g_n(θ)=g_n+1(θ-ħ), v g_n(θ)=^ne^θg_n(θ). We are here representing elements of L^2(× S^1) by collections (g_n)_n∈ of functions g_n∈ L^2(S^1) such that ∑_n∈‖ g_n‖^2_L^2(S^1)<∞. This should be compared with the Mellin transform of the 𝒫_ϑ representation. It is useful to parameterize μ=-^2ϑ, and let us note that the representation introduced above is equivalent to a representation of the following form vf_n(p)=^2mf_n(p), uf_n(p)=f_n+1(p- ), m=1/2(n+ p). One may note that the representation Qdiff-Uqsl2-mod can be restricted to functions f_n(p) which satisfy f_n(p+1/log)=f_n(p). Introducing the notation J=-1/2+ϑ leads to a representation of U_q(𝔰𝔩_2) by finite difference operators of the form μ𝖤_ f_n(p) =[J+1-m]f_n-1(p+), 𝖥_ f_n(p) =[J+1+m]f_n+1(p-), 𝖪_ f_n(p)=^2mf_n(p), [x]:=1-^2x/1-^2. It is easy to see that the representation Qdiff-Uqsl2 reduces to the representation 𝖤f_n(p) =(J+1-m)f_n-1(p+), 𝖥f_n(p) =(J+1+m)f_n+1(p-), 𝖧f_n(p)=mg_n(p), of 𝔰𝔩_2 in the limit ħ→ 1. In order to compare the representation Qdiff-Usl2 with the principal series representations of SL(2,), let us note that the Mellin transformation f_n(p):=∫_d^2x e^ n (x)|x|^-2(j+1)+ pf(x)=∫_d^2x/|x|^2j+2 x^1/2( p+n)x̅^1/2( p-n) F(x), maps the finite difference operators 𝖤, 𝖥 and 𝖧 to the differential operators ℰ, ℱ and ℋ generating the principal series 𝒫_ϑ of SL(2,), respectively. Conversely, we can do a Fourier transform on L^2(× S^1) and diagonalize the action of u, with μ v^-1 acting by a rescaling. Then π(E) is essentially a q-derivative with respect to u and is a natural deformation of ℰ. The other generators are identified with q-differential operators which deform ℋ and ℱ. §.§.§ Spherical vectors Suppose that we were simply given the representation of E, F and K on L^2(× S^1) and a choice of the automorphism ρ. We could then define the action of [U_q(𝔰𝔩_2),ρ] by acting on E, F and K with ρ, and taking Hermitean conjugates. The corresponding spherical vector can be represented by a wave-function of the form g_n(θ)=δ_n,0φ_ S^(θ), where the condition on n follows from K|1⟩=K|1⟩. Choosing ρ=ρ_ S, we will find (-^-1)F g_n(θ) =(1+μ e^θ)δ_n+1,0 φ_ S^(θ+ħ), (-^-1)Fg_n(θ) =μ e^θ(1+μ^-1 e^θ)δ_n+1,0 φ_ S^(θ-ħ). The condition Fg_n(θ)=Fg_n(θ) can be solved by choosing φ_ S(θ)= ∏_k=0^∞1/(1+^2k+1μ^-1 e^-θ)(1+^2k+1μ^-1 e^θ). This coincides with the standard expression for the spherical vector and we recover the structure of Schur quantization. The same representation of E, F and K can also be promoted to a representation of other *-algebra doubles such as [U_q(𝔰𝔩_2),ρ_0]. It is not difficult to find wavefunctions which satisfy modified spherical conditions. Indeed, the theta function ϑ_(v)=∏_k=0^∞(1+^2k+1 v)(1+^2k+1 v^-1). commutes with the the tilde generators and satisfies u ϑ_(v)=∏_k=0^∞(1+^2k+3 v)(1+^2k-1 v^-1) u = ϑ_(v) ^-1 v^-1 u . Then the wave-function ϑ_(v)g_n(θ) satisfies the constraints for a spherical vector for [U_q(𝔰𝔩_2),ρ_0]. A more general product of θ functions would be appropriate for ρ_n with n <0. 
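The key step in this argument is the commutation property of the theta function stated above. Since conjugation by u replaces v by r^2 v (writing r again for the square root of q), the statement u ϑ(v) = ϑ(v) r^-1 v^-1 u reduces to the functional equation ϑ(r^2 v) = r^-1 v^-1 ϑ(v). A minimal sympy sketch, with the infinite product truncated at K factors and the two resulting boundary factors kept explicitly, is the following; the truncation order is an illustrative choice.

```python
# Check of the functional equation behind  u theta(v) = theta(v) r^-1 v^-1 u:
# with the product truncated at K factors the identity
#   theta_K(r^2 v) (1 + r v)(1 + r^(2K+1)/v)
#     = theta_K(v) (1 + 1/(r v))(1 + r^(2K+3) v)
# holds exactly; as K -> infinity the boundary factors drop out and one
# recovers  theta(r^2 v) = r^-1 v^-1 theta(v).
import sympy as sp

r, v = sp.symbols('r v')
K = 3  # truncation of the infinite product (illustrative)

def theta_trunc(arg):
    out = sp.Integer(1)
    for k in range(K + 1):
        out *= (1 + r**(2*k + 1) * arg) * (1 + r**(2*k + 1) / arg)
    return out

lhs = theta_trunc(r**2 * v) * (1 + r*v) * (1 + r**(2*K + 1)/v)
rhs = theta_trunc(v) * (1 + 1/(r*v)) * (1 + r**(2*K + 3)*v)
print('truncated identity holds:', sp.simplify(sp.expand(lhs - rhs)) == 0)
```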
Solving the spherical conditions for n>1, instead, seems to require negative powers of the theta function, introducing poles into the wave-function of the spherical vector. It therefore seems unlikely that spherical vectors satisfying all relevant conditions can exist for [U_q(𝔰𝔩_2),ρ_n], with n>1. 
§.§.§ Positive traces 
As discussed at the beginning of Section <ref>, there is a direct correspondence between spherical unitary representations of the Schur double of an algebra A, and positive traces on A. In order to represent the corresponding positive traces explicitly, we may introduce a grading ν on U_q(𝔰𝔩_2) by counting the powers of E positively, and the powers of F negatively. The traces are supported on the component with grade zero, which can be represented as functions a=a_0(K), with a_0 being a Laurent polynomial. The positive traces associated to the different choices ρ_ S^, ρ_0^ can now be represented as expectation values of a=a_0(K) defined by the spherical vectors Tr(a) =∫_0^2πdW(θ) a_0(e^iθ), dW(θ)={ |φ_ S^(θ)|^2dθ for twist ρ_ S^, |ϑ_(e^i θ)φ_ S^(θ)|^2dθ for twist ρ_0. In this way it becomes fully explicit how a change of the automorphism ρ is reflected by a change of the measure in the integral representations of the positive traces. Positive traces on the central quotient of U_q(𝔰𝔩_2) have been classified in <cit.>. It seems likely that these results can help in classifying the spherical unitary representations of different *-algebra doubles. Physically, the extra theta functions in the integral can be interpreted as the contribution of extra surface defect insertions. 
§.§ Comparison with other definitions of the quantum Lorentz group 
A quantum group called the quantum Lorentz group was first constructed in <cit.>. The classification of its unitary representations has been found in <cit.>, and the harmonic analysis of this quantum group was developed in <cit.>. It has been demonstrated in <cit.> that there exist other quantum deformations of the group SL(2,ℂ). A classification of quantum deformations of the Lorentz group was given in <cit.>. We shall here compare the quantum Lorentz group from Schur quantisation to the quantum group defined in <cit.>, which is the quantum deformation of the Lorentz group that has attracted most attention up to now, and which has been used in a previous approach to the quantisation of complex Chern-Simons theory <cit.>. It should be noted that previous studies of quantum Lorentz groups have often focused attention on quantum deformations Fun( SL_(2,ℂ)_ℝ) of the algebra of functions on SL(2,ℂ)_ℝ. We have so far mainly discussed the quantum deformations U_q(𝔰𝔩(2,ℂ)_ℝ) of the universal enveloping algebra of the Lie algebra of SL(2,ℂ)_ℝ. While it is certainly natural to expect that the solutions to these two problems are pairwise related by quantum group dualities, it will require further work to establish the relations in detail. The following discussion will therefore restrict attention to some aspects where a direct comparison is possible on the basis of the known results. 
§.§.§ The quantum Lorentz group of Podleś and Woronowicz 
The quantum Lorentz group considered in <cit.> is related by quantum group duality to a quantum deformation of U_q(𝔰𝔩(2,ℂ)_ℝ)_ PW which is isomorphic to U_q(𝔰𝔲(2))⊗Pol(SU_q(2)) as a vector space, with 
* U_q(𝔰𝔲(2)) being the real form of the Hopf algebra Û_q(𝔰𝔩_2) having generators e, f, and k, and relations ke= ek, kf=^-1fk, [ e,f ]=k^2-k^-2/-^-1, star-structure k^∗=k, e^∗=^-1f, f^∗= e, and co-product Δ(e)=e⊗ k+k^-1⊗ e, Δ(f)=f⊗ k+k^-1⊗ f, Δ(k)=k⊗ k, 
* and Pol(SU_q(2)) is the Hopf algebra with generators a, b, c and d, relations ab=ba, ac=ca, bd=db, cd=dc, bc=cb, ad-da=(^-1-)bc, ad-^-1da=1, star-structure a^∗=d, b^∗=-^-1c, c^∗=- b, and co-product Δ(a)=a⊗ a+b⊗ c, Δ(c)=c⊗ a+d⊗ c, Δ(b)=b⊗ d+a⊗ b, Δ(d)=c⊗ b+d⊗ d. 
The algebra structure on U_q(𝔰𝔲(2))⊗Pol(SU_q(2)) defined in <cit.> also involves the mixed relations kc= ck, kb=^-1bk, ka=ak, kd=dk, [e,c]=0, [e,b]=^-1(ka-k^-1d), ae- ea=k^-1c, ed- de=ck, [f,b]=0, [f,c]=(kd-k^-1a), fa-^-1af=bk, df-^-1fd=k^-1b. Central elements of this algebra can be constructed as Ω_+=+(-^-1)eb+^-1ka+ k^-1d, Ω_-=-^-1(-^-1)fc+^-1k^-1a+ kd. Our goal is to compare this version of the quantum Lorentz group to the quantum group U_q(𝔰𝔩(2,ℂ)_ℝ)_ S from Schur quantisation. We shall use the notation 𝔇_ PW for the complex algebra having generators a,b,c,d,e,f,k, and relations efk-rels-def, abcd-rel and q-Lorentz-mixed, and U_(𝔰𝔩(2,ℂ)_ℝ)_ PW for the quantum deformation of U(𝔰𝔩(2,ℂ)_ℝ) defined as a real form of 𝔇_ PW using the star structures efk-star and abcd-star above. 
§.§.§ Algebraic structure of the principal series of U_(𝔰𝔩(2,ℂ)_ℝ)_ PW 
The interpretation of U_(𝔰𝔩(2,ℂ)_ℝ)_ PW as a quantum deformation of the Lorentz group can be supported in particular by comparing the structure of its unitary representations described in <cit.> to the algebraic structure P-R-decomp of the principal series representations of SL(2,ℂ). As a preparation for a similar analysis in the case of U_(𝔰𝔩(2,ℂ)_ℝ)_ S we shall here outline a simple approach for the case of spherical principal series representations. Spherical principal series representations of U_(𝔰𝔩(2,ℂ)_ℝ)_ PW can be generated by the action of Pol(SU_q(2)) on a vector v_0 transforming trivially under the sub-algebra U_q(𝔰𝔲(2)) of U_(𝔰𝔩(2,ℂ)_ℝ)_ PW. We shall be interested in representations 𝒫_ϑ, having a diagonal action of the Casimir generators Ω_± with eigenvalue 2cos(2ħϑ). We are going to argue that this implies a structure of the representation of the following form 𝒫_ϑ,≃⊕_j∈ℤ_≥ 0ℛ_j,, with ℛ_j, being irreducible (2j+1)-dimensional representations of U_q(𝔰𝔲(2)). To see this, one may first note that the relations q-Lorentz-mixed imply that v^j_j:=c^j v_0 satisfies the highest weight condition e v^j_j=0. Acting with f^j-m on v^j_j allows one to define vectors v^j_m, m=-j,…,j, generating ℛ_j,. This will allow us to establish princ-decomp inductively. In order to understand the recursive structure, let us consider the subspace ℛ_j,^+ generated by linear combinations of vectors of the form gv^j_m, with g∈{a,b,c,d} and m=-j,…,j. The space ℛ_j,^+ is 3(2j+1)-dimensional since Ω_+v^j_m=ω_+v^j_m implies a relation between av^j_m, bv^j_m and dv^j_m. It decomposes into eigenspaces of k with eigenvalue ^n, n∈ℤ. The eigenspace with eigenvalue ^j+1 is one-dimensional, generated by c^j+1v_0. The subspace with eigenvalue ^j is two-dimensional, spanned by the vectors av^j_j and dv^j_j. It contains the vector v^j_j=^j+1 av^j_j+^-j-1 dv^j_j satisfying the highest weight condition ev^j_j=0. 
One may note, however, that v^j_j=Ω_+ v^j_j=ω_+v^j_j, which is proportional to v^j_j. In a similar way one may see that the eigenspace with eigenvalue ^j-1 is three-dimensional, and contains the vector v^j-1_j-1. Using these observations one may easily see that 𝒫_ϑ, contains each ℛ_j, only once. §.§ Algebraic structure of the spherical principal series of U_q(𝔰𝔩(2,)_)_ S We are next going to investigate the algebraic structure of the subspace[Which may be expected to be dense in L^2(× S^1).] U|1⟩ generated by the action of U= U_q(𝔰𝔩_2) on the spherical vectors |1⟩ in L^2(× S^1) defined in Section <ref>. Comparison with the principal series of SL(2,) suggests that the subspace U|1⟩ is dense in L^2(× S^1). We are going to find a structure which is similar to, but also different from the structure of the spherical principal series of U_(𝔰𝔩(2,)_)_ PW. §.§.§ Quantum analogs of the compact sub-algebras For concisenes, denote U= U_q(𝔰𝔩_2). To begin with, let us introduce two commuting copies U_l, U_r of U, and observe that K=K_l, E=E_l, F=F_l, K^-1=K_r, E=-E_rK_r, F=-K_r^-1F_r, defines a map from U⊗ U_ op into U_l⊗U_r. The definining conditions of |1⟩, combined with Xrl-wtX, imply Ê |1⟩=(E_l ^+K_l^-1E_r^)|1⟩=0, F̂ |1⟩=(F_l ^K_r+ F_r^)|1⟩=0, K̂ |1⟩=K_lK_r|1⟩=|1⟩. We see that the spherical vector transforms trivially under the sub-algebra U_q^+(𝔰𝔩_2) generated by Ê, F̂, and K̂. One may note that Ê, F̂, and K̂ are defined by taking the co-products Δ of E, F, and K, respectively. The opposite co-product Δ' is defined by exchanging the factors in the tensor product. We may use this observation to identify another sub-algebra U_^-(𝔰𝔩_2) of U_l⊗ U_r acting trivially on |1⟩, generated by Ê', F̂', and K̂', K̂'=K_lK_r, Ê'=^-1E_r+ E_l^K_r^-1, F̂'= K_lF_r ^+ ^-1F_l^. The factors of ^± 1 are needed to satisfy Ê'|1⟩=0=F̂'|1⟩. They can be introduced into the definition of the opposite co-product by means of the automorphism of U_(𝔰𝔩_2) scaling E and F inversely. As the definition of the generators Ê, F̂ and K̂ is related to the co-product of U_(𝔰𝔩_2), while Ê', F̂' and K̂' are similarly related to the opposite co-product, it follows that the isomorphism between U_^+(𝔰𝔩_2) and U_^-(𝔰𝔩_2) is described by the universal R-matrix. We will see that these structures offer a replacement for the compact sub-group in the quantum deformation of the Lorentz group from Schur quantisation. §.§.§ Module structure The identification of the algebraic structure of U⊗ U_ op|1⟩ will be facilitated by the following observations. While square-roots of K_r and K_s are not well-defined in the representation on L^2(S^1×), it is possible to define square-roots denoted as k_lk_r and k_lk_r^-1 of K_lK_r and K_l^K_r^-1, respectively. This allows us to define a=k_l^k_r^-1, c=(1-^2)k_l k_rE_r, b=(1-^-2)F_l^(k_lk_r)^-1, d=(k_l^k_r^-1)^-1-(-^-1)^2F_l^(k_l^k_r^-1)^-1E_r^. Formulae EFKhat and efk-def-mod-mod define an embedding of 𝔇_ PW into U_l⊗U_r.[A closely related observation was made in the Appendix of <cit.>.] It is not hard to show that ( a b c d )|1⟩=( K (1-^-2)F (^2-1)KE ω-^2 K )|1⟩, using that the Casimir C=(-^-1)^2FE+ K+^-1K^-1 acts diagonally with eigenvalue ω =2cos(2ħϑ). We note that only positive powers of K appear in these expressions, and that a |1⟩, b|1⟩, c|1⟩ and d|1⟩ generate a three-dimensional representation of the sub-algebra U_(𝔰𝔩_2) generated by e, f and k. 
The arguments used in Section <ref> can easily be adapted to show that the subspace 𝒫_ϑ,^- of U|1⟩ generated by KE, F, and K decomposes as module of U_(𝔰𝔩_2) in the same way as the right side of princ-decomp. Exchanging the indices l and r, and taking into account the scaling by factors of noted above, defines another realisation of 𝔇_ PW by combining a'=k_l^-1k_r^, c'=(1-^2) k_lk_r E_l, b'=(1-^-2) F_r^(k_lk_r)^-1, d'=k_l^k_r^-1-(-^-1)^2^2k_l^E_l^F_r^k_r^-1, with EFKprimehat. As above we may compute ( a' b' c' d' )|1⟩=( K^-1 (^-1-)K^-1F ^2(1-^2)E ω-^2 K^-1 )|1⟩ Only negative powers of K appear in these expressions. Considering the subspace 𝒫_ϑ,^- of U|1⟩ generated by E, FK^-1, and K^-1, one may again use the arguments from Section <ref> to show that the vector space U_-|1⟩ also decomposes as module of Û_(𝔰𝔩_2) generated by e', f' and k' as the right side of princ-decomp. Taken together we find U|1⟩≃ |1⟩⊕⊕_j∈_> 0(ℛ_j,^+⊕ℛ_j,^-), where ℛ_j,^+ and ℛ_j,^- are (2j+1)-dimensional representations of the two sub-algebras U_^+(𝔰𝔩_2) and U_^-(𝔰𝔩_2), respectively. As the difference between Δ and Δ' disappears in the classical limit, we expect that the classical limits of ℛ_j,^+ and ℛ_j,^- will coincide, reproducing the direct summands R_j in the decomposition P-R-decomp of the spherical principal series representations of SL(2,). §.§ Existence of inequivalent quantum deformations of SL(2,) Our results above already reveal both similarities and differences between the two quantum deformations U_(𝔰𝔩(2,)_)_ PW and U_q(𝔰𝔩(2,)_)_ S of U(𝔰𝔩(2,)_) discussed in this paper. The quantum groups U_(𝔰𝔩(2,)_)_ PW and U_q(𝔰𝔩(2,)_)_ S preserve different features of the classical Lie-algebra 𝔰𝔩(2,)_. While U_(𝔰𝔩(2,)_)_ PW preserves many features following from the Iwasawa decomposition of SL(2,), the algebra U_q(𝔰𝔩(2,)_)_ S from Schur quantisation is naturally associated to the representation of 𝔰𝔩(2,)_ as a real form of 𝔰𝔩_2⊕𝔰𝔩_2. While a quantum analog of the Lie algebra 𝔰𝔲(2)_ of the compact subgroup of SL(2,) is built into the definition of U_(𝔰𝔩(2,)_)_ PW, it has a more subtle counterpart in the case of U_q(𝔰𝔩(2,)_)_ S. One may note, on the other hand, that the star structure representing 𝔰𝔩(2,)_ as a real form of 𝔰𝔩(2,)_ has a very simple counterpart in the definition of U_q(𝔰𝔩(2,)_)_ S, while the star structure defining U_(𝔰𝔩(2,)_)_ PW is quite different.[To see the difference clearly, one may note that the star structure defining U_(𝔰𝔩(2,)_)_ S maps the generator a defined in efk-def-mod-mod to its inverse, while the star structure of U_(𝔰𝔩(2,)_)_ PW maps the generator a to the generator d which does not commute with a.] Existence of inequivalent quantum deformations of U(𝔰𝔩(2,)_) is a phenomenon that we expect to be related by quantum group duality to the existence of the inequivalent deformations of the algebra of functions on SL(2,) classified in <cit.>. The deformed algebras of functions Pol(SL_(2,)) considered in <cit.> have generators α, β, γ, δ associated to the matrix elements of the two-dimensional representation of SL(2,). The deformations classified in <cit.> differ only in the mixed relations between α, β, γ, δ and α^∗, β^∗, γ^∗, δ^∗. The family of star-algebras denoted G_q,t in <cit.> contains a very natural candidate for the quantum group dual to U_q(𝔰𝔩(2,)_)_ S associated to the parameter value t=1, and characterised by mutual commutativity of the sub-algebras generated by α, β, γ, δ, and α^∗, β^∗, γ^∗, δ^∗, respectively. 
This feature strongly suggests that the star-algebras G_,1 are the quantum deformations of Pol(SL_(2,)) which are relevant in the context of Schur quantisations. The corresponding quantum groups clearly deserve further study. § SCHUR QUANTIZATION AS COMPLEX QUANTIZATION OF A CHARACTER VARIETY. The relations with Kapustin-Witten theory reviewed in the Introduction suggest a dual description of the Schur indices of theories of class in terms of the quantisation of character varieties. The goal of this section is to present a self-contained discussion of the complex quantization of 𝔰𝔩_2 character varieties (SL(2),C) in Fenchel-Nielsen coordinates and a comparison with Schur quantization of the corresponding class Lagrangian gauge theories. We will review how the complex quantisation of character varieties is related to complex Chern-Simons theory in Section <ref>. The quantization of character varieties is well-understood at the algebraic level. Observables are built from the quantum skein algebra Sk_(C,G). The theory of unitary representations of *-algebras which can be built from Sk_(C,G) is much less understood, though one should recall that the KW lift of brane quantization <cit.> provides an useful perspective on the various available options. See Section <ref> for a discussion. A possibility which has been explored in depth is quantum Teichmüller theory, available for ||=1, which quantizes the Teichmüller locus in the character variety. Here we are instead interested in the case where is real, which has been studied less, and the phase space is the whole complex character variety, treated as a real phase space. The *-algebra of observables is thus the *-algebra double 𝔇_(C)=Sk_(C,SL(2))×Sk_(C,SL(2))^ , with a * structure which exchanges the two factors. In the language of the rest of the paper, we consider examples where ρ=1. The main new features of the representations to be studied here originate from the existence of a spherical vector. This section will offer a self-contained perspective on the construction of the spherical vector in a representative example. §.§ Complex quantisation of the character variety – Case of C=C_0,4 In order to illustrate the main new features arising in the regime -1<<1 of interest here, we shall pick a sufficiently typical example associated to C=C_0,4, allowing us to be reasonably brief and explicit at the same time. §.§.§ Background on the character variety Recall that a set of generators for the algebra of holomorphic functions on the character variety is provided by the trace functions W_R,ℓ. This algebra carries a canonical Poisson structure. In order to prepare the discussion of the quantisation for the case of C=C_0,4=ℙ^1∖{z_1,z_2,z_3,z_4}, let us note that the algebra of trace functions has three generators in this case, denoted W, H, and D, and associated to simple closed curves encircling only (z_1,z_2), (z_1,z_3) and (z_2,z_3), respectively, The trace functions W, H, and D satisfy the equation of the the Klein cubic P_K(W,H,D)=0, with P_K being a cubic polynomial. While the precise form of P_K will not be needed explicitly, one should bear in mind that the coefficients of P_K depend on four complex numbers μ_r parameterising the traces of the holonomies L_r around the punctures z_r as L_r=μ_r+μ_r^-1 for r=1,2,3,4, respectively. Rational parameterisations of the Klein cubic can be associated to pants decompositions of C_0,4. 
Considering the pants decomposition defined by a curve separating z_1 and z_2 from z_3 and z_4, for example, one can solve the equation P_K(W,H,D)=0 in terms of two parameters u and v by setting W=v+v^-1, H=c_+(v) u^2+c_0(v)+c_-(v) u^-2, D=c_+(v) v u^2+c_0(v)+ c_-(v) v^-1u^-2, using the functions c_+, c_0 and c_- defined as c_+(v)=1, c_-(v)= ∏_s,s'=±(1+m_1^sm_2^s'v)(1+m_3^sm_4^s'v)/(1-v^2)^4, c_0(v)=(v+v^-1)(L_1L_3+L_2L_4)-2(L_2L_3+L_1L_4)/(v-v^-1)^2, It will be useful to note that replacing v by v^-1 and u by u^-1(c_-(v)/c_+(v))^1/2 leaves the expressions for W, H and D invariant. This means that an open dense set in (C_0,4,SL(2)) can be parameterised by a _2-quotient of the space ^2 with coordinates u and v. Let us furthermore note that u and v can be represented as exponential functions of Darboux coordinates for the canonical Poisson structure of (C_0,4,SL(2)) often referred to as coordinates of Fenchel-Nielsen type. §.§.§ Quantisation, the algebraic level The algebraic level of the quantisation of the character varieties has been extensively studied, prompting us to be brief. The algebra Sk_(C_0,4,SL(2)) has generators denoted as W, H, D, satisfying a deformed version of the equation of the Klein cubic of the form P_K,(W,H,D)=0, with P_K, being a polynomial in non-commutative variables which is known explicitly. We may start by introducing the algebra ⊗^, defined by the relations uv= vu, u̅v̅=^-1v̅u̅, vu̅=u̅v, v̅u=uv̅. Out of these generators we can formally[We are postponing a discussion of the analytic aspects for a moment.] construct a representation of Sk_(C_0,4,SL(2)) by defining W=v+v^-1, H=u C_+(v) u+C_0(v)+u^-1 C_-(v) u^-1, D=u v C_+(v) u+C_0(v)+u^-1 v^-1 C_-(v) u^-1, using the functions C_+, C_0 and C_- defined as C_+(v)=1, C_-(v)= ∏_s,s'=±(1+m_1^sm_2^s'v)(1+m_3^sm_4^s'v)/(1-^2v^2)(1-v^2)^2(1-^-2v^2), C_0(v)=(v+v^-1)(L_1L_3+L_2L_4)-(+^-1)(L_2L_3+L_1L_4)/( v-^-1v^-1)(^-1v- v^-1). These formulae are related by a similarity transformation to the difference operators representing the action of Verlinde line operators on Virasoro conformal blocks <cit.>. It can be verified directly that the relations P_K,(W,H,D)=0 are satisfied. A representation of the algebra Sk_(C_0,4,SL(2))^ can furthermore be generated by the operators W, H and D defined by replacing u,v by u̅,v̅ in the formulae lineC04. The generators W, H and D clearly commute with W, H and D. One should note that the formulae lineC04 can be used to define operators W, H and D that are formally normal in any unitary representation of ⊗ representing the generators u, v, u̅, v̅ such that u^†=u̅, v^†=v̅. Combined with lineC04 we then find the relations W^†=W, H^†=H and D^†=D. §.§.§ Definition of the Hilbert space The basis of our construction will be a representation of an auxiliary algebra of Weyl-type , introduced in <cit.> in a closely related context. We are going to define a representation of the algebra ⊗, defined by the relations Weyl-Weylop, represented by densely defined unbounded normal operators v, u, v̅, and u̅ on the Hilbert space _=L^2(S^1×), defined as Φ={f_m∈ L^2(S^1);m∈} such that || Φ||^2_:=∑_m∈|| f_m||^2_L^2(S^1)<∞. Operators v, u, v̅, and u̅ representing Weyl-Weylop can be defined as u f_m(θ)=f_m+1(θ-ħ2), v f_m(θ)=^m/2e^θf_m(θ), u̅f_m(θ)=f_m-1(θ+ħ2). v̅f_m(θ)=^m/2e^-θf_m(θ), =e^-ħ. One may note that the operators defined in W-rep satisfy v^†=v̅, u^†=u̅. 
The auxilliary Hilbert space _ will be used to define the Hilbert space (C_0,4) by taking a _2-quotient of _ representing a quantised version of the redundancy of the parameterisation of (C_0,4,SL(2)) in terms of the Fenchel-Nielsen type coordinates u and v. In order to find the proper quantised analog of the symmetry v↦ v^-1 and u^2↦ u^-2c_-(v)/c_+(v) reflecting this redundancy, let us note that the formulae lineC04 are invariant under the symmetry v↦v^-1, u^2↦ u^-1·C_-(v)/C_+(v)· u^-1. This symmetry is generated by a unitary operator on _ :=φ_0(v)/φ_0(v̅)·ϖ, where ϖ is the parity operator satisfying ϖ=ϖ^-1, ϖ·v·ϖ=v^-1, ϖ· u·ϖ=u^-1, and φ_0 is a function satisfying the difference equation φ_0( v)/φ_0(^-1v)=C_-(v)/C_+(v)= ∏_s,s'=±(1+vm_1^sm_2^s')(1+vm_3^sm_4^s')/(1-^2v^2)(1-v^2)^2(1-^-2v^2). Indeed, using the relations Weyl-Weylop we find ^-1· u^2·= ϖ· u·φ_0(v)/φ_0(^-1 v)· u·ϖ=u^-1·C_-(v)/C_+(v)· u^-1, ^-1·u̅^2·= ϖ·u̅·φ_0(v̅)/φ_0(^-1 v̅)·u̅·ϖ= u̅^-1·C_-(v̅)/C_+(v̅)·u̅^-1, using C_+(v)=C_+(v^-1). Note that the solution to diffeq-phi0 is given by the function φ_0(v)=(^2v^2;^2)_∞(v^2;^2)_∞/∏_s,s'=±(- v m_1^sm_2^s';^2)_∞(- vm_3^sm_4^s';^2)_∞. These preparations allow us to complete the definition of the representation by setting (C_0,4):={ Ψ∈_ ; Ψ=0 }, =1/√(2)(1-). It seems very likely that formulae lineC04, W-rep define a representation of Sk_(C_0,4,SL(2)) on (C_0,4). In order to establish this claim one needs to address the unboundedness of the operators generating Sk_(C_0,4,SL(2)). This unboundedness not only comes from the unboundedness of the operators representing u and v, one also needs to take into account singularities from vanishing denominators in the formulae for C_i(v), i=-,0,+. We will later demonstrate that there exists a dense domain within (C_0,4) on which the unbounded operators generating 𝔇_(C_0,4,SL(2)) can be defined. The operators defined in this way admit a normal extension to a dense domain within (C_0,4), satisfying W_a^†=W_a . One may, in particular, be worried that the poles of C_i(v), i=-,0,+, could spoil normalisability of HΨ, for example. In this regard it seems encouraging to note that the wave-functions representing elements of (C_0,4) satisfy certain vanishing conditions at v=1. Existence of a normal extension is easy to prove in the case of W, being realised as a pure multiplication operator in the representation lineC04, W-rep. §.§.§ Dependence on choice of pants decomposition The representation lineC04 clearly depends on a choice of a pants decomposition. There are three basic pants decompositions of C_0,4, defined by contours separating the pairs (z_1,z_2), (z_2,z_3) and (z_1,z_3) from the remaining two punctures, respectively. For each of these pants decompositions one can define representations of 𝔇_(C_0,4) by using formulae obtained from lineC04 by appropriate permutations of the indices 1,2,3,4. We conjecture that these three representations are all unitarily equivalent to each other. The next paper in this series <cit.> will outline the construction of unitary operators relating the three representations obtained in this way. For now one may note that this amounts to the solution of the spectral problems for the operators H and D within the representation above. 
We may anticipate, in particular that the unitary operator diagonalising H, for example, can be represented in the form Ψ_s(θ,m)=∫_S^1dθ∑_m'∈F_μ([ θ θ'; m m' ]) Ψ_t(θ',m'), with μ=(μ_1,…,μ_4), and ℱ_θ',m'^μ(θ,m)=F_μ([ θ θ'; m m' ]) being an eigenfunction of H with eigenvalue h=v'+1/v', where v'=^m'/2e^θ'. Existence of this unitary operator implies that H is normal, as Conjecture 1 predicts. In a way that is analogous to the quantum Teichmüller theory, one may use the unitary operators representing the changes of pants decomposition in order to define a representation of the braid group of C_0,4, and an analog of a modular functor. §.§.§ Spherical vector A central role is played in this representation by the spherical vector |1⟩∈_ satisfying W_a^ |1⟩=W_a^†|1⟩, ∀ a∈Sk_(C_0,4,SL(2)). We will represent |1⟩ by wave-functions f_m(θ)=⟨θ,m |1⟩. We claim that the unique solution to these conditions is of the form f_m(θ)=δ_m,0ϕ_0(θ), where ϕ_0(θ)=φ_0(e^θ), with function φ_0 defined in psi0-expl. In order to verify that Phi0C04 solves H-barH let us note, on the one hand, Hf_m(θ) =C_-(^-1v)δ_m-2,0ϕ_0(θ-ħ)+C_0(v)δ_m,0ϕ_0(θ)+ δ_m+2,0C_+(v)ϕ_0(θ+ħ) =δ_m-2,0C_-(e^θ)ϕ_0(θ-ħ)+δ_m,0C_0(e^θ)+ δ_m+2,0C_+(e^θ)ϕ_0(θ+ħ), using v f_m(θ)=^m/2e^θf_m(θ). We have, on the other hand, H^†= C_+(^-1v̅) u̅^2 +C_0(v̅)+ C_-(v̅) u̅^-2, Using C_±(e^-θ)=C_±(e^θ) this implies H^†f_m(θ)=δ_m+2,0C_-(e^θ)ϕ_0(θ-ħ)+δ_m,0C_0(e^θ) +δ_m-2,0C_+(e^θ)Φ_0(θ+ħ). Equation H-barH is therefore equivalent to C_-(e^θ)ϕ_0(θ-ħ)=C_+(e^θ)ϕ_0(θ+ħ). Representing ϕ_0(θ) as ϕ_0(θ)=φ_0(e^θ), and using the explicit expressions for C_±(v) given above, we find the following difference equation for φ_0: φ_0(^2v)/φ_0(v)=C_-( v)= ∏_s,s'=±(1+ v m_1^sm_2^s')(1+ vm_3^sm_4^s')/(1-^4v^2)(1-^2v^2)^2(1-v^2). It remains to notice that equation diffeq-c is solved by Phi0C04. The spherical vector is indeed contained in (C_0,4), as follows from 𝖱 |1⟩=|1⟩, and the finiteness of the norm ‖Φ_0‖^2=∫_S^1dv/v(v-v^-1)^2 (^2v^2;^2)_∞^2(^2v^-2;^2)_∞^2/∏_s,s_1,s_2=±(- v^sm_1^s_1m_2^s_2;^2)_∞(- v^sm_3^s_1m_4^s_2;^2)_∞. We claim that the vector |1⟩ is in the domain of all W_a∈Sk_(C_0,4,SL(2)). This can be verified for W, H, and D noting that the measure defined by the functions ϕ_0 cancels potentially non-integrable factors in C_i, i=-,0,+. We conjecture that this holds in general. A somewhat non-trivial claim is formulated as the following Conjecture: The vectors W_a|1⟩, a∈Sk_(C_0,4,SL(2)), span a dense subspace in (C_0,4). This conjecture is not at all obvious at this stage. Our next paper will introduce techniques for addressing this issue. §.§ Relation to the Schur quantisation There is a considerable freedom in the choice of representation of the skein algebra introduced above. Similarity transformations can be used to modify the representation by finite difference operators. This can be useful to reveal certain properties of the representation. We will here consider the example which facilitates the comparison with the Schur quantisation. We shall consider the similarity transformation W_a'=S^-1· W_a· S, with S=(vv̅)^-1/2ħlog(m_2m_3)(v^2;^2)_∞/(^2v̅^2;^2)_∞∏_s=±(- m_2^-1m_1^s v̅;^2)_∞ (- m_3^-1m_4^s v̅;^2)_∞/(- m_2^m_1^s v;^2)_∞ (- m_3^m_4^s v;^2)_∞. It is not hard to verify that the generators W', H', D', and W', H', D' defined in this way are represented by finite difference operators having a similar form as in lineC04, but with modified coefficient functions. 
Note, in particular that 1/(v^2;^2)_∞· u^2 · (v^2;^2)_∞ =u·(^2v^2;^2)_∞/(^-2v^2;^2)_∞· u =u·1/(1-^-2v^2)(1-v^2)· u , (v̅^2;^2)_∞·u̅^2 ·1/(^2v̅^2;^2)_∞ =u̅·(^4v̅^2;^2)_∞/(v̅^2;^2)_∞·u̅ =u̅·1/(1-v̅^2)(1-^2v̅^2)·u̅ . For H' and H' one thereby finds the expressions H'=D_+(v) u^2+C_0(v)+ D_-(v) u^-2, H'=D̅_+(v̅) u̅^2+C_0(v̅)+D̅_-(v̅) u̅^-2, using the functions D_± and D̅_± defined as D_+(v) =∏_s=±(1+ m_1^sm_2^ v)(1+ m_3^m_4^s v)/ m_2^m_3^(1-^2v^2)(1-v^2) =D_-(v^-1), D̅_+(v̅)= ∏_s=±(1+^-1 m_1^sm_2^-1 v̅)(1+^-1 m_3^-1m_4^s v̅)/m_2^-1m_3^-1(1-^-2v̅^2) (1-v̅^2) =D̅_-(v̅^-1). It had been argued in <cit.> that the difference operators appearing in the integral formulae for Schur indices of line operators should coincide with the operators representing the insertion of the same line operators in four-ellipsoid partition functions <cit.>. The latter are known to to be related to the difference operators representing the action of Verlinde line operators on Virasoro conformal blocks <cit.>. In order to compare the Hilbert space realisations of 𝔇_(C) coming from Schur quantisation and complex CS-theory, one may first compare the explicit formula Schur for the norm of |1⟩ with the Schur index. This coincides with the UV formula for the Schur-index in the N_f=4-theory as given in <cit.>. It has been shown in <cit.> that the K-theoretic Coulomb branches of theories of class coincide with the skein algebra Sk_(C_0,4,SL(2)). This has been verified in <cit.> by comparing the representation of Sk_(C_0,4,SL(2)) generated by the difference operators W', H', D' introduced above with the relevant special case of the more general formulae for the generators of the K-theoretic Coulomb branches obtained in <cit.>. § SCHUR QUANTIZATION OF COMPLEX CHERN-SIMONS THEORY The main topic of this section is Chern-Simons theory with complex (say simply-laced in this paper) gauge group G_ and imaginary level κ = i s, with action <cit.>: i s/2 S_CS() - i s/2 S_CS() . Here is a G_ connection and S_CS the standard Chern-Simons action.[One can consider more general levels (k+i s)/2 and (k-i s)/2 for integer k. Some of the constructions in this paper can be extended to that case. See e.g. <cit.>.] The choice of imaginary level means that we use as the symplectic form the imaginary part of the natural complex symplectic form on (G,C). As described in the introduction, we aim to describe this theory via Schur quantization of theories of class at = e^-π s^-1. The classical equations of motion of Chern-Simons theory require the complex connection to be flat. If space is a compact two-dimensional surface C, the theory has a finite-dimensional phase space: the moduli space (C,G) of flat G_ connections on C, equipped with a symplectic form proportional to i ∫_C [δ∧δ - δ∧δ] The classical phase space carries several structures reflecting the topological nature of the theory and which we would like to persist in the quantum theory: * The solutions of the equations of motions on a three-dimensional space-time M_3 with boundary C give a Lagrangian submanifold (M_3,G) consisting of flat G_ connections on C which extend to M_3. * The special case of M_3 being a mapping cylinder gives a representation of the mapping class group of C as Lagrangian correspondences. * Wilson lines for computed at fixed time along a path ℓ on C give classical observables W_R,ℓ≡_R Pexp∮_ℓ labelled by ℓ and a finite-dimensional representation R of G. These are holomorphic functions on (C,G). 
Other holomorphic functions W_a can be realized as “skeins” a of Wilson lines on C joined by intertwining tensors. The product and Poisson brackets on the phase space are local on C and closes within this class of functions. If we invert the direction of the path ℓ, we dualize the representation: W_R,ℓ = W_R^∨, ℓ^-1. * Wilson lines for give a second collection of classical observables W_R,ℓ≡_R Pexp∮_ℓ These are anti-holomorphic functions on (C,G). We use conventions where W_R,ℓ= W_R,ℓ^-1. Other anti-holomorphic functions W_a can be realized as skeins a. We have W_ρ(a) = W_a for an appropriately defined dual skein ρ(a). In these conventions, W_a = W_a if the connection is unitary. The Poisson bracket closes within this class of functions, which Poisson-commute with the holomorphic functions. All of this data only depends on the topology of the (sub)manifolds involved. The quantum theory should associate to C some Hilbert space _s(C,G) which quantizes (C,G). This Hilbert space should carry compatible actions of: * The mapping class group of C. * The quantized algebra of holomorphic Wilson line networks in C ×, isomorphic to the Skein algebra Sk_(C,G). Here we defined = e^-π/s and we will sometimes employ the notation _(C,G) for the Hilbert space _s(C,G). See <cit.> for a modern discussion. * The quantized algebra Sk_^-1(C,G) = Sk_(C,G)^op of anti-holomorphic Wilson lines in C ×, commuting with Sk_(C,G). We should have W_ρ(a)^† = W_a, so that the algebras are realized by normal operators. Here ρ is an automorphism of the Skein algebra, extended anti-linearly over the complex numbers. The quantization procedure should also produce a canonical collection of (possibly distributional) states |M_3⟩ in _s(C,G) given by a path integral over three-manifolds with boundary C, compatible with the above actions. More precisely, there is a combinatorial way to build “Skein modules” Sk_(M_3,G) for Sk_(C,G) which literally encode skeins W_m of Wilson lines in M_3 and their relations to skeins in C. Quantization should provide a state for every decoration of M_3 by a skein, i.e. a module map Sk_(M_3,G) ×Sk_(M_3,G)^op→_s(C,G) though the images |M_3;m, m⟩ could include distributional states.[In particular, there is no guarantee that the partition function on a three-manifold will be finite: the TFT is not fully extended.] Schur quantization in class precisely provide all of this data: * The algebra A_ coincides with Sk_(C,G) and thus the Hilbert space _ carries the desired actions of the Skein algebra. * The space of couplings of coincides with the space of complex structures of C and thus _ carries an unitary action of the mapping class group compatible related to the natural permutation action on Sk_(C,G). * Boundary conditions B(M_3,G) labelled by three-manifolds <cit.> give the desired states |B(M_3)⟩ in _. Decorations by boundary line defects provide |M_3;m, m⟩ with the expected properties. We will denote this theory as “complex Chern-Simons theory” or as “G_ Chern-Simons theory”. We should list other variants of Chern-Simons theory which may be confused with this theory: * Standard Chern-Simons theory with unitary gauge group G_c, the compact form of G_. This TFT has a quantized level k and a phase space (G_c,C) which consists of unitary flat connections. Here we only encounter (G_c,C) as a Lagrangian sub-manifold of (G,C) which we aim to quantize to a special state in the Hilbert space. * Chern-Simons theory with G_ gauge group, with G_ being some other real form of G_ and phase space (G_,C). 
A full definition of this theory, possibly extending the quantum Teichmüller theory <cit.>, is not quite available at this point. It will not play a role in this paper.[The space (G_,C) as a Lagrangian sub-manifold of (G,C) and its connected components have applications in (Lorentzian, positive curvature) 3d quantum gravity. The role of _c(G,C) is less clear.] * Analytically continued Chern-Simons theory with gauge group G. This is not actually a 3d theory: it describes general properties of path integrals with an S_CS() Chern-Simons action but no specified reality condition on . It can be formulated as a relative theory, living at the boundary of 4d Kapustin-Witten theory <cit.>. Analytically continued Chern-Simons theory can be an ingredient in the analysis of all the other Chern-Simons theories mentioned above, allowing one to embed them in 4d KW theory. The literature presents two very different quantization strategies <cit.> for complex Chern-Simons theory, which have important limitations and are difficult to compare with each other. See <cit.> for a review of the problem. These strategies are akin to the two sides of the Riemann-Hilbert correspondence: one describes the phase space in terms of bundles equipped with holomorphic connections (“de Rham”) and the other in terms of the associated representation of the fundamental group (“Betti”), in some respects following the paradigm of quantum Teichmüller theory. As discussed in the introduction, Schur quantization applied to theories of class is a variant of the second option which can bridge the gap between these two descriptions. Indeed, Schur quantization provides an Hilbert space equipped with the spherical cyclic vector |1⟩∈, which we identify as the quantization of the Lagrangian submanifold of unitary flat connections _c(G,C) ⊂(G,C). This vector acts as a Rosetta stone: it allows us relate this quantization strategy to the previous two by identifying analogous spherical vectors in the respective Hilbert spaces. §.§ A topological boundary condition from unitary flat connections We should briefly review the Chern-Simons interpretation of the state |1⟩ as being created by a topological boundary condition B_c. We define the boundary condition by restricting both the connection and the gauge transformations to lie in the maximal compact subgroup G_c at the boundary. This is possible because the potential boundary gauge anomaly restricted to the G_c subgroup is the difference between the holomorphic and anti-holomorphic levels and thus vanishes. At the level of the phase space, the boundary condition defines the Lagrangian submanifold _c of unitary (i.e. G_c) flat connections inside the moduli space of complex flat connections. Semiclassically, the state associated to this submanifold can be represented by the intersection of _c with the space of flat connections on a given bundle. By Narasimhan-Seshadri and generalizations, the intersection exists and is essentially unique if the bundle is stable. Locally, it can be described as the graph of a generating function. The intersection can be described in terms of local data as follows. Pick an unitary flat connection a and solve locally a_z̅ = g^-1∂̅g. The solutions associated to different local frames are related by left action of the transition functions of the G-bundle associated to the connection. The combination ρ = g g^† gives a map from the surface to the G_/G_c homogeneous space, twisted by the transition functions on the left and their hermitian conjugate on the right. 
We find that ρ satisfies the G_/G_c WZW equations of motion ∂∂̅ρ = ∂ρρ^-1∂̅ρ which imply conservation of holomorphic and anti-holomorphic currents ∂(ρ^-1∂̅ρ) = 0 ∂̅(∂ρρ^-1) = 0 Conversely, given a bundle we can solve for such a ρ and then the current ∂ρρ^-1 gives a holomorphic connection with unitary monodromy. The G_/G_c WZW action S_WZW, which is just the action for a sigma model with target G_/G_c, evaluated on a solution of the equations of motion as a function of the choice of bundle, gives the generating function for the space of unitary flat connections. Indeed, essentially by definition, δ/δ_z̅ S_WZW= J_z[ρ] = - i s/8 π_z[ρ] δ/δ_zS_WZW = J_z̅[ρ] =- i s/8 π_z̅ It is easy to argue that these statements will persist quantum-mechanically: the partition function Z_WZW for a G_/G_c WZW sigma model at imaginary level coincides with the wave-function which quantized _c.[For real level, this is a well-studied 2d CFT. We need to analytically continue these results.] The argument is simple: the wavefunction can be computed from a slab geometry, with B_c at one end and Dirichlet boundary conditions at the other end. The computation will not depend on the thickness of the slab, as the 3d theory is topological. When the slab is very thin, the 3d Chern-Simons theory reduces precisely to the 2d WZW model with target G/G_ and level is/2. A similar argument can be used to study the pairing of B_c to oper boundary conditions, leading to the partition function of the 2d Toda CFT. We expect the state |1 ⟩ created by B_c to be normalizable, as _c is compact. In terms of the WZW model, this means that the integral of |Z_WZW|^2 over (C,G) should converge. The WZW partition function is expected to be singular at the “wobbly locus” of (C,G), so this statement is rather non-trivial. The B_c boundary conditions support topological Wilson lines W_R,ℓ labelled by finite-dimensional representations R of G_c. These coincide with the image of both holomorphic and anti-holomorphic bulk lines associated to the same data. Skeins of Wilson lines added to the boundary define a more general collection of states |a⟩, coinciding with W_a |1⟩= W_a |1⟩. These states also do not depend on the complex structure on C. The mapping class group should act on them as it acts on the skeins themselves. In the language of the WZW model, the states |a⟩ should be given explicitly by partition functions of the WZW theory decorated by skeins W_a of Verlinde lines. Irregular singularities on C featured prominently in some of our examples and in quantum group applications. They complicate the semi-classical interpretation of the state |1⟩. Indeed, ρ^2 acts on the Stokes data of irregular singularities by rotating the Stokes lines by one full sector. In particular, it does not square to 1. The classical equation W_ρ(a) = W_a^∗= W_a^∗ implies the rather restrictive condition W_a = W_ρ^2(a) and thus does not appear to describe a real Lagrangian manifold in the real phase space. Instead, the condition W_a = W_a describes some complexified Lagrangian manifold. At first sight, there is some tension between this statement and the desired definition of |1⟩ as being created by the B_c boundary condition. The tension is only ”local” in C, though: the condition W_a = W_a is compatible with the monodromy data away from irregular singularities being unitary. Only the Stokes data at the irregular singularity is affected by ρ^2. Irregular singularities on C can be thought of as some intricate disorder line defect in the 3d Chern-Simons theory. 
The state |1⟩ prescribes some specific behaviour at the point where the disorder line meets the B_c boundary. It would be interesting to understand this point better. It is natural to wonder if alternative options could be available. In the specific case where A_ is U_(𝔰𝔩_2), we saw that the Schur correlation functions define a positive-definite inner product associated to a specific ρ, but other options are available (and previously studied) which employ a different choice ρ' and may be associated to Schur indices modified by surface defects. Such alternative options could be used at any rank 1 irregular singularity, just by employing the same surface defect for SQED_2. It seems plausible that a range of alternative options would be independently available at each irregular puncture. We leave this point to future work. §.§ GL(1) on T^2 As a final toy model, consider a GL(1) Chern-Simons theory compactified on an elliptic curve E_τ. We can gauge-fix the flat connection to be constant, _z̅ = 2 π i a/τ - τ̅ The normalization is chosen so that a gauge transformation by e^2 π i (δz̅ + δ̅z) with δ = n τ + m/τ - τ̅ shifts a → a + n τ + m and identifies with the elliptic curve E_τ. If we denote _z = p, the momentum conjugate to a, then the gauge transformations also shift p → p - 2 π i n τ̅+ m/τ - τ̅ so the phase space is a twisted cotangent bundle of E_τ and the Hilbert space will be given by L^2 normalizable sections of a bundle on E_τ. When we quantize p = - i s^-1∂_a, the gauge transformations will have to be accompanied by multiplication of the wavefunctions by c_n,mexp[2 π s n τ̅+ m/τ - τ̅ a + 2 π s n τ + m/τ - τ̅a̅] for some constant c_n,m. A prototypical wave-function is the Gaussian |1 ⟩ = exp 2 π s |a|^2/τ - τ̅ so that c_n,m = exp 2 π s |n τ + m|^2/τ - τ̅. The notation anticipates that this wavefunction has a nice behaviour under the action of the quantum holonomies: u = exp[- i s^-1∂_a + 2 π i a/τ - τ̅] v = exp[- i s^-1τ∂_a + 2 π i a τ̅/τ - τ̅] Indeed, the quantum holonomies act with norm 1 u |1 ⟩ = exp[2 π i (a - a̅)/τ - τ̅+ s^-1π/τ - τ̅]|1 ⟩ = (u^†)^-1|1 ⟩ v |1 ⟩ = exp[2 π i (a τ̅- a̅τ)/τ - τ̅+ s^-1π |τ|^2/τ - τ̅]|1 ⟩ = (v^†)^-1|1 ⟩ If we define more general quantum holonomies x_m,n = exp[- i s^-1 (n τ + m) ∂_a + 2 π i a (n τ̅+ m) /τ - τ̅] then we can produce a dense basis of states x_m,n|1 ⟩ = exp[ 2 π s |a -i s^-1 (n τ + m))|^2/τ - τ̅ -s^-1π |n τ̅+ m|^2 /τ - τ̅] which identify the Hilbert space with L^2(^2), the natural quantization of the space ^* ×^* of holonomies. Up to a τ-dependent pre-factor, the wavefunction |1 ⟩ we proposed precisely matches the analytically continuation to imaginary κ of the partition function of a non-compact free boson of level κ, which is a WZW model with target GL(1,)/U(1) =. By construction, it satisfies the KZ equations. The trace associates to the state |1⟩ is x_m,n = δ_n,0δ_m,0. §.§ Outlook: A new quantization of complex Chern-Simons theory Developing the quantisation of complex Chern-Simons theory more deeply and in larger generality will require more powerful instruments. In previously studied cases of quantum Chern-Simons theory there were two main instruments which have turned out to be very useful, one being quantum group theory, the other being cluster algebra technology. Quantum groups can in particular represent a residual gauge symmetry in the quantisation of Chern-Simons theory, allowing one to construct quantum Chern-Simons theory from quantum group representation theory. 
The existence of different quantum deformations of SL(2,) suggests that there may exist quantisations of complex Chern-Simons theory which differ from the one previously constructed in <cit.>. To close this section we'd like to explain why we expect that the quantum Lorentz group from Schur quantisation is particularly well-suited for developing a new quantization of complex Chern-Simons theory. Roughly, it seems particularly well-suited for the use of a powerful blend of cluster algebra and quantum group theory, generalising the paradigm provided by quantum Teichmüller theory. It has been observed in <cit.> that the co-product of U_q(𝔰𝔩_2) is related to the quantum cluster algebra associated to the marked twice punctured disk in the context of quantum Teichmüller theory. It follows that the braiding in quantum Teichmüller theory is naturally related to the R-matrix of the modular double of U_q(𝔰𝔩_2) constructed in <cit.>. These observations have been generalised to higher Teichmüller theory in <cit.>. The relation to quantum Teichmüller theory helps to compute the Clebsch-Gordan decomposition of tensor products of modular double representations <cit.>. These connections represent key ingredients in the passage from the cluster algebra structures originally defining quantum Teichmüller theory to the modular functor structure associated to pants decompositions <cit.>. In the forthcoming companion paper we will study relations between cluster algebras and Schur quantisation. It will turn out that the quantum group from Schur quantisation defined here has a natural relation to cluster algebras which generalises the relations known from quantum Teichmüller theory. This should be a key ingredient for a new quantisation of complex Chern-Simons theory which has a natural relation to Schur quantisation, as predicted by the dualities discussed in the introduction. § REAL QUANTIZATION In this Section we discuss some evidence for the existence of a real version of Schur quantization: an Hilbert space _^ equipped with an unitary action of a *-algebra _^ obtained by equipping A_ with a star structure a^* = τ(a) . The Hilbert space will be defined as the L^2 closure of an auxiliary A_ module M_ equipped with a certain inner product. We will be schematic and leave many details to future work. As a quick motivation, consider the standard notion of unitary representations of real forms _ of Lie algebras. Such representation can be thought of as unitary representations of a *-algebra (_) defined by equipping U() with a *-structure τ, an automorphism of the Lie algebra fixing _. The Lie algebra _ has a maximal compact sub-algebra which exponentiates to a compact Lie group K acting on the representation . Defining ρ as before as a reflection of the generators of , the compact sub-algebra is fixed by ρ∘τ. We can decompose both U() and into finite-dimensional irreps of K. It is easy to see that this gives a dense basis M in which behaves as a module for U(), sometimes denotes as a (,K)-module. Conversely, can be recovered as the L^2 closure of M under an inner product. The notion of K-invariant, e.g. spherical vectors in is also important. These structures are expected to arise naturally in sphere quantizations, though a systematic analysis is still not available. The mathematical theory is rich. See e.g. <cit.>. We expect real Schur quantization to provide analogous structures for *-algebras _(_) defined by a similar star structure τ on U_q(). 
§.§ A Chern-Simons motivation 
An analogous real version of sphere quantization is reasonably well understood in the 3d setup <cit.> and employs correlation functions on hemispheres. The choice of boundary conditions for the hemisphere determines the structure of the module and of the inner product. The * structure on the algebra and the positivity property of the inner product are obtained as a generalization of certain properties <cit.> of protected sphere two-point functions for (2,2) SCFTs <cit.>. They are typically obscure, and identifying a boundary condition which gives a given τ is challenging. We are not aware of K-theoretic generalizations of <cit.> which could be relevant in the current context. Indeed, we doubt they exist. As a result, the real Schur quantization procedure we sketch below will only work for a certain class of boundary conditions, which at the moment we do not know how to characterize. Fortunately, some considerations about complex Chern-Simons theory, KW theory and 2d CFT lead to the definition of a bounty of boundary conditions suitable for the real Schur quantization procedure, at least for theories of class . Indeed, it may well be the case that such constructions for _(_) shed light on the representation theory of both quantum and classical Lie algebras, after a judicious 3d limit. In complex CS theory, we can attempt quantization in a situation where the surface C^ has a boundary and/or is non-orientable. In order to do so (in a topological way), we should specify a topological boundary condition at each boundary component of C^. We have already encountered a natural set of options associated to real forms G_ of G, such that the complex connection restricts to a G_ connection at the boundary. We will employ these. Sometimes, a boundary condition can be defined via a reflection trick. This is the case here. We can describe C^ as a quotient C/τ, where C is the orientation cover of C^ and τ is an anti-holomorphic involution of C. The boundaries of C^ lift to the locus of points in C fixed by the action of τ. As τ flips the orientation on C, we can keep the complex CS action invariant if we define an action of τ which complex-conjugates the connection. At a boundary component, the action of τ is such that the component of the connection parallel to the boundary is τ-invariant if it lies in (the Lie algebra of) G_. Correspondingly, we have a lift of τ to an anti-holomorphic involution of the space (C) of complex flat connections on C. The τ-fixed locus (C^) in (C) gives a real phase space for the system, which we aim to quantize. A point in (C^) gives, in particular, a complex flat connection on C. We thus have classical observables W_a and W_a labelled by a skein a in C. Restricted to _, these observables satisfy a reality condition we write as: W_a =W^†_ρ(a) = W_τ(ρ(a)) . This complex CS setup has a lift to KW theory which modifies slightly a construction from <cit.>. We can define a three-manifold U_τ≡C × [-1,1]/_2 where _2 acts as a combination of τ and a reflection of the segment. The resulting manifold has a co-dimension 1 singular locus associated to boundaries of C^. In <cit.>, a prescription was given to smoothen the singular locus in a manner depending on the choice of G_. Then the space of states of KW theory on U_τ gives a tentative definition for the space of states of complex CS theory on C^. Next, we can introduce the real analogue of the B_c boundary condition. Away from boundaries of C^, we can restrict the connection to be unitary, i.e. 
be a G_c connection. At the boundary, we need to further select some junction between the G_ boundary condition and the G_c boundary condition. Both boundary conditions can be implemented in KW theory by a reflection trick <cit.>. We thus quotient C × [-1,1] × by the above _2 and a _2 reflecting both factors in [-1,1] ×. The resulting geometry is a bit complicated. The two reflections combine in particular to a _2 quotient of the boundary C × factors by a simultaneous reflection. This can be smoothened to a manifold V_τ akin to U_τ, but with a semi-infinite cylindrical region. As a consequence, we can create states |m;⟩ labelled by skeins m in the Skein Module M_ associated to V_τ. These are our tentative dense collection of states for 3d CS theory on C_, to be completed to an Hilbert space _^ by computing somehow the inner products ⟨ m,|m',⟩ . with an action of the Skein algebra A_ equipped with a * structure by the action of τ on skeins. Next, we conjecture that the inner products can be computed by a careful deformation and a chain of dualities mapping them to Schur half-indices, aka a twisted partition functions on HS^3 × S^1. We will not attempt to prove this fact. In the bulk of HS^3 we place the class theory associated to C and at the boundary of HS^3 we place the boundary condition defined by V_τ according to the 3d-3d correspondence <cit.>. As a check, the boundary lines give indeed elements of M_ with the correct action of the Skein algebra A_ and we can thus write a meaningful equality ⟨ m,|m',⟩ = II_m,m'() which can be further decorated by Skein algebra elements/bulk line defects: ⟨ m,|W_a|m',⟩ = II_m,a m'() This identification implies positivity properties for half-indices of boundary conditions which arise from V_τ. We should elaborate on the conjectural algebraic structures which appear in this construction. Suppose that we are given a (left) module M_ of A_ and we are looking for a positive-definite inner product on M_ compatible with a *-structure W^†_a = W_τ(a). The map τ is an algebra morphism A_→ A_^. It maps M_ to a right module τ(M_). The inner product gives, in particular, a pairing (∙, ∙) between τ(M_) and M_, i.e. a linear functional on the tensor product τ(M_) ⊗_A_ M_. In general, given _ and M_, one can find a finite-dimensional space of such linear functionals, which may or not include a cone of positive-definite ones. Again, it would be nice to find a way to characterize the functional provided by the Schur half-indices counting local operators between boundary lines. A key property of the Schur half-indices, of course, is that they will only depend on the theory and boundary condition and not on the specific duality frame used to describe either of them. §.§ Example: free boundary conditions in pure U(1) gauge theory Consider the case of the quantum torus. Up to SL(2,) re-definitions, there are two natural choices for τ: * The choice τ: (u,v)→ (u,v^-1) classically fixes the locus u^† =u, |v|^2=1. This locus has two components: u can be positive or negative. The locus fixed by ρ∘τ is u= ± 1. The corresponding unitary representations have unitary v and self-adjoint u. * The choice τ: (u,v)→ (v,u) classically fixes the locus u = v^†. The locus fixed by ρ∘τ is u v=1. The corresponding unitary representations have u, v adjoint to each other. In particular, they are not normal operators. Instead, u v is self-adjoint and u v^-1 is unitary. A prototypical representation of the first type involves the Hilbert space L^2() ≃ L^2(S^1). 
In the first description, v is a translation operator in and u is a multiplication operator ^2n. In particular, the state |1,⟩ supported at the origin is a reasonable quantization of the u=1 locus and generates a basis |v^n,⟩ of the whole Hilbert space under the action of v. The images define a simple module for the quantum torus algebra, consisting of powers of v. Expectation values ⟨ 1;| ^-ab u^a v^b |1;⟩ = δ_b,0 can be identified, up to a (^2)_∞ normalization factor, with half-indices for Neumann boundary conditions in the 4d pure U(1) gauge theory. The v^n module elements represent K-theory classes of boundary Wilson lines. Dirichlet boundary conditions would exchange the role of u and v. [An alternative realization would employ the same Hilbert space, but u acting as - ^2n. At the level of 4d gauge theory, the difference between these two choices is a bit subtle. Essentially, it has to do with a choice of fermion parity for the local operator representing the endpoint of a bulk 't Hooft line at the Neumann boundary. ] A prototypical representation of the second type also involves the Hilbert space L^2() ≃ L^2(S^1). We can define the action of u and v as a combination of the translations in either direction along and multiplication by ^n. Again, the |1,⟩ supported at the origin is a reasonable quantization of the u v=1 locus and generates a basis of the whole Hilbert space. In the 4d gauge theory description, the relevant boundary condition is a Neumann boundary condition equipped with one unit of Chern-Simons coupling. If we identify ^* ×^* with the moduli space of ^* flat connections on T^2, we could attempt to match the above involutions with geometric involutions of the T^2. Denote as σ_1, σ_2 the two angular coordinates on T^2. A reflection σ_2 → - σ_2, σ_1 →σ_1 has fixed loci σ_2 = 0 and σ_2 = π and the quotient of T^2 gives an annulus. A reflection σ_1 →σ_2 gives the Moëbius strip and σ_2 → - σ_2, σ_1 →σ_1 + π is a Klein bottle. It is pretty clear that the annulus and Klein bottle will give involutions of the first type and the Moëbius strip of the second type. We leave a detailed identification of different quantizations and choices of real forms at boundaries to future work. §.§ Some comments on representations of the U_q(𝔰𝔩_2) quantum group Already for the case of Abelian gauge theories, there is a large collection of boundary conditions which are compatible with some involution τ and may give non-trivial representations of A_ with the corresponding Hermiticity properties. For SQED_2, we can correspondingly engineer a variety of unitary representations of U_q(𝔰𝔩_2). For the case of SQED_1 we similarly expect q-deformed versions of representations of the Weyl algebra. In particular we expect q-deformations of several unitary representations of U(𝔰𝔩_2) encountered in (hemi)sphere quantization: principal series representations, discrete series and finite-dimensional representations. One may note, on the other hand, that the ∗-algebra structures on U_q(𝔰𝔩_2) have been classified in <cit.>. For real one only finds quantum deformations of SU(2) and SU(1,1). It would be interesting to clarify if these representations can recovered within Schur quantisation. We leave details to future work. §.§ Some comments about 2d CFT constructions. For a real analogue of a 2d CFT analysis, we can approximate (C^) as the twisted cotangent bundle of a space of “real bundles” Bun_. 
A real bundle should be understood as the data necessary to define a 2d CFT with Kac-Moody symmetry on a Riemann surface C with boundaries or cross-caps. The G_ data at boundaries controls the gluing condition for chiral and anti-chiral currents. We get a candidate Hilbert space _ as a space of L^2-normalizable twisted half-densities on Bun_. The geometry of the problem may allow a greater choice of twists than in the complex case. We will not attempt to characterize them here. Holomorphic quantization depends on a choice of complex structure on C via the KZ equations. With some work, it should be possible to extend the definition of Verlinde lines to include the quantum analogues of the classical observables defined above, satisfying again W^†_a = W_τ(a) for bulk observables. From the 2d perspective, it is natural to consider the definition of a G_/G_c WZW model on C. This will require a choice of boundary conditions and cross-cap states for the WZW model, which in turn may allow for a variety of extra parameters in the construction. We will not attempt to characterize them here. Up to this hidden data, the partition function should define a state |1;⟩∈_. We can produce further states by acting with quantum observables, defining the image in _ of some module M_ for A_ which, in a sense, quantizes . We expect this to provide a dictionary between real Schur and 2d CFT constructions. § ACKNOWLEDGEMENTS This research was supported in part by a grant from the Krembil Foundation. DG is supported by the NSERC Discovery Grant program and by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. The work of J.T. was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Founda- tion) – SFB-Geschäftszeichen SFB 1624/1 2024 – Projektnummer 506632645, and furthermore supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy EXC 2121 "Quantum Universe" 390833306. § SOME U(N) EXAMPLES §.§ Pure U(N) gauge theory We now discuss briefly the gauge theory with U(N) gauge group and no matter fields in order to illustrate the general combinatorics of the Schur correlation functions and the isometry to an auxiliary Hilbert space. Furthermore, the description of U(2) gauge theory is helpful in setting up conventions for an SU(2) gauge group, which is our next example. The Schur index becomes I_ = 1/N!∮_|ζ|=1∏_i (^2)^2_∞ dζ_i/2 πıζ_i∏_i≠ j(1-ζ_i ζ_j^-1) (^2 ζ_i ζ_j^-1;^2)^2_∞ We will not attempt to describe the full algebra A_. Recall that its generators are expected to be classes [L_λ_m, λ_e] of 't Hooft-Wilson loops, with labels defined up to the action of the Weyl group. Here λ_m and λ_e are vectors in ^N, with an S_N Weyl group action. There is an useful notion of “minuscule” magnetic charge, λ_m = (1, ⋯, 1,0⋯, 0) up to an overall shift by the diagonal magnetic charge (1,⋯, 1). The 't Hooft operators of “minuscule” charge do not suffer from bubbling and thus the corresponding difference operators are readily written. They can be dressed by generic electric charges. The residual S_k × S_N-k action, where k is the number of “1” entries, reduces the electric charge to a choice of an U(k) × U(N-k) weight. 
We will present the difference operators in a form adapted to an isometry to L^2((S^1 ×)^N)^S_N with spherical vector image II_B(ζ) = δ_B,0(^2)^N_∞∏_i≠ j(^2 ζ_i ζ_j^-1;^2)_∞ We define N copies (u_i,v_i) and ( u_i, v_i) of the standard set of multiplication and shift operators. In order to avoid some square roots of phases in the formulae below, we will use a slightly modified magnetic Vandermonde measure Δ_B(ζ) = ∏_j < i(v_i-v_j)( v_i^-1 - v_j^-1) which differs from the standard one by a factor of ^(N-1) ∑_i B_i.[The price of this modification is some factors of ^N-1 which may have to be added to our expressions of 't Hooft line insertions below to match the correct answer for half-BPS line defects. ] An important ingredient in the presentation of 't Hooft operators of minimal charge are the combinations u_+,i ≡^N-1/∏_j^j ≠ i(1-v_j v_i^-1)u_i u_-,i ≡1/∏_j^j ≠ i(v_i v_j^-1-1)u^-1_i u_+,i ≡^N-1/∏_j^j ≠ i(1- v_i v_j^-1) u_i u_-,i ≡1/∏_j^j ≠ i( v_j v_i^-1-1) u^-1_i The difference operators are S_N -invariant combinations of these and v_i. We choose the relative normalization so that u_±, i II_B(ζ) = u_±, iII_B(ζ) . . and we have formal adjoint relations (with respect to the above Vandermonde measure) ρ(u_-,i) = ∏_j^j≠ i (^-1 v_j v_i^-1) u_+,iρ(u_+,i) = u_-,i∏_j^j≠ i ( v_i v_j^-1) and the expected Witten effect: ρ^2(u_-,i) =∏_j^j≠ i (v_i v_j^-1) u_-,i∏_j^j≠ i (v_i v_j^-1) ρ^2(u_+,i) = ∏_j^j≠ i (v_j v_i^-1) u_+,i∏_j^j ≠ i (v_j v_i^-1) Wilson lines are realized by characters χ_R(v) of U(N). The 't Hooft operators of minimal charge are simply the Weyl-invariant sums H_1 ≡∑_i u_+,i H_-1≡∑_i u_-,i They can be dressed by Wilson lines for U(1) × U(N-1) in a natural way by inserting appropriate characters in the sum. E.g. we can define H_1,n≡∑_i ^n v_i^n u_+,i H_-1,n≡∑_i ^-n v_i^n u_-,i by inserting U(1) characters evaluated on v_i. Characters for U(N-1) will be evaluated on v_j for j ≠ i. A product of the form w_R H_1 thus gives a sum of minimal 't Hooft operators dressed by the U(1) × U(N-1) representations contained in R, with extra powers of controlled by the U(1) charge. The 't Hooft operators of higher minuscule charge k are sums of N k higher shift operators u_+,I≡^k(N-k)/∏_j ∉ I∏_i∈ I(1-v_j v_i^-1)u_i where I is a subset of size k in 1, ⋯, N. They can be dressed by Wilson lines for U(k) × U(N-k) in a natural way. The full algebra of observables can be recovered from Wilson lines and 't Hooft lines. E.g. [H_1, w_1] = [H_1,∑_i v_i] = ( - ^-1) H_1,1 etcetera. As a richer example of the relations which appear in A_, note u_+,i u_+,j = ^2/(1-^2 v_i v_j^-1)(1-v_j v_i^-1)u_+,i,j and consider H_1,n_1 H_1,n_2 = ∑_i^n_1+3 n_2 v_i^n_1+n_2 u_+,i^2 + ∑_i ≠ j^n_1+n_2+2 v_i^n_1+1v_j^n_2+1/(v_j-^2 v_i)(v_i-v_j) u_+,i,j and then ^n_1-n_2 H_1,n_1 H_1,n_2 - ^-n_1-n_2 H_1,0 H_1,n_1+n_2 = ∑_i ≠ jv_j^n_1 - ^2 n_1 v_i^n_1/v_j-^2 v_i^2 v_i v_j^n_2+1/v_j-v_i u_+,i,j = = ∑_i<j[v_j^n_1 - ^2 n_1 v_i^n_1/v_j-^2 v_iv_j^n_2 - v_i^n_1 - ^2 n_1 v_j^n_1/v_i-^2 v_jv_i^n_2] ^2 v_i v_j/v_j-v_i u_+,i,j The coefficient of u_+,i,j is a symmetric polynomial in v_i and v_j, corresponding to some U(2) Wilson line dressing for a 't Hooft operators of minuscule charge 2. E.g. ^2 H_1,1 H_1,-1 - H_1,0^2 = ∑_i<j^2 u_+,i,j≡^2 H_2 Another important relation is H_1,0 H_1,1 = ^2 H_1,1 H_1,0 which is a first step towards building a cluster structure on the K-theoretic Coulomb branch <cit.>. The “bare” 't Hooft operators coincide with the Hamiltonians for the open relativistic quantum Toda chain and in particular commute with each other. 
This is not completely obvious from the explicit formulae: [H_1, H_2] = ∑_i∑_j<k [u_+,i, u_+,jk] = ∑_j<k∑^i ≠ j, i≠ k_i^3/(1-^2 v_i v_j^-1)(1-v_j v_i^-1)(1-^2 v_i v_k^-1)(1-v_k v_i^-1) u_+,ijk + -^3/(1- v_i v_j^-1)(1-^2 v_j v_i^-1)(1-v_i v_k^-1)(1-^2 v_k v_i^-1) u_+,ijk vanishes only after symmetrization of i,j,k. On the other hand ^2 H_1,1 H_2 = H_2 H_1,1 gives another piece of the cluster structure. The cluster structure is closely related to the IR perspective on Schur quantization discussed in the companion paper <cit.>. It will be used to predict the spectrum of the Toda Hamiltonians in this complex quantization scheme. §.§ The N=2^* U(N) gauge theory We now discuss very briefly the gauge theory with U(N) gauge group and adjoint matter fields, the simplest conformal example. The Schur index becomes I_ = 1/N!∮_|ζ|=1∏_i (^2)^2_∞ dζ_i/2 πı (-μ;^2)_∞(-μ^-1;^2)_∞ζ_i∏_i≠ j(1-ζ_i ζ_j^-1) (^2 ζ_i ζ_j^-1;^2)^2_∞/(-μζ_i ζ_j^-1;^2)_∞(-μ^-1ζ_i ζ_j^-1;^2)_∞ §.§.§ Abelianization ingredients The 't Hooft operators of “minuscule” charge do not suffer from bubbling. We will present them in a form adapted to an isometry to L^2((S^1 ×)^N)^S_N with spherical vector image II_B(ζ) = δ_B,0(^2)^N_∞/(μ;^2)^N_∞∏_i≠ j(^2 ζ_i ζ_j^-1;^2)_∞/(-μζ_i ζ_j^-1;^2)_∞ The presence of similar factors at numerator and denominator leads to neat simplifications of various formulae below. Define N copies (u_i,v_i) and ( u_i, v_i) of the standard set of multiplication and shift operators. An important ingredient in the presentation of 't Hooft operators of minimal charge are the combinations[One may include some overall factors of μ^±1/2 in order to restore a μ→μ^-1 symmetry in the presentation of A_ generators below. Again, we made a choice here which minimizes square roots of phases.] u_+,i ≡∏_j^j ≠ i v_i+ μ v_j /v_i-v_ju_i u_-,i ≡∏_j^j ≠ iμ^-1 v_j+ ^-1 v_i/v_j-v_i u^-1_i u_+,i ≡∏_j^j ≠ i v_j+ μ v_i/ v_j- v_i u_i u_-,i ≡∏_j^j ≠ iμ^-1 v_i+ ^-1 v_j/ v_i- v_j u^-1_i Taking adjoints with the magnetic Vandermonde measure[Again we avoided some square roots at the price of a ^(N-1)∑_i B_i factor, leading to some ^N-1 factors below] Δ_B(ζ) = ∏_j < i(v_i-v_j)( v_i^-1 - v_j^-1) , we learn that ρ(u_-,i) = u_+,iρ(u_+,i) = u_-,i so that ρ^2=1 as expected. The following intertwining relations hold for the images of the spherical vector II u_±, i II_B(ζ) = u_± iII_B(ζ) §.§.§ The algebra We can now review the presentation of the K-theoretic Coulomb branch algebra. First of all, Wilson lines are described by characters χ_R(v) for (finite-dimensional) U(N) representations. The 't Hooft operators of minimal charge are simply H_1 ≡∑_i u_+,i H_-1≡∑_i u_-,i They can be dressed by Wilson lines for U(1) × U(N-1) in a natural way. E.g. H_1,n≡∑_i ^-n v_i^n u_+,i H_-1,n≡∑_i ^n v_i^n u_-,i The 't Hooft operators of higher minuscule charge k are sums of N k higher shift operators u_+,I≡∏_j ∉ I∏_i∈ I v_i - μ v_j/v_i-v_ju_i where I is a subset of size k in 1, ⋯, N. They can be dressed by Wilson lines for U(k) × U(N-k) in a natural way. Again, “bare” 't Hooft lines commute, via miraculous-looking simplifications of the commutators. They coincide with the Hamiltonians for a trigonometric quantum Ruijsenaars-Schneider model. §.§.§ Comments on S-duality This theory is endowed with S-duality, acting as SL(2,) on the magnetic and electric labels of 't Hooft-Wilson loops. E.g. the S transformation permutes H_1 and the Wilson line w_1 in the fundamental representation. 
The Schur index and Schur quantization are invariant under SL(2,), but this is far from obvious from the above presentation. An immediate consequence of S-duality is that the joint spectrum of the bare minuscule 't Hooft operators, which commute with each other, must coincide with that of the minuscule Wilson lines, i.e. Wilson lines for antisymmetric powers of the fundamental representation. This result fully characterizes the spectrum of this complex quantization of the trigonometric Ruijsenaars-Schneider model. One can produce a formally unitary integral kernel on the auxiliary Hilbert space implementing S as superconformal index of a certain T[U(N)] theory. The construction is actually best understood in a recursive way, in terms of S-dual interfaces between U(N) and U(N-1) gauge theories. One interface simply reduce the gauge group from U(N) to U(N-1) by a partial Dirichlet boundary condition. Concretely, that means a U(N) Wilson line, say, brought to the interface is decomposed into U(N-1) ⊗ U(1) Wilson lines and the latter are evaluated on a fixed value of the v_N fugacity. The Schur index in the presence of the interface takes the form of a pairing in ^aux_[N-1] of a Dirichlet wavefunction for U(N-1) and the restriction of a Dirichlet wavefunction for U(N) in ^aux_[N] to fixed values of ζ_N and B_N. The S-dual interface couples both U(N) and U(N-1) gauge fields to two sets of “bifundamental” 3d free chiral fields. Concretely, the Schur index in the presence of the interface takes the form of a pairing with a (distributional) kernel in ^aux_[N-1] ×^aux_[N] which is a product of 2N(N-1) complex quantum dilogarithms. Elementary 't Hooft operators for U(N) acting on the kernel can be traded for the a linear combination of 't Hooft operators for U(N-1) which is analogous to the decomposition of Wilson lines. A convolution of N-1 such kernels fully diagonalized 't Hooft operators. We leave details of the construction to an enthusiastic reader, referring to <cit.> for a classical version of the construction. §.§ U(N) SQCD with N_f flavours. This is our final general example. The Schur index becomes I_ = 1/N!∮_|ζ|=1∏_i (^2)^2_∞ dζ_i/2 πıζ_i∏_i≠ j(1-ζ_i ζ_j^-1)(^2 ζ_i ζ_j^-1;^2)^2_∞/∏_i∏_r=1^N_f (-μ_r ζ_i;^2)_∞(-μ_r^-1ζ_i^-1;^2)_∞ The 't Hooft operators of “minuscule” charge do not suffer from bubbling. We will present them in a form adapted to an isometry to L^2((S^1 ×)^N/S_N) with spherical vector image II_B(ζ) = δ_B,0(^2)^N_∞∏_i≠ j(^2 ζ_i ζ_j^-1;^2)_∞/∏_i ∏_r=1^N_f (-μ_r ζ_i;^2)_∞ Define N copies (u_i,v_i) and ( u_i, v_i) of the standard set of multiplication and shift operators. An important ingredient in the presentation of 't Hooft operators of minimal charge are the combinations u_+,i ≡^N-1/∏_j^j ≠ i(1-v_j v_i^-1)u_i u_-,i ≡∏_r=1^N_f(1+^-1μ_r v_i)/∏_j^j ≠ i(v_i v_j^-1-1)u^-1_i u_+,i ≡^N-1∏_r=1^N_f(1+^-1μ_r v_i)/∏_j^j ≠ i(1- v_i v_j^-1) u_i u_-,i ≡1/∏_j^j ≠ i( v_j v_i^-1-1) u^-1_i Taking adjoints, we get ρ(u_-,i) = ∏_j^j≠ i (^-1 v_j v_i^-1) u_+,iρ(u_+,i) = u_-,i∏_j^j≠ i ( v_i v_j^-1)∏_r=1^N_f(^-1μ^-1_r v^-1_i) and thus the expected Witten effect: ρ^2(u_-,i) =∏_j^j≠ i (v_i v_j^-1) u_-,i∏_j^j≠ i (v_i v_j^-1) ∏_r=1^N_f(^-1μ^-1_r v^-1_i)ρ^2(u_+,i) = ∏_j^j≠ i (v_j v_i^-1) u_+,i∏_j^j ≠ i (v_j v_i^-1) ∏_r=1^N_f(^-1μ_r v_i) controlled by the anomaly 2 N- N_f. Again, 't Hooft operators of minuscule charge are built from the u_±,i and analogous u_±,I. 
For N_f = 2 N, the theory is conformal and is endowed with a non-trivial S-duality which re-arranges the U(1) and SU(N) parts of the 't Hooft charges and maps Wilson lines to dyonic lines of even magnetic charge. § FROM U(2) TO SU(2) 'T HOOFT OPERATORS First, we can illustrate the construction of pure U(2) 't Hooft operators from SU(2) and U(1) expressions. We introduce symbols v_1 and v_2, as well as u_1,± and u_2,± which multiplicatively shift v_1 and v_2 by ^2, as for two copies of U(1) gauge theory. We should think about u_i,± as combinations of U(1) and SU(2) generators, so that u_1,+ = u^SU(2)_+ u^U(1)_+ u_1,- = u^SU(2)_- u^U(1)_- u_2,+ = u^SU(2)_- u^U(1)_+ u_2,- = u^SU(2)_+ u^U(1)_- and correspondingly v_1 = v_SU(2) v_U(1)^1/2 and v_2 = v_SU(2)^-1 v_U(1)^1/2, so that v_U(1) = v_1 v_2. Correspondingly, we have product rules such as u_1,+ u_1,- = v_1 v_2/(v_1 - v_2)( v_1 - ^-1 v_2) u_1,+ u_2,+ = v_1 v_2/(v_1 - v_2)( v_1 - ^-1 v_2) (u^U(1)_+)^2 u_1,+ u_2,- = u_2,- u_1,+ etcetera. The elementary 't Hooft operators in the SU(2) gauge theory can be promoted to ^a/2^b v_U(1)^b u^U(1)_+ v_SU(2)^a u^SU(2)_+ + ^a/2^b v_U(1)^b u^U(1)_+ v_SU(2)^-a u^SU(2)_- ^a/2^-b v_U(1)^b u^U(1)_- v_SU(2)^a u^SU(2)_+ + ^a/2^-b v_U(1)^b u^U(1)_- v_SU(2)^-a u^SU(2)_- i.e. H_1,0;a,b = H_0,1;b,a = ^a v_1^a v_2^b u_1,+ + ^a v_2^a v_1^b u_2,+ H_-1,0;a,b= H_0,-1;b,a = ^-a v_1^a v_2^b u_1,- + ^-a v_2^a v_1^b u_2,- Next, we can present the difference operators for U(2) with a single flavour. The fundamental flavour modifies the product rules as u_1,+ u_1,- = v_1 v_2/(v_1 - v_2)( v_1 - ^-1 v_2) (1 + v_1) u_1,+ u_2,+ = v_1 v_2/(v_1 - v_2)( v_1 - ^-1 v_2) (u^U(1)_+)^2 u_1,+ u_2,- = u_2,- u_1,+ etcetera. The expressions for the elementary 't Hooft operators is unchanged: H_1,0;a,b = H_0,1;b,a = ^a v_1^a v_2^b u_1,+ + ^a v_2^a v_1^b u_2,+ H_-1,0;a,b H_0,-1;b,a = ^-a v_1^a v_2^b u_1,- + ^-a v_2^a v_1^b u_2,- We can now consider the product H_1,0;a,b H_-1,0;c,d = (^a v_1^a v_2^b u_1,+ + ^a v_2^a v_1^b u_2,+)(^-c v_1^c v_2^d u_1,- + ^-c v_2^c v_1^d u_2,-) = ^a-c+2 d v_2^a+d v_1^b+c u_1,- u_2,+ +^a+c v_1^a+c v_2^b+d u_1,+ u_1,- + + ^a+c v_2^a+c v_1^b+d u_2,+ u_2,- + ^a-c+2 d v_1^a+d v_2^b+c u_1,+ u_2,- = ^a-c+2 d v_2^a+d v_1^b+c (u^SU(2)_-)^2 +^a+c v_1^a+c v_2^b+dv_1 v_2/(v_1 - v_2)( v_1 - ^-1 v_2) (1 + v_1) + + ^a+c v_2^a+c v_1^b+dv_1 v_2/(v_1 - v_2)(^-1 v_1 - v_2) (1 + v_2) + ^a-c+2 d v_1^a+d v_2^b+c (u^SU(2)_+)^2 In particular, we get H_1,0;a,b H_-1,0;-a,-b = ^2a - 2b v_SU(2)^2a-2b (u^SU(2)_+)^2 + v_1 v_2/(v_1 - v_2)( v_1 - ^-1 v_2) (1 + v_1) + + v_1 v_2/(v_1 - v_2)(^-1 v_1 - v_2) (1 + v_2) + ^2a-2b v_SU(2)^-2a+2b(u^SU(2)_-)^2 = ^2a - 2b v_SU(2)^2a-2b (u^SU(2)_+)^2 + (+ ^-1 + v_1 + v_2)/( v_SU(2) - ^-1 v_SU(2)^-1)(^-1 v_SU(2) - v_SU(2)^-1) + + ^2a-2b v_SU(2)^-2a+2b(u^SU(2)_-)^2 and H_1,0;a,b H_-1,0;-a,1-b = ^2a-2b +2 v_1^a-b+1 v_2^b-a (u^SU(2)_+)^2 + v_2 v_1 v_2/(v_1 - v_2)( v_1 - ^-1 v_2) (1 + v_1) + + v_1 v_1 v_2/(v_1 - v_2)(^-1 v_1 - v_2) (1 + v_2) +^2a-2b+2 v_2^a-b+1 v_1^b-a (u^SU(2)_-)^2 = ^2a-2b +2 v_SU(2)^2a-2b+1 v_U(1)^1/2 (u^SU(2)_+)^2 +((+ ^-1 )v_1 v_2+ v_1 + v_2)/( v_SU(2) - ^-1 v_SU(2)^-1)(^-1 v_SU(2) - v_SU(2)^-1)+ +^2a-2b+2 v_SU(2)^2b-2a-1 v_U(1)^1/2 (u^SU(2)_-)^2 JHEP
http://arxiv.org/abs/2406.08163v1
20240612125209
A conceptual predator-prey model with super-long transients
[ "Misha Chai", "Holger Kantz" ]
q-bio.PE
[ "q-bio.PE", "nlin.CD", "physics.bio-ph" ]
There is a growing recognition that long-term or asymptotic behavior is rare, and that focusing on transients might be a more effective approach to understanding the complexity in ecosystems <cit.>. Moreover, many models and observations suggest that transients may persist over a super-long period of time <cit.>, during which cyclic and chaotic behaviors appear repeatedly. These cyclic dynamics are one of the most notable phenomena in population biology, particularly in predator-prey systems where the predator and prey coexist in recurring cyclic patterns over indefinitely long periods of time. The Lotka-Volterra model, a cornerstone in mathematical biology and ecology, provides a fundamental framework for understanding cyclic dynamics in predator-prey interactions. Further simplification can be achieved by the discretization of time. The logistic map <cit.>, for example, a well-known discrete-time model, has been used to describe the population of a single species influenced by carrying capacity, showcasing a spectrum of behaviors from stable equilibrium to periodic oscillations and chaos determined by its growth rate. Much more complex behavior can be achieved by introducing competition models <cit.>, which have been extensively used to analyze population dynamics and to understand biodiversity <cit.> in ecosystems. One of the most classic topics is the study of the complexity and biodiversity of plankton species <cit.>, aimed at understanding the famous "paradox of the plankton" <cit.>. In other ecosystems, such as forage fish <cit.>, insects <cit.>, grass communities <cit.>, and Dungeness crabs <cit.>, various methods have also been used to understand different types of systems. However, due to the complexity and high dimensionality of these models, unraveling the complicated dynamics in ecosystems is challenging, let alone conducting linear analysis. Therefore, a simple conceptual model that contains most of the typical features of real-world populations, while still being amenable to theoretical or linear analysis, is extremely important! Predator-prey model. Here, based on the logistic map x_n+1=rx_n(1-x_n), where x_n ≥ 0 is a dimensionless measure of the population in the nth generation and r ≥ 0 is the intrinsic growth rate—reflecting population changes under ideal conditions without external factors—we propose a simple predator-prey model in which the prey responds to predation. Thus, the evolution of the prey can influence predator dynamics, which in turn affects prey evolution. These dynamics can lead to the continuous co-evolution of predators and prey in response to each other's adaptations. It displays rich dynamical complexity, such as the persistence and coexistence of population cycles and chaotic behaviors<cit.>, the emergence of super-long transients<cit.>, and regime shifts (sudden changes that usually result in the extinction of species and the loss of biodiversity)<cit.>. It can help us understand the complexity of realistic ecosystems. The constancy of the growth rate in the logistic map does describe the properties of some simple systems or toy models well, for example, a single-species algae population with one limiting resource <cit.>. However, in realistic systems, the introduction of new species (e.g., through invasion from other parts of the globe or by mutation) and the extinction of species both change the dimension of the phase space and affect the quality of the dynamics.
Thus, the growth rate cannot be constant anymore but changes along with the system's evolution. This motivates us to introduce r+y_n as the new growth rate. Here, r represents the intrinsic and fixed birth and death rate of the predator under limiting resources; throughout the paper we fix r=3, the point where the bifurcation occurs. The term y_n reflects the changes of limiting resources, which directly affects and adapts to changes in the predator population, as seen in plant-consumer and host-parasite systems. Here we use plant-consumer systems as an example, but the dynamics can be applied to most systems where resources are the prey, and predators and prey respond directly to each other's adaptations. Assume that there are x_n-1 cattle and y_n-1 grass in generation n-1, and x_n and y_n in generation n, as shown in Fig. <ref>. The arrival of new cattle (yellow) and the departure of cattle (red) occur at the end of the year, and every individual is identical. As we can see, x_n-1 cattle consumed y_n-y_n-1 grass in generation n-1. Since the same number of cattle consume the same amount of grass, x_n can be rewritten as x_n=x_n-1+(x_n-x_n-1). Thus x_n is divided into two parts: x_n-1, where the corresponding grass consumption is y_n-y_n-1; and x_n-x_n-1, which reflects the population changes with the corresponding grass consumption being a(x_n-x_n-1). The control parameter a scales the relationship between predator and prey. In the cattle-grass system, this can be understood as the weight of grass each head of cattle consumes. Thus in generation n+1, the grass is y_n-(y_n-y_n-1)-a(x_n-x_n-1)=y_n-1-a(x_n-x_n-1). The generalized logistic map hence reads: x_n+1 = (3+y_n)x_n(1-x_n) y_n+1 = y_n-1-a(x_n-x_n-1). When x_n-x_n-1>0, indicating an increase in the predator population, this can lead to a decrease in the prey population, i.e., y_n+1, which in turn results in a subsequent declining trend in the predator population in the next generation, denoted as x_n+2. Correspondingly, if x_n-x_n-1<0, indicating a decrease in the predator population, this can lead to an increase in the prey population y_n+1, resulting in a subsequent rising trend of x_n+2. If the predator population remains constant, i.e., x_n-x_n-1=0, the prey population also remains constant, i.e., y_n+1=y_n-1. Since this map involves x_n-1 and y_n-1, it resembles a delay map. Consequently, its phase space would appear to be 4-dimensional. Nonetheless, Eq. (<ref>) can actually be derived from a 2-dimensional system, which includes an additional variable sgn: sgn_n=(-1)^n and an instantaneous update y_n+1=y_n + sgn_n · a(x_n-0.5). The x-dynamics remain consistent with Eq. (<ref>). Therefore, given x_1 and y_1 as initial conditions, the whole trajectory is uniquely determined. This representation of our map reveals that its phase space is the direct product of the (x,y)-plane and the set {-1,1}. In the graphical representations of the attractor, we will focus solely on projections onto the (x,y)-plane. When y_n = const (achieved in the limit a→ 0), Eq. (<ref>) becomes a one-dimensional logistic map. The bifurcation diagram then reveals the transitions from stable equilibrium through period-doubling bifurcations and eventually into chaotic dynamics as y increases, shown in Fig. <ref>A. Throughout the paper, initial conditions are chosen randomly. One of the most intriguing features of the bifurcation diagram is the occurrence of period-p windows that contain the critical point x_c, the maximum of the function f(x) = rx(1-x).
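For concreteness, the iteration above can be sketched in a few lines of Python. This is an illustrative sketch rather than the authors' code: the starting parity of sgn_n, the stopping test, and the parameter values are our own assumptions; the last lines collect lifetimes of many randomly initialized trajectories, the quantity underlying the survival probability discussed below.

```python
import numpy as np

def trajectory(a, x1, y1=1.0, n_max=200_000):
    """Iterate x_{n+1} = (3 + y_n) x_n (1 - x_n) together with the
    instantaneous update y_{n+1} = y_n + sgn_n * a * (x_n - 0.5), sgn_n = (-1)^n.
    Iteration stops once x leaves [0, 1], i.e. once 3 + y_n has exceeded 4
    and the trajectory escapes."""
    x, y = x1, y1
    xs, ys = [x], [y]
    for n in range(1, n_max):
        sgn = -1.0 if n % 2 else 1.0   # sgn_n = (-1)^n, with n starting at 1 (a convention we choose)
        x, y = (3.0 + y) * x * (1.0 - x), y + sgn * a * (x - 0.5)
        xs.append(x)
        ys.append(y)
        if x < 0.0 or x > 1.0:         # escape towards infinity: the population goes extinct
            break
    return np.array(xs), np.array(ys)

# Lifetime statistics for randomly chosen x_1 with y_1 = 1 (illustrative values of a and sample size)
rng = np.random.default_rng(0)
lifetimes = np.array([trajectory(a=0.01, x1=rng.random())[0].size for _ in range(200)])
print("median lifetime:", np.median(lifetimes))
```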
The period-p orbit containing x_c satisfies (f_r^p)'(x_c)=0, making the orbit super-attracting (superstable), as exemplified by the well-known period-3 window, which occurs near 3.8284…≤ r ≤ 3.8415… <cit.>. When y_n ≠ const., i.e., a ≠ 0, the system maintains similar dynamics at each value of the growth rate. Hence, for small a (slow change of growth rate), the resulting phase space diagram strongly resembles the bifurcation diagram of the logistic map, shown in Fig. <ref>B. But the essential difference is that Eq. (<ref>) is an intermittent system, exhibiting various intermittency behaviors on different time scales at different values of the growth rates, and that all trajectories become transient. The latter is a consequence of the fact that 3+y_n is not bounded above by 4, and if 3+y_n>4, then the x-dynamics map points outside of the interval [0,1], causing them to escape towards infinity (∞), which, as noted by May<cit.>, implies that the population becomes extinct. When this happens, we stop the iteration, considering the trajectory as having escaped. Since all trajectories will eventually escape from the finite phase space, we now focus on the survival probability, which is related to the distribution of lifetimes. Initiating our trajectories with random values of x_1 and setting y_1 = 1, we find that they exhibit quite different behaviors as a function of the lifetime. Regime 1. A certain fraction of them moves near y=1. These escape after a relatively short time in an exponential way. It is highlighted by the green background, and the zoom-in graph is shown in Fig. <ref>A. Regime 2. For those which stay longer but do not drop into the period-p window, the lifetimes are characterized by a power-law distribution. This distribution has an exponent close to -1/2, P(τ >T) ∝ T^-1/2, where τ represents the time until the escape of an individual trajectory. The exponent 1/2 can be easily explained: In the equation for y_n as shown in Eq.(<ref>), the increments x_n-x_n-1 behave similarly to white noise, as long as the x-dynamics for the given y value is chaotic. Actually, if we disregard the nonlinear dependence of x_n on x_n-1 and assume that they are independent, then the probability distribution for the difference x_n-x_n-1 will be symmetric around 0, even if the distribution of x_n is not. Consequently, the mean value of these increments is 0, so that for strongly chaotic x, they do not impose a systematic drift on y. But if the increments behave like white noise, then y_n behaves like a Brownian path. For a Brownian path, it is well known that the probability of crossing a specific value (in this case, y=1) in the next step drops like t^-3/2, where t is the time since the last crossing of this value. Since we start from y_1=1, the time till reaching y_n>1 asymptotically then follows the law t^-1/2. In our numerical experiments, the empirical value of the exponent is α=-0.5011 (the red dashed line shown in Fig. <ref>B), which reproduces the expected power of 1/2 well. The slight deviation of this numerical value from 1/2 is attributed to some trajectories being stuck in period-p windows for a super-long period of time. It is highlighted by the pink background, and the zoom-in graph of the survival probability is shown in Fig. <ref>B. Regime 3. The most significant behavior occurs near those y-values where the stable period-3 window happens in the logistic map.
Specifically, once trajectories come to the period-3 window, they first move along it until reaching its boundary, where there is a possibility for the trajectories to escape outside of the window for an extremely short burst of chaos. But there is also a high possibility that they will be quickly attracted back to the period-3 window again, repeating similar dynamics for a super-long period of time, resulting in the occurrence of super-long transients. The smaller a is, the slower the motion in the y-direction. Consequently, many suitable y_n are generated near the period-3 window. With small probability, y_n could move upward or downward, leaving the window. However, due to the slow movement, y_n remains extremely close to the period-3 window. Then, because of the stability of the period-3 window, y_n is attracted back to the period-3 window again. As a result, trajectories spend an exceptionally long period of time near the period-3 window, which has a significant impact on the global properties of the system. The ability of the system to bounce back to the period-3 window after chaotic behavior is known as "resilience"<cit.>, which has been defined as the capacity to tolerate perturbations without collapsing. Particularly under environmental changes, measuring, quantifying, and maintaining the resilience becomes critically important. In our model, the control parameter a decides the resilience of the system. In Fig. <ref>, the transient x-dynamics are shown in the time domain, and for different values of a we show exemplary typical trajectories x_n. As we can see from the first 3 panels, when a is larger, cyclic and chaotic behavior (similar to perturbations in realistic systems) appears repeatedly until a regime shift suddenly changes the dynamics and leads trajectories to infinity, which indicates the extinction of species. In this process, even though trajectories can be quickly attracted to the period-3 window after a perturbation, the frequent occurrence of perturbations significantly increases the probability of trajectories escaping, so that they leave the phase space faster, which also means a loss of resilience. When a gets smaller, as shown in the last 2 panels, cyclic behavior becomes more pronounced and perturbations become increasingly rare, which indicates that the period-3 window becomes more attractive. As a result, it is difficult for trajectories to escape from the period-3 window, even under perturbation, which indicates stronger resilience of the system. Consequently, trajectories are stuck in a period-3 window for a super-long period of time, indicating the occurrence of super-long transients. Another quite interesting dynamic in our model is the coexistence of multiple time scales. As we know, the stability of the period-3 window causes trajectories to spend a super-long period of time on it, which is one of the significant time scales. However, there are other periodic windows as well. For example, as a gets smaller, trajectories may also pass through the period-5 window. Although it is less attractive than the period-3 window, it can still capture points for a long period of time. As a continues to decrease, more periodic windows become significant, contributing to the system's complexity. The survival probabilities of Regime 3 with variable control parameter a are shown in Fig. <ref>. It is highlighted by the blue background, and the zoom-in graph of the survival probability is shown in Fig. <ref>C. In Fig. <ref>A and Fig.
<ref>B, we observed that the escape rate gets smaller as a decreases. Starting from a critical point a_c ∈ (0.0050, 0.0060), as a becomes even smaller, the escape rate becomes super sensitive to a, as seen in Fig. <ref>D. The lifetime (lifetime = 1/escape rate) follows an exponential law with the variation of a, with exponent η = -7073. This is a kind of super-transient behavior, previously observed in other systems<cit.>: The transition occurs near stable dynamics without escape, and lifetimes have been observed to depend exponentially on a control parameter. Here, we are unable to obtain the escape rate or lifetime numerically in the limit of a→ 0, but we can see from Fig. <ref>D that the lifetime grows extremely large as a → 0, which indicates the existence of super-long transients. In Fig. <ref>C, by choosing parameters in the interval a ∈ (0.0050, 0.0060), the emergence of the super-long transient tail is shown as a decreases. In Fig. <ref>B, additional fascinating dynamics are observed that align well with trade-offs<cit.> in ecosystems. As seen, with the decrease in a, the escape rate decreases, indicating a longer lifetime. However, at the same time, the population size also diminishes. This suggests that it might not be possible to maximize both lifetime and population size simultaneously. A longer lifetime of ecosystems might have to come at the expense of a reduced population. This reminds us of evolutionary trade-offs: a trait increases in fitness at the expense of decreased fitness in another trait due to limited resources. The balance between different traits contributes to the success of natural systems. Moreover, the survival probability here closely aligns with the numerical result on plankton species richness that Michael J. Behrenfeld illustrates in Fig. 1 of his paper<cit.>. In that paper, neutral theory<cit.>, a framework aimed at explaining the diversity and abundance of species in ecosystems, has been used to explore the role of stochastic processes. In Fig. 1 <cit.>, the survival probability initially exhibits an exponential decay, then undergoes a stochastic process, resulting in different tails (population size) based on the strength of immigration (external factors). This closely resembles our results. The only difference is that our data show different transient tails based on variations in parameter a. This can be explained as follows: in our model, prey (resources) respond to predation, while changes in resources are caused by external factors. The control parameter a, as a scale between resource changes and predation, also indirectly reflects the strength of external factors. This confirms that our results align closely with Behrenfeld's findings, further reinforcing our confidence that our model can serve as a conceptual tool to help ecologists and physicists understand the complexities in ecosystems. After trajectories leave the period-3 window and are not attracted back quickly, they either move upward towards y=1, where they have the possibility to escape in a short time, or move downward towards smaller y-values. Once they move downward within the periodic regime, the values of y systematically decrease, causing the trajectory to follow an inverse period-doubling process along the purple curve in Fig. <ref>B until it reaches 3+y = 3, which acts as a reflecting boundary. Subsequently, the trajectory will be bounced back to the period-3 window or chaotic region again, following the orange curve.
All those processes have the chance to repeat again and again until the trajectory eventually leaves the phase space through 3+y>4. The upper endpoint of the orange curve is dependent on the control parameter a. The larger the value of a is, the further the curve extends. By initializing 2<3+y_n<3, in Fig. <ref>B, all trajectories following the orange curve move upwards into the region from which they have the chance to escape through 3+y>4. For trajectories with 3+y_n<2, they move to minus infinity. Our paper mainly focuses on trajectories that leave the phase space through 3+y>4. Conclusion The evolution of predators is influenced by their prey, meanwhile, prey adapts to predators. This continuous co-evolution of predators and prey contributes to the complicated dynamics in our system. Among these, the most intriguing dynamics occur during periodic windows: i) The existence of super-long transients. For example, trajectories spend a long period of time on the period-3 window, which contributes to the occurrence of super-long transient, but at the same time, also indicates that the time scale on the period-3 window differs from others. ii) The coexistence of multiple time scales. There are many periodic windows, such as the period-3 and the period-5 window. Since trajectories spend different amounts of time on each, this indicates the multiple time scales in our system when a gets even smaller. The cyclic behaviors on periodic windows are similar to cyclic population dynamics in ecosystems. As we know, with the fading out of population cycles in ecosystems becoming increasingly common, the collapse of these cycles has become a very interesting topic and attracts a lot of attention<cit.>. In our model, a as the control parameter, scaling the relationship between predators and prey, determines the persistence of population cycles. This promotes us to ask whether the scale between predators and prey in real-world systems might influence the persistence of population cycles. Moreover, due to the simplicity of our model, this gives us the chance to further explore what contributes to the persistence of population cycles? Until a sudden regime shift broke the population cycles and led the system to extinction, the question arises: Does the cumulative behavior of the system lead to the occurrence of a regime shift<cit.>? And are there any early warning signals for a regime shift<cit.>? Additionally, the coexistence with chaotic behaviors makes us think about the role chaos plays in transient behaviors<cit.>. Our model gives us the chance to analyze all those topics or even conduct linear analysis. Furthermore, the existence of the evolutionary trade-off in our model further supports its credibility. More importantly, the survival probability shown in our model aligns with the findings of Michael J. Behrenfeld <cit.>. All of those indicate that our model, as a conceptual framework, combines most of the dynamic features of ecosystems. This integration might provide us with the opportunity to deeply understand, analyze, and unify these topics within a single model. 33 Hastings2001Hastings, A. Transient dynamics and persistence of ecological systems. Ecol. Lett. 4, 215-220(2001). https://doi.org/10.1046/j.1461-0248.2001.00220.xdoi: 10.1046/j.1461-0248.2001.00220.x Hastings2004Hastings, A. Transients: the key to long-term ecological understanding?. Trends Ecol. Evol.19, 39-45(2004). https://doi.org/10.1016/j.tree.2003.09.007doi: 10.1016/j.tree.2003.09.007 Ims2008Ims, R. A., Henden, J. A. 
& Killengreen, S. T. Collapsing population cycles. Trends Ecol. Evol. 23, 79-86(2008). https://doi.org/10.1016/j.tree.2007.10.010doi: 10.1016/j.tree.2007.10.010 Morozov2016(46)Morozov, A.Yu., Banerjee, M. & Petrovskii, S. V. Long-term transients and complex dynamics of a stage-structured population with time delay and the Allee effect. J. Theor. Biol. 396, 116-124(2016). https://doi.org/10.1016/j.jtbi.2016.02.016doi: 10.1016/j.jtbi.2016.02.016 Hastings2018(420)Hastings, A., et al. Transient phenomena in ecology. Science. 361, eaat6412(2018). https://doi.org/10.1126/science.aat6412doi: 10.1126/science.aat6412 Morozov2020Morozov, A., et al. Long transients in ecology: Theory and applications. Phys. Life Rev. 32, 1-40(2020). https://doi.org/10.1016/j.plrev.2019.09.004doi: 10.1016/j.plrev.2019.09.004 Hastings1994Hastings, A. & Higgins, K. Persistence of transients in spatially structured ecological models. Science. 263, 1133-1136(1994). https://doi.org/10.1126/science.263.5150.1133doi: 10.1126/science.263.5150.1133 Blasius2019Blasius, B., Rudolf, L., Weithoff, G., Gaedke, U. & Fussmann, G. F. Long-term cyclic persistence in an experimental predator–prey system. Nat. 577, 226-230(2020). https://doi.org/10.1038/s41586-019-1857-0doi: 10.1038/s41586-019-1857-0 May1976May, R. M. Simple mathematical models with very complicated dynamics. Nat. 261, 459-467(1976). https://doi.org/10.1038/261459a0doi: 10.1038/261459a0 Tilman1977(1455)Tilman, D. Resource Competition between Plankton Algae: An Experimental and Theoretical Approach. Ecol. 58, 338-348(1977). https://doi.org/10.2307/1935608doi: 10.2307/1935608 Huisman1999(1173)Huisman, J. & Weissing, F. J. Biodiversity of plankton by species oscillations and chaos. Nat. 402, 407-410(1999). https://doi.org/10.1038/46540doi: 10.1038/46540 May1994Tilman, D., May, R. M., Lehman, C. & Nowak, M. Habitat destruction and the extinction debt. Nat. 371, 65-66(1994). https://doi.org/10.1038/371065a0doi: 10.1038/371065a0 Chesson2000diversity(6529)Chesson, P. Mechanisms of Maintenance of Species Diversity. Annu. Rev. Ecol. Syst. 31, 343-366(2000). https://doi.org/10.1146/annurev.ecolsys.31.1.343doi: 10.1146/annurev.ecolsys.31.1.343 Mccann2000diversity(3744)McCann, K. S. The diversity-stability debate. Nat. 405, 228–233(2000). https://doi.org/10.1038/35012234doi: 10.1038/35012234 Telesh2019Telesh, I. V., et al. Chaos theory discloses triggers and drivers of plankton dynamics in stable environment. Sci. Rep. 9, 20351(2019). https://doi.org/10.1038/s41598-019-56851-8doi: 10.1038/s41598-019-56851-8 Behrenfeld2021Behrenfeld, M. J., O'Malley, R., Boss, E., Karp-Boss, L. & Mundt, C. Phytoplankton biodiversity and the inverted paradox. ISME Commun. 1, 52(2021). https://doi.org/10.1038/s43705-021-00056-6doi: 10.1038/s43705-021-00056-6 hutchinson1961paradoxHutchinson, G. E. The paradox of the plankton. Am. Nat. 95, 137-145(1961). https://doi.org/10.1086/282171doi: 10.1086/282171 Frank2011Frank, K. T., Petrie, B., Fisher, J. A. D. & Leggett, W. C. Transient dynamics of an altered large marine ecosystem. Nat. 477, 86-89(2011). https://doi.org/10.1038/nature10285doi: 10.1038/nature10285 Ludwig1978Ludwig, D., Jones, D. D. & Holling, C. S. Qualitative analysis of insect outbreak systems: the spruce budworm and forest. J. Anim. Ecol. 47, 315-332(1978). https://doi.org/10.2307/3939doi: 10.2307/3939 Fukami2005Fukami, T., Martijn Bezemer, T., Mortimer, S. R. & van der Putten, W. H. Species divergence and trait convergence in experimental plant community assembly. Ecol. Lett. 
8, 1283-1290(2005). https://doi.org/10.1111/j.1461-0248.2005.00829.xdoi: 10.1111/j.1461-0248.2005.00829.x Higgins1997Higgins, K., Hastings, A., Sarvela, J. N.& Botsford, L. W. Stochastic dynamics and deterministic skeletons: population behavior of Dungeness crab. Science. 276, 1431-1435(1997). https://doi.org/10.1126/science.276.5317.1431doi: 10.1126/science.276.5317.1431 scheffer1998ecologyScheffer, M., et al. Ecology of Shallow Lakes(Chapman & Hall, London, 1998). YANG20113077Yang, J. S., et al. Mathematical model of Chlorella minutissima UTEX2341 growth and lipid production under photoheterotrophic fermentation conditions. Bioresour. Technol. 102, 3077-3082(2011). https://doi.org/10.1016/j.biortech.2010.10.049doi: 10.1016/j.biortech.2010.10.049 strogatz2018Strogatz, S. H. Nonlinear Dynamics and Chaos(CRC. press, 2018). Holling1973Holling, C. S. Resilience and stability of ecological systems. Annu. Rev. Ecol. Syst. 4, 1-23(1973). https://doi.org/10.1146/annurev.es.04.110173.000245doi: 10.1146/annurev.es.04.110173.000245 Scheffer2001(8000)Scheffer, M., Carpenter, S., Foley, J. A., Folke, C. & Walker, B. Catastrophic shifts in ecosystems. Nat. 413, 591-596(2001). https://doi.org/10.1038/35098000doi: 10.1038/35098000 Marten2018Scheffer, M., et al. Quantifying resilience of humans and other animals. PNAS. 115, 11883-11890(2018). https://doi.org/10.1073/pnas.1810630115doi: 10.1073/pnas.1810630115 Stearns1989Stearns, S. C. Trade-offs in life-history evolution. Funct. Ecol. 3, 259-268(1989). https://doi.org/10.2307/2389364doi: 10.2307/2389364 Hubbell2001Hubbell, S. P. The Unified Neutral Theory of Biodiversity and Biogeography (MPB-32)(Princeton Univ. Press, Princeton, 2011) Scheffer2003(4000)Scheffer, M & Carpenter, S. R. Catastrophic regime shifts in ecosystems: linking theory to observation. TREE. 18, 648-656(2003). https://doi.org/10.1016/j.tree.2003.09.002doi: 10.1016/j.tree.2003.09.002 Scheffer2009(4000)Scheffer, M., et al. Early-warning signals for critical transitions. Nat. 461, 53–59(2009). https://doi.org/10.1038/nature08227doi: 10.1038/nature08227 Carpenter2011(900)Carpenter, S. R., et al. Early Warnings of Regime Shifts: A Whole-Ecosystem Experiment. Science. 332, 1079-1082(2011). https://doi.org/10.1126/science.1203672doi: 10.1126/science.1203672 Hastings1993_710Hastings, A., Hom, C. L., Ellner, S., Turchin, P. & Godfray, H. C. J. Chaos in Ecology: Is Mother Nature a Strange Attractor?. Annu. Rev. Ecol. Syst. 24, 1-33(1993). https://doi.org/10.1146/annurev.es.24.110193.000245doi:10.1146/annurev.es.24.110193.000245 § ACKNOWLEDGEMENTS We are grateful for stimulating discussions with Christian Beck, Peter Grassberger, and Jin Yan. § AUTHOR CONTRIBUTIONS Both authors developed the model system together. Misha Chai performed the numerical simulations and created the figures. All authors contributed to the interpretation of the results and to writing the manuscript. § COMPETING INTERESTS The authors declare that they have no competing financial interests. § CORRESPONDENCE Correspondence and requests for materials should be addressed to chaimisha@pks.mpg.de.
http://arxiv.org/abs/2406.07865v1
20240612044533
FaithFill: Faithful Inpainting for Object Completion Using a Single Reference Image
[ "Rupayan Mallick", "Amr Abdalla", "Sarah Adel Bargal" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
§ ABSTRACT We present FaithFill, a diffusion-based inpainting object completion approach for realistic generation of missing object parts. Typically, multiple reference images are needed to achieve such realistic generation, otherwise the generation would not faithfully preserve shape, texture, color, and background. In this work, we propose a pipeline that utilizes only a single input reference image, having varying lighting, background, object pose, and/or viewpoint. The singular reference image is used to generate multiple views of the object to be inpainted. We demonstrate that FaithFill produces faithful generation of the object's missing parts, together with background/scene preservation, from a single reference image. This is demonstrated through standard similarity metrics, human judgement, and GPT evaluation. Our results are presented on the DreamBooth dataset, and a novel proposed dataset. § INTRODUCTION The success of generative diffusion models has paved the way for successful image editing, including image inpainting. Image inpainting is the go-to solution for recovering occluded or corrupted image/object regions. Using text-to-image models such as Stable Diffusion <cit.> works well for inpainting, but while it produces a plausible and realistic result, that result may not preserve shape, color, or texture features of the original foreground or background. This is due to a lack of contextual information while inpainting. We preserve contextual information by finetuning on a reference image. Typically, generative models - after some finetuning iterations - will give an impressive result, but will not be faithful to either the foreground or the background. We define the preservation of shape, texture, and color as faithfulness. The first technique that introduced faithfulness (`authenticity') in generative inpainting models is RealFill <cit.>. RealFill requires multiple reference images that have very similar viewpoints to the target viewpoint to achieve such authenticity. Most finetuning is based on pluralistic approaches, e.g., using multiple reference images, or pretraining on a similar-domain dataset. The former is more computationally efficient. While RealFill is an example of the former, Paint-By-Example <cit.> is an example of the latter. Paint-By-Example fuses an object from a reference image into the target image in a realistic manner using a diffusion-based framework. Paint-By-Example conditions the model on the reference image at inference time, but finetunes on the entire OpenImages <cit.> dataset at training time. While Paint-By-Example uses a reference image after finetuning, SmartBrush <cit.> uses the target object mask and text conditioning to inpaint in an authentic way. Using such a target object mask preserves the background. The challenge arises when we have a single reference image for finetuning (the most computationally efficient scenario) due to less prior knowledge compared to the pluralistic approaches. The correspondence between the reference and target images might vary quite extensively due to different backgrounds, viewpoints, object poses, object shapes, and lighting conditions.
One-shot finetuning is prone to underfitting the reference image, causing distortion of the object's shape, color, or texture, or not preserving the background during the reconstruction of the masked region. In this work, we propose FaithFill, a generative inpainting technique that is faithful to both foreground and background objects in the image (Figure <ref>), and only requires a single reference image in order to do so. FaithFill finetunes on a single reference image, and requires only a minimal text prompt, without long textual descriptions or prompt tuning. We overcome the risk of one-shot finetuning by generating multiple object views using Neural Radiance Field (NeRF) based models <cit.>. This gives FaithFill additional flexibility in viewpoint change between the reference and the to-be-inpainted (target) image compared to RealFill <cit.>. Concurrent work LeftRefill <cit.> produces multiple views in an autoregressive manner. In contrast, we generate the views in one shot using NeRFs. We demonstrate more faithful inpainting results compared to the state-of-the-art, and to the concurrent work LeftRefill <cit.>, for most instances, both quantitatively and qualitatively. Evaluation is based on the quality of image generation relative to that of the ground truth image using (a) standard similarity metrics used in the recent inpainting literature, (b) human judgement, and (c) GPT evaluation. We summarize our contributions as follows. * We propose FaithFill, a finetuning pipeline that is able to faithfully inpaint objects using a single reference image. Faithfulness is defined to be preservation of shape (pose can change for deformable objects), color, and texture. * We propose the FaithFill Dataset, a dataset of image pairs of 45 objects taken under different lighting conditions, from different viewing angles, and with different background settings. Each pair consists of a reference image and a target image. * FaithFill demonstrates superior performance to state-of-the-art approaches on the six standard image similarity metrics, human judgement, and GPT evaluation. § RELATED WORKS Diffusion Models. Image generation has been revolutionized with the introduction of diffusion models. There are a number of works for image-to-image generation such as DDPM <cit.>, DDIM <cit.>, and text-to-image generation such as DALL-E <cit.>, Imagen <cit.>, Stable Diffusion <cit.>. The main breakthrough introduced by diffusion models lies in reversing the Markovian forward noising process. The models based on this principle achieve state-of-the-art results in many computer vision tasks. Text-to-image diffusion models have been trained on the LAION-5B <cit.> dataset, which enables models to use this as prior information for further finetuning. Finetuning these models leads to impressive results for image editing, controllable image generation, and image personalisation <cit.>, video generation, pose generation <cit.>, texture generation <cit.>, panoramas <cit.>, and 3D meshes <cit.>. Generative Diffusion Based Models for Image Inpainting. Image inpainting is an important problem as filling in the missing regions within an image has a plethora of uses. The inpainting problem has long been studied <cit.>, dating back to before deep learning based models. In deep learning based models, a neural network is trained to complete the missing regions. Furthermore, generative models leveraged image prior(s) for completion of missing region(s). More recently, text-to-image models use text priors to fill in missing image regions.
RePaint <cit.> is one of the early works to use the fundamental principle of diffusion models, iteratively denoising Gaussian noise to fill in the missing regions for any given shape of the target mask. This model is based on the image-to-image diffusion model DDPM <cit.>, conditioning on a given image region. Stable Inpainting <cit.> uses stable diffusion to generate samples from a noisy latent distribution. Stable Inpainting requires additional conditioning on the target masks in addition to that of text. Blended Diffusion <cit.> leveraged the combination of CLIP <cit.> and the Denoising Diffusion Probabilistic Model (DDPM) <cit.> for prompt-based image editing. Combining a pretrained text-image model such as CLIP with DDPM enabled mitigation against adversarial examples, as they blend the text latent along with the image at each denoising step. This method uses a multistep blending process to fill the masked regions. Later, the authors also extended this work using a Text-to-Image model instead of the DDPM <cit.>. GLIDE <cit.> uses mask conditioning in addition to text conditioning for the model to mask the image regions. It uses an ablated diffusion model <cit.> along with a transformer <cit.> in addition to an upsampler to obtain higher resolution images. The closest works to our proposed FaithFill are Paint-By-Example <cit.>, RealFill <cit.>, and, very recently, LeftRefill <cit.>. Paint-By-Example is similar because it targets blending an object from a reference image into the target image. However, Paint-By-Example relies on pretraining with augmentations on an entire dataset, while we only finetune on a single reference image. RealFill or DreamBooth-Inpaint <cit.> use 3-5 reference images for image completion, either requiring n>3 reference images or failing on complex scenarios. In contrast, we finetune on a single reference image. LeftRefill uses Novel View Synthesis (NVS) from a single reference image in an autoregressive manner to generate the right view from a left view for the purpose of image stitching. In contrast, we do not require iterative NVS; we generate the views in one shot using NeRFs and address objects with different backgrounds, lighting, and larger viewpoint change. Generative Adversarial Models for Image Inpainting. Prior to the advent of diffusion models, GANs were employed to tackle the image inpainting task. One of the early methods that did this is Context Encoders (CE) <cit.>, which attempts to perform image inpainting using semantically consistent content. Nevertheless, this method is constrained to filling only square holes located in the center of an image. GAN inversion inpainting methods have been employed in <cit.>, where the latent space of a pretrained GAN is searched for a latent code representing the masked image. Then, the image is reconstructed by reversing the latent code to get the inpainted image. Other deep learning methods have incorporated patch borrowing and patch replacement operations <cit.>. PEPSI <cit.> adopts a single decoder-encoder network for semantic inpainting, in contrast to the use of cascading fine-to-coarse networks. Li et al. <cit.> introduced a visual structure reconstruction layer to entangle the generation of the image structure and the corresponding visual content in a progressive fashion. <cit.> proposes a model for inpainting ultra high-resolution images up to 8K. <cit.> introduces a conditional image-to-image generation framework for multiple-output (diverse) image inpainting.
Other methods focused on improving texture and structure via dual path inpainting <cit.>. Jain   <cit.> combine a coarse-to-fine GAN-based generator with fast Fourier convolution layers to improve both structure and texture generation. Sragsyan   introduced MI-GAN <cit.>, an image inpainting model for mobile devices. <cit.> Integrates Wasserstein Generative Adversarial Network (WGAN) with a reverse masking operator to focus only on the masked part and maintain the rest of the image unchanged. <cit.> proposed SPDNorm to alter the random noise vector of the vanilla GAN network in order to take context constraints into consideration. Ni <cit.> proposed a language-guided Image inpainting model based on the DF-VQGAN. § BACKGROUND: DIFFUSION MODELS Diffusion Models are generative models that gradually invert the Markovian forward process for denoising a Gaussian distribution to generate an image. Latent diffusion models <cit.> work on a latent representation to reduce the computation cost. In this method text prompts are encoded using a pretrained CLIP model. An input image x_0∈ℝ^H × W × 3 is converted into the latent space as z_0 using a pretrained autoencoder ε (, z = ε(x)). The input latent representation goes through the Markov forward process of adding noise for t time steps to produce z_t. The noisy latent representation z_t is obtained as presented in Equation <ref>, where the noise ϵ∼𝒩(0, 𝐈) and {α_t}_t=1^T and α_t is the noise scheduler. z_t = √(α_t)z_0 + (√(1-α_t))ϵ The overall learning objective given the encoded text prompt τ(p_t) for a text-to-image model is presented in Equation <ref>. ℒ = 𝔼_z_0, t, τ(p), ϵ∼𝒩(0, 1)[ ‖ϵ - ϵ_θ(z_t, t, τ(p)) ‖^2_2] For the inference, inversion starts from z_t∼𝒩(0, 𝐈) and reduces noise at t-1 timestep, (z_t-1) using the denoiser ϵ_θ finally obtaining the target denoised z_0 iteratively for t time steps. While any denoiser could be used here, we use the standard U-Net as the denoiser. The z_0 is the latent representation of the target image which is finally decoded using the decoder 𝒟 to the image space. § METHOD: FAITHFILL In this work we are proposing a finetuning framework for a reference-based image inpainting technique for a target image to make it plausible, photo-realistic, and faithful to object attributes in the reference image. In this section we discuss FaithFill's finetuning and inference pipelines. The input to the finetuning pipeline is the reference image (I_ref∈ℝ^H × W × 3), and the input to the inference pipeline is the masked target image (I_tgt∈ℝ^H × W × 3). Our FaithFill finetuning pipeline is depicted in Figure <ref>. This includes a segmentation module to extract the object from the reference image, a diffusion based view generation module to generate multiple views of the extracted object, followed by an inpainting module that reconstructs the masked image. This section will describe the different modules of this pipeline. §.§ FaithFill Finetuning Segmentation Module. This module receives as input the reference image I_ref∈ℝ^H × W × 3. It first determines the object of interest,  the object that needs to be inpainted using I_tgt , { I_ref∩ I_tgt}. We then use the Segment Anything Model (SAM) <cit.> to extract the object. The segmented/extracted object is presented to a diffusion based multiple view synthesis module. The background is removed from different views to preserve the homogeneity and natural blending in the target image I_tgt when inference happens. View Generation Module. 
We perform multi-view synthesis of the extracted object from the Segmentation module using diffusion based NeRFs, this is inspired by Liu  <cit.>. Liu  <cit.> use the Zero123 <cit.> model, a viewpoint conditioned 2D diffusion model for the generation of multi-view images from the input that transforms these views to a 3D space. This is achieved by finetuning a stable diffusion model. Our aim is is not to create a 3D mesh from a single image but rather get multiple 2D views for finetuning later modules in our pipeline. We do not train or finetune the NeRFs model, rather we use its inference as a view generation module to generate N=6 (5 plus original) views as presented in Figure <ref>. Let the set of images with different viewpoints (VP) be denoted as x ∈𝒳_VP, where 𝒳_VP = {x}_n=1^N. Using this module enables us to use reference images that have more flexible viewpoints than state-of-the-art reference based inpainting methods <cit.>. This module could be switched out for any multi-view synthesis module,  <cit.>. Once the views are generated, a Random Mask Generator creates one mask per view {m}_n=1^N. Each mask is randomly centered and masks at least ratio percentage of the view image. The mask consists of a random number of rectangles, each having a random width between 0 and w*ratio, and a random height between 0 and h*ratio such that the masking ratio is achieved. Inpainting Module. The inpainting module consists of a CLIP text encoder <cit.>, a ControlNet <cit.> adapter, and stable inpainting pipeline that uses a U-Net denoiser. The inpainting module takes as input the generated views alongside corresponding randomly generated masks and textual description. The multiple viewpoint images x_n are masked with an inverted mask m_n using a Hadamard product , it's computed as x_n ⊙ (1-m_n). The input to the ControlNet pipeline are {x_n ⊙ (1-m_n), m_n, τ(p) }, where τ(c) is the text embedding, and τ(.) is CLIP text encoder for the prompt p. The output of the ControlNet is then passed to the Stable Inpainting pipeline for filling in the missing region. The ControlNet adapter provides additional control to resist updates to the unmasked regions of the different views. We use a Low Rank Adaptation Technique (LoRA) <cit.> for finetuning the U-Net and the CLIP text encoder in the inpainting module, instead of finetuning of the complete module that is computationally expensive. LoRA injects the trainable low rank residual matrices in addition to the network weight matrices. The pretrained weight matrix W ∈ℝ^n × n is then updated with the low rank decomposition matrices as W + Δ W = W + BA, where B ∈ℝ^n × r and A ∈ℝ^r × n, r<<n. During the finetuning process, the low rank matrices A and B are updated while the network weights W are frozen. The loss function that governs the finetuning is presented in Equation <ref>. ℒ = 𝔼_x, t, τ(p), ϵ∼𝒩(0, 1)[ ‖ϵ - ϵ_θ((x ⊙ (1-m)), m, t, τ(p)) ‖^2_2] §.§ FaithFill Inference Inpainting Module. At inference time, the goal is to complete the missing regions of the I_tgt. The input to the pipeline is { I_tgt, m, τ(p) }, where p is the same text prompt used for finetuning the reference image , “an image of the <object class>.” The inpainting module with the modified weights based on the finetuning on the reference image is used to inpaint the masked target image. This inference module is Stable-Inpainting based on Stable-Diffusion v2 pipeline with a DPMS sampler <cit.>. 
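The Random Mask Generator described above is easy to make concrete. The following NumPy sketch is one possible implementation under our own assumptions; the paper only fixes the overall masking ratio and the per-rectangle size bounds, not the exact sampling or stopping rule.

```python
import numpy as np

def random_mask(h: int, w: int, ratio: float = 0.5, seed=None) -> np.ndarray:
    """Binary mask (1 = region to inpaint) built from random rectangles.

    Rectangles with sides bounded by h*ratio and w*ratio are accumulated
    until at least `ratio` of the pixels are masked.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=np.uint8)
    target = ratio * h * w
    while mask.sum() < target:
        rh = int(rng.integers(1, max(2, int(h * ratio))))  # random height in (0, h*ratio]
        rw = int(rng.integers(1, max(2, int(w * ratio))))  # random width  in (0, w*ratio]
        cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))  # random center
        mask[max(0, cy - rh // 2):min(h, cy + rh // 2 + 1),
             max(0, cx - rw // 2):min(w, cx + rw // 2 + 1)] = 1
    return mask

# Usage: mask one generated view and keep only the visible pixels,
# i.e. the Hadamard product x_n * (1 - m_n) fed to the inpainting module.
view = np.random.rand(512, 512, 3)           # stand-in for a generated view x_n
m = random_mask(512, 512, ratio=0.5, seed=0)
visible = view * (1 - m)[..., None]
```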
In an ideal scenario, the reconstructed image must preserve the regions other than the regions that need to be inpainted, therefore we restrict alterations to the missing regions by employing a binary mask {0,1}∈ℝ^H × W. Following <cit.> and other inpainting works, 0 denotes the regions to skip whereas 1 denotes the regions to fill. § EXPERIMENTS In this section we first introduce the benchmark datasets used for our experiments, followed by the evaluation metrics used to compare inpainted results against ground-truth. We then present the implementation details for reproducing our experimental setup and results. Finally, we present and discuss qualitative and quantitative inpainting results of similarity metrics, human judgement, and GPT judgement for FaithFill and comparison to previous state-of-the-art. §.§ Benchmark Datasets We evaluate on the DreamBooth dataset <cit.> that contains multiple reference images for a single subject/object (total of 30 subjects/objects). Each subject/object has 3-5 casually taken reference images. We randomly sample a pair of images from each subject/object where we randomly assign reference and target image roles. This dataset presents multiple views, backgrounds, lighting settings In addition to the DreamBooth dataset, we created our FaithFill Dataset for a more comprehensive evaluation on a larger number of images with larger viewpoint variations. The FaithFill dataset consists of 45 objects where they were selected for their intricate structures and varied textures. For each object, we captured two images depicting it from different viewpoints, under different lighting, and against different backgrounds. Each pair constitutes a reference image and a target image. This dataset will be made publicly available. §.§ Implementation Details The multi-view generation module is used in the inference time with the settings mentioned by the authors in <cit.>. Further, the number of iterations to finetune our model depends on the dataset used. We finetune 1100 iterations or timesteps for the DreamBooth dataset on a single 40GB NVIDIA A100 GPU. We use 1500 iterations or timesteps for the FaithFill dataset. We hypothesize that this is because of a larger viewpoint difference between reference and target images in the FaithFill dataset. We set the masking percentage of the Random Mask Generation module to 50%. The text-prompt used as input to the text encoder is of the form `An image of <object>'. As mentioned in Section <ref>, we use LoRA based models. The LoRA rank for both the datasets is set to be 4. The guidance scale for inference is set to be 7.5. The learning rates are set to be 5e-4. For the baseline experiments the hyperparameters are kept as is recommended by the respective authors. §.§ Evaluation We evaluate on both the publicly available DreamBooth and our proposed FaithFill datasets qualitatively and quantitatively. We compare against state-of-the-art using image similarity metrics, human judgement, and GPT evaluation. For image similarity metrics, we use low-level perceptual similarity that computes the texture and color, mid-level semantic differences such as layout and pose, as well as high level differences that engrave more high-level attributes. The low-level SSIM is a patch based similarity matrix that computes the difference in structural similarity. SSIM fails to capture some human perception nuances, thus LPIPS was introduced. LPIPS computes the feature distance between the two patches. 
PSNR is another low-level pixel-wise image similarity comparing the signal to that of the background noise. For mid-level semantic differences, the images are evaluated on DreamSIM, that aims to have evaluation standards similar to that of human perception. The use of the DreamSIM enables to bridge the gap between the low-level and high level semantics. DINO and CLIP are used for high-level semantic differences. CLIP differentiates between semantic consistency while DINO is used for the semantic parts. CLIP computes the mean cosine similarity of the embeddings computed using CLIP between the ground truth and the generated image, while <cit.> introduced DINO where it computes the mean cosine similarity between the ViT-S/16 features between the generated and ground truth image. In addition to the presented similarity metrics and quantitative results, we conducted a user study and a GPT evaluation comparing each FaithFill generation to each state-of-the-art method generation  the ground-truth target image. For the user study, we use the Amazon Mechanical Turk (AMT) crowdsourcing marketplace to recruit crowd workers. We accept AMT workers who had previously completed at least 1000 tasks (a.k.a HITs), and maintained an approval rating of at least 98%. We compensate the work of all crowd workers who participated in our tasks. In the user study, we ask a random set of nine evaluators for each comparison to determine which image most closely resembles the target image. Each subtask presents the worker with one target image and two generated images, one is FaithFill generated, and the second is a competing state-of-the-art method generation. A sample interface question is presented in Figure <ref>. The worker is asked to select which of the two images is more similar to the target image. We post all HITs simultaneously, while randomizing the presentation order of the FaithFill other images. We allot a maximum of ten minutes to complete each HIT and paid $0.10 per HIT. For the GPT evaluation, we presented GPT-4o with the same question setup we described for the user study. We run the evaluation three times on each pair of images. Figure <ref> shows a sample output from the GPT-4o study. §.§ Results: Comparisons to State-of-the-Art In this work, we are comparing against seven different  techniques that are diffusion based: RePaint, GLIDE, Blended Latent Diffusion, Stable Inpainting, Stable Inpainting FT, Paint-By-Example, and LeftRefill. The first four do not use a reference image, and therefore have no prior for faithful reconstruction. In contrast, FaithFill finetunes on a single reference image. We therefore additionally compare against the three state-of-the-art methods that employ reference images for inpainting: Stable Inpainting FT is a version of <cit.> that uses a single reference image for fair comparison. Paint-By-Example finetunes on the OpenImages dataset and uses a single reference image at inference time. In contrast, we are only finetuning on a single reference image. LeftRefill finetunes on a single reference image to generate multiple views using an autoregressive NVS approach, while using a frozen inpainting pipeline. In contrast, we use a one-shot approach for generating the views and finetune an inpainting module on these generated views. Figure <ref> presents additional qualitative results to those presented in Figure <ref> for  techniques: Stable Inpainting FT, LeftRefill, and Paint-By-Example. 
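The high-level similarity scores described above are straightforward to reproduce. The sketch below shows one way to compute the CLIP and DINO similarities between a generated image and its ground truth; the specific checkpoints and preprocessing are our assumptions based on common practice, not the exact evaluation code used here.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# CLIP score: mean cosine similarity between CLIP image embeddings of the
# ground-truth and generated images (the checkpoint choice is an assumption).
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(img_a: Image.Image, img_b: Image.Image) -> float:
    inputs = proc(images=[img_a, img_b], return_tensors="pt")
    with torch.no_grad():
        emb = clip.get_image_features(**inputs)
    emb = F.normalize(emb, dim=-1)
    return (emb[0] @ emb[1]).item()

# DINO score: cosine similarity of self-supervised ViT-S/16 features.
dino = torch.hub.load("facebookresearch/dino:main", "dino_vits16").eval()

def dino_similarity(t_a: torch.Tensor, t_b: torch.Tensor) -> float:
    # t_a, t_b: preprocessed image tensors of shape (1, 3, 224, 224)
    with torch.no_grad():
        feats = torch.cat([dino(t_a), dino(t_b)], dim=0)
    feats = F.normalize(feats, dim=-1)
    return (feats[0] @ feats[1]).item()
```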
FaithFill generated images are more faithful to the original object attributes compared to  techniques that use a single reference image. Images from both the DreamBooth and FaithFill datasets are presented. Table <ref> presents the quantitative evaluations of FaithFill   techniques for the DreamBooth and FaithFill datasets, respectively. We note that Stable Inpainting FT represents the RealFill pipeline under a one-image configuration. In Table <ref>, metrics are grouped in low, mid, and high level metrics as defined in  <cit.> by Fu . Low level metrics (SSIM, PSNR, and LPIPS) are used to evaluate model performance on the pixel level. DreamSIM is a mid-level similarity metric that captures differences in coarse details, such as object pose, semantic features, and image layout. DINO and CLIP are high-level metrics that aim to asses the similarity between original and generated images using high-level feature correlation. FaithFill achieves best results on most metrics compared to other state-of-the-art methods and better or comparable results to the concurrent work LeftRefill <cit.>. Figure <ref> presents the results of the user study and the GPT evaluation on all images of the DreamBooth and FaithFill datasets. We can see that the FaithFill generated result is selected to be more similar to the target image over state-of-the-art methods in almost all scenarios. Concurrent work LeftRefill is favored in the AMT study 12% of the time more than FaithFill, while FaithFill is favored in the GPT study 42.2% of the time more than LeftRefill. Additional Capabilities of FaithFill are demonstrated in Figure <ref>. Such capabilities can be used for various image editing tasks,  to remove objects occluding other objects while inpainting the occluded image faithfully, or to inpaint objects on different backgrounds. The former can be achieved by masking the occluding object before inference. The latter can be achieved by masking the region the object should be inserted before inference. § CONCLUSIONS We propose a novel finetuning framework, FaithFill, for object inpainting. FaithFill completes the part of an object that is missing due to any reason from the image,  occluded by another object. We focus on completing the missing region in a faithful manner, that is, maintaining the same object structure, color, and texture, together with not altering the target background. FaithFill only needs a single reference image as input to achieve faithful inpainting of a target image. The FaithFill training pipeline includes a segmentation module to extract the object from the reference image, a diffusion based view generation module to generate multiple views of the extracted object, followed by an inpainting module that reconstructs the masked image. This work demonstrates the ability to finetune using a single reference image to produce high quality inpainting faithful to the provided reference image features, despite reasonable discrepancies in viewpoints, poses, lighting conditions, and backgrounds. Additionally, we propose the FaithFill dataset, which consists of image pairs of different objects taken in various conditions,  different viewpoints, lighting, setting, to further enrich the evaluation and share with the research community. § LIMITATIONS AND NEGATIVE IMPACTS Limitations. Since FaithFill relies on NeRFs to generate multiple views, the inpainting process might become challenging when working with a reference image that is hard to generate views for ( poor views implies poor inpainting). 
In addition, if the viewpoints of the reference and target images are drastically different (e.g., back and front), FaithFill will obviously struggle to inpaint the target image. Furthermore, since FaithFill depends on the prior knowledge of the pretrained base model, Stable Diffusion, it inherits some of the challenges that model faces with generating fine details, such as text. Although FaithFill is capable of hallucinating the details of a fully occluded object based on the reference image, it cannot guarantee generating the same pose as the original object/subject in the ground truth target image. Finally, choosing the number of iterations needed for finetuning can be tricky. In our work we set a fixed number of iterations for each dataset; having a `slider' for each image being edited would probably give better results. Negative Impacts. FaithFill is a tool that members of society can use to unleash their creativity and improve the quality of their images and/or photographs. Nonetheless, as inpainting is an image editing technique, this research inherits all the potential negative impacts associated with image editing, e.g., placing a political figure in a contentious location or alongside a controversial person. In addition, this research involves the use of computationally intensive models, suggesting elevated energy consumption that may have environmental implications. In an effort to minimize that, we used frozen weights whenever possible.
http://arxiv.org/abs/2406.08773v1
20240613030536
DenoiseReID: Denoising Model for Representation Learning of Person Re-Identification
[ "Zhengrui Xu", "Guan'an Wang", "Xiaowen Huang", "Jitao Sang" ]
cs.CV
[ "cs.CV" ]
Revealing hidden medium-range order in silicate glass-formers using many-body correlation functions Walter Kob June 17, 2024 ===================================================================================================== § ABSTRACT The denoising model has been proven a powerful generative model but has little exploration of discriminative tasks. Representation learning is important in discriminative tasks, which is defined as "learning representations (or features) of the data that make it easier to extract useful information when building classifiers or other predictors" in <cit.>. In this paper, we propose a novel Denoising Model for Representation Learning and take Person Re-Identification (ReID) as a benchmark task, named DenoiseReID, to improve feature discriminative with joint feature extraction and denoising. We first view each embedding layer in a backbone as a denoising layer, processing the cascaded embedding layers as if we are recursively denoise features step-by-step. This unifies the frameworks of feature extraction and feature denoising, where the former progressively embeds features from low-level to high-level, and the latter recursively denoises features step-by-step. Then we design a novel Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA) and theoretically demonstrate its equivalence before and after fusion. FEFDFA merges parameters of the denoising layers into existing embedding layers, thus making feature denoising computation-free. This is a label-free algorithm to incrementally improve feature also complementary to the label if available. Experimental results on 4 ReID datasets and various of backbones show the stability and impressive improvements. We also extend the proposed method to large-scale (ImageNet) and fine-grained (CUB200, Oxford-Pet, Flowers) classification tasks, similar improvements are observed. The code will be released on github. § INTRODUCTION Denoising Diffusion Probabilistic Models (DDPM)  <cit.> or Diffusion Model for short have been proven to be a powerful generative model <cit.>. Generative models can generate vivid samples (such as images, audio and video) by modelling the joint distribution of the data P(X, Y), where X is the sample and Y is the condition. Diffusion models achieve this goal by adding Gaussian noise to the data and training a denoising model of inversion to predict the noise. Diffusion models can generate multi-formity and rich samples, such as Stable diffusion <cit.>, DALL <cit.> series and Midjourney, these powerful image generation models, which are essentially diffusion models. However, its application to discriminative models has not been extensively explored. Different from generative models, discriminative models predict data labels by modelling the marginal distribution of the data P(Y|X). Y can be various labels, such as image tags for classification, object boxes for detection, and pixel tags for segmentation. Currently, there are several methods based on diffusion models implemented in specific fields. For example, DiffusionDet <cit.> is a new object detection framework that models object detection as a denoising diffusion process from noise boxes to object boxes. It describes object detection as a generative denoising process and performs well compared to previous mature object detectors. DiffSeg <cit.> for image segmentation, which is a method of unsupervised zero-shot sample segmentation using pre-trained models (stable diffusion). 
It introduces a simple and effective iterative merging process to measure the attention maps between KL divergence and merge them into an effective segmentation mask. The proposed method does not require any training or language dependency to extract the quality segmentation of any image. The methods above are carefully designed for specific tasks and require a particular data structure. For example, DiffusionDet <cit.> uses noise boxes and DiffSeg <cit.> uses noise segmentation. In this paper, we explore a more general conception of how the denoising model can improve representation learning, i.e. "learning representations (or features) of the data that make it easier to extract useful information when building classifiers or other predictors"  <cit.>, and contribute to discriminative models. We take person Re-Identification (ReID) <cit.> as a benchmark task. ReID aims to match images of a pedestrian under disjoint cameras, and is suffered by pose, lighting, occlusion and so on, thus requiring more identity-discriminative feature. A straightforward approach is applying the denoising process to a backbone's final feature  <cit.>, reducing noise in the final output and making the feature more discriminative, as Fig. <ref>(b) shows. However, this way can be computationally intensive. Because the denoising layer needs to be proceeded on the output of the previous one in a recursive and step-by-step manner. Considering that a backbone typically consists of cascaded embedding layers (e.g., convolution layer, multi-head attention layer), we propose a novel perspective: treating each embedding layer as a denoising layer. As shown in Fig. <ref>(c), it allows us to process the cascaded layers as if we are recursively proceeding through the denoising layer step-by-step. This method transforms the backbone into a series of denoising layers, each working on a different feature extraction level. While this idea is intuitive and simple, its practical implementation presents a significant challenge. The main issue arises from the requirement of the denoising layer for the input and output features to exist in the same feature space. However, in a typical backbone (e.g. ResNet <cit.>, ViT <cit.>)), the layers progressively map features from a low level to a high level. It means that the feature space changes from layer to layer, which contradicts the requirement of the denoising layer. To resolve all the difficulties above and efficiently apply the denoising process to improve discriminative tasks, our proposed Denoising Model for Representation Learning of Person Re-Identification (DenoiseReID) is as below: Firstly, we utilize a well-trained backbone and keep it fixed throughout all subsequent procedures. This step is a free launch as we can easily use any publicly available backbone without requiring additional training time. This approach allows us to preserve the backbone's inherent characteristics of semantic feature extraction. Given the backbone and an image, we can get a list of features. Next, we train denoising layers on those features. The weights of denoising layers are randomly initialized and their weights are not shared. The training process is the same as that in DDPM <cit.>, where the only difference is that the denoising layer in DDPM takes a dynamic t ∈ [1, T], and our denoising layers take fixed n ∈ [1, N], where n is the layer index, T is denoise times and N is backbone layer number as shown in Fig. <ref>(c). 
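As an illustration of this training recipe, the PyTorch sketch below trains one denoising layer per frozen backbone block with the standard DDPM noise-prediction loss, using the fixed per-layer timestep described above. The denoiser architecture, feature dimension, and noise schedule are our own illustrative assumptions, not the exact configuration of DenoiseReID.

```python
import torch
import torch.nn as nn

T = 1000                                    # steps used to define the noise schedule
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class DenoiseLayer(nn.Module):
    """One per-block denoiser D_theta; weights are not shared across blocks."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    def forward(self, x):
        return self.net(x)

def train_step(features, denoisers, optimizer):
    """features[n]: output of frozen backbone block n, shape (B, dim).
    The n-th denoiser is trained at the fixed timestep t = n + 1."""
    loss = 0.0
    for n, (feat, den) in enumerate(zip(features, denoisers)):
        a_bar = alphas_bar[n + 1]                              # fixed per-layer timestep
        eps = torch.randn_like(feat)
        x_t = a_bar.sqrt() * feat + (1 - a_bar).sqrt() * eps   # forward noising q(x_t|x_0)
        loss = loss + ((eps - den(x_t)) ** 2).mean()           # DDPM noise-prediction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Minimal usage with stand-in features from a frozen 12-block backbone:
dim, blocks = 384, 12
denoisers = nn.ModuleList(DenoiseLayer(dim) for _ in range(blocks))
opt = torch.optim.AdamW(denoisers.parameters(), lr=4e-4)
feats = [torch.randn(64, dim) for _ in range(blocks)]          # frozen backbone outputs
print(train_step(feats, denoisers, opt))
```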
Finally, considering that the N denoising layers consume additional execution latency, we propose a novel Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA). As shown in Fig. <ref>(d), FEFDFA merges parameters of extra denoising layers into weights of the existing embedding layers, thus enabling joint feature extraction and denoising without any extra computation cost. We also theoretically demonstrate the total equivalence before and after parameter fusion. Please see Section <ref> and Eq (<ref>) for more details. Our contributions can be summarized as follows: (1) We propose a novel Denoising Model for Representation Learning of Person Re-Identification (DenoiseReID), which innovatively integrates the denoising process, originating from generative tasks, into the discriminative tasks. It treats N cascaded embedding layers of a backbone as T times recursively proceeded denoising layers. This idea enables joint feature extraction and denoising is a backbone, thus making features more discriminative. (2) We propose a novel Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA), which fuses the parameters of the denoising layers into the parameters of the corresponding embedding layers and theoretically demonstrates their equivalence. This contributes to a computation-efficient algorithm, which takes no extra latency. (3) Extensive experiments on 4 ReID datasets verified that our proposed DenoiseReID can effectively improve feature performance in a label-free manner and performs better in the case of label-argumented supervised training or introduction of additional training data. We also extend our proposed method to large-scale (ImageNet) and fine-grained (e.g. CUB200, Oxford-Pet, etc.) image classifications, showing its effectiveness. § RELATED WORK Generative models learn the distribution of inputs before estimating class probabilities. A generative model is a model that learns the data generation process by learning the probability distribution of the input data and generating new data samples. The generative models first estimate the conditional density of categories P(x|y = k) and prior category probabilities P(y = k) from the training data. The P(x) is obtained by the full probability formula. So as to model the probability distribution of each type of data. Generative models can generate new samples by modelling data distribution. For example, Generative Adversarial Networks (GANs) <cit.> and Variational Autoencoders (VAEs) <cit.> are both classic generative models that generate real samples by learning potential representations of data distributions, demonstrating excellent performance in data distribution modeling. Recent research has focused on using diffusion models for generative tasks. The diffusion model was first proposed by the article <cit.> in 2015, with the aim of eliminating Gaussian noise from continuous application to training images. The DDPM <cit.> proposed in 2020 have made the use of diffusion models for image generation mainstream. In addition to its powerful generation ability, the diffusion model also has good denoising ability through noise sampling, which can denoise noisy data and restore its original data distribution. Discriminative models learn condition datatribution, e.g. 
P(y|x), where x is input data and y is task-relative features, classiification task <cit.> map data into its tags, retrieval task <cit.> map data into a feature space where similar data should be near otherwise farawary, detection task <cit.> map data into is space position and size. Person Re-Identification (ReID) is a fine-grained retrieval task that solves the challenging task of identifying individuals in different camera views. Due to its fine-grained nature, it has a more discriminative feature, so we chose it as our benchmark task. Existing ReID methods can be grouped into hand-crafted descriptors <cit.> incorporated with metric learning <cit.> and deep learning algorithms <cit.>. State-of-the-art ReID models often leverage convolutional neural networks (CNNs) <cit.> to capture intricate spatial relationships and hierarchical features within person images. Attention mechanisms <cit.>, spatial-temporal modeling <cit.>, and domain adaptation techniques <cit.> have further enhanced the adaptability of ReID models to diverse and challenging real-world scenarios. Reparameterization of a model adopts an approximate equivalent new method to replace an existing modeling method. Reparameterization can reduce the number of parameters, reduce model computational complexity, and improve design flexibility. As RepVGG <cit.> proposed, a multi-branch structure is set up in the VGG network, which only has linear operations inside. During the inference stage, the parameters of multiple branches are merged into parameters of a single branch through reparameterization, without the need for retraining, thereby reducing the number of parameters and the time complexity of inference. § DENOISING REPRESENTATION FOR PERSON RE-IDENTIFICATION §.§ Review Representation Learning of Person Re-Identification The standard ReID pipeline consists of a backbone to learn representation followed by a classification loss and triplet loss for learning an embedding space where images of the same identity have high similarity. Common backbones include CNN series  <cit.> and Transformer series  <cit.>. In this work, we chose TransReID-SSL <cit.> as the baseline, which uses ViT as the backbone for feature extraction. As shown in the structural diagram on Fig. <ref>, the input image is first passed through PatchEmbed, which splits the image into chunks, and these chunks are fed to the transformer encoder. After that, it goes through N Blocks for feature extraction and outputs a feature vector to represent this image. §.§ Feature Extraction and Feature Denoising Unified Framework In representing learning, feature representation is the most important factor that affects the person re-identification (ReID) accuracy, so we would like to learn features with better representation to improve the retrieval performance. We suppose that in the inference stage, the features obtained by backbone extraction are noisy, which may come from the input image or may be generated during the feature extraction process. To solve the problem above, we propose a novel Feature Extraction and Feature Denoising Unified Framework (FEFDUF), which unifies feature extraction and feature denoising in a single backbone to obtain semantic and discriminative features. We refer to the diffusion modeling approach to denoise the noisy features through T-steps to obtain clean features. 
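The T-step denoising just mentioned amounts to running the DDPM reverse process on the backbone's output feature. A minimal sketch is given below; it assumes a trained noise predictor D_theta(x, t) conditioned on the step and a linear beta schedule, both of which are our assumptions for illustration (the training of the noise predictor is described next).

```python
import torch

@torch.no_grad()
def denoise_feature(x_t, denoiser, t_start, betas):
    """Run the DDPM reverse process for t_start steps on a backbone feature.

    x_t      : (B, dim) feature treated as a noisy sample.
    denoiser : trained noise predictor D_theta(x, t).
    t_start  : assumed noise level of the feature (e.g. 10-15 above).
    """
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    x = x_t
    for t in range(t_start, 0, -1):
        a, a_bar = alphas[t], alphas_bar[t]
        t_vec = torch.full((x.size(0),), t, device=x.device)
        eps_hat = denoiser(x, t_vec)
        mean = (x - (1 - a) / (1 - a_bar).sqrt() * eps_hat) / a.sqrt()
        noise = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise        # sigma_t^2 = beta_t choice of DDPM
    return x                                      # cleaned feature X_0
```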
At the beginning, we use the features output from the backbone network as data samples for diffusion training, and get the noisy samples by continuously adding noise and learning through the network in order to simulate the data distribution of its features. q(𝐱_1:T | 𝐱_0) ∏_t=1^T q(𝐱_t | 𝐱_t-1) q(𝐱_t|𝐱_t-1) 𝒩(𝐱_t; √(1-β_t)𝐱_t-1, β_t 𝐈) where X_0 represents the feature vector output by the backbone, t represents the diffusion step size, β_t is a set of pre-set parameters, and X_t represents the noise sample obtained through diffusion process. In the inference stage, as shown in Fig. <ref>(b), we perform T-step denoising on the output features, to obtain cleaner features and improve the expressiveness of the features. p_θ(𝐱_0:T) p(𝐱_T)∏_t=1^T p_θ(𝐱_t-1|𝐱_t) p_θ(𝐱_t-1|𝐱_t) 𝒩(𝐱_t-1; μ_θ(𝐱_t, t), Σ_θ(𝐱_t, t)) where X_t represents the feature vector output by the backbone in the inference stage, which we suppose contains some noise. T is the denoising step size, representing the magnitude of the noise. We adjust t appropriately based on different datasets and backbones to obtain the optimal denoising amplitude. In ViT-base and ViT-small backbone, t is set to 10 and 15, respectively. According to p_θ(𝐱_t-1|𝐱_t) denoise it step by step, and finally obtains X_0, which represents the clean feature after denoising. §.§ Feature Extraction and Feature Denoising Fusion Algorithm As described in section <ref>, the proposed Feature Extraction and Feature Denoising Unified Framework (FEFDUF) could effectively polish noisy features but extra inference latency is introduced caused by recursive calling of the denoising layers. To solve the problem above, we proposed a novel Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA) to fuse parameters of the extra denoising layers into parameters of existing backbone layers and get a computation-free method in the final. FEFDFA expands the linear layer in the MLP of each block into two branches, one for its original FC module and the other for FEFDFA proposed in this paper. As shown on Fig. <ref>, during the training phase, we freeze the original parameters and only trained the FEFDFA. The training method is consistent with section <ref>, and the features are diffused and fed into the predictive noise network of FEFDFA for prediction. For details, please refer to Algorithm. <ref>. In the inference stage, we use the reparameterization method to combine the pre-trained W parameters with the denoising module parameters, merging the two branches into a single branch without additional inference time. The derivation of parameter merging is as follows: X_t-1 =1/√(a_t) (X_t - 1-a_t/√(1 - a̅_̅t̅) D_θ(X_t, t))+σ_tz where a_t = 1- β_t, D_θ are the parameters of the prediction noise network. Y = WX + b 1/√(a_t)X_t - X_t-1 = 1-a_t/√(a_t)√(1-a̅_̅t̅)D_θ X_t-σ_tz 1/√(a_t)Y_t - Y_t-1 = 1-a_t/√(a_t)√(1-a̅_̅t̅)D_θ Y_t-σ_tz We make a simple transformation of Eq. (<ref>) and multiply both sides simultaneously by W. The simplified equation can be obtained by bringing Y_t in terms of WX_t + b: Y_t-1 = [W - C_1(t) WW_D]X_t+WC_2(t)C_3 + b C_1(t) = 1-a_t/√(a_t)√(1-a̅_̅t̅) C_2(t) = 1-a̅_̅t̅-̅1̅/1-a̅_̅t̅β_t C_3 = Z∼ N(0, I) where W_D denotes the parameters of D_θ(X_t, t), X_t denotes the input of this linear layer, Y_t denotes the output of this linear layer, and Y_t-1 denotes the result after denoising in one step of Y_t. Due to the cascading relationship of blocks, as detailed in Algorithm. 
<ref>, different t values are set according to the order between levels, and the one-step denoising of one layer is combined to achieve the denoising process of Y_t → Y_0, ensuring the continuity of denoising and ultimately obtaining clean features. We split the original single branch into a dual branch structure. During the training phase, the backbone maintains its original parameters and needs to train the denoising module parameters. In the inference stage, as shown on the left side of Fig. <ref>, we use the method of reparameterization, to replace the original W parameter with W', where W' = [W - C_1(t) WW_D] in Eq. (<ref>), which has the same number of parameters as W, thus achieving the combination of FC operation and denoising without additional time cost. It is a Computation-free method. In Eq. (<ref>), we achieve one-step denoising Y_t → Y_t-1. If we need to increase the denoising amplitude, we can extend it to two-step or multi-step denoising. The following is the derivation formula for two-step denoising: 1/√(a_t)Y_t - Y_t-1 = C_1(t)D_θ Y_t-σ_tz 1/√(a_t-1)Y_t-1 - Y_t-2 = C_1(t-1)D_θ Y_t-1-σ_t-1z We can obtain this by eliminating Y_t-1 from Eq.(<ref>) and Eq.(<ref>) and replacing Y_t with WX_t+b: Y_t-2 = W”X_t + C” W” = 1/√(a_t-1){W/√(a_t) -[C_1(t) +C_1(t-1) ]WW_D + √(a_t)C_1(t-1)C_1(t)WW_DW_D} C” = 1/√(a_t-1)[WC_2(t)+ √(a_t)WC_2(t-1) -√(a_t)C1_(t-1)C_2(t)WW_D ]Z+b Note that a single module completes two steps of denoising. To ensure the continuity of denoising, the t value should be sequentially reduced by 2. The FEFDFA we propose is based on feature level denoising and can be migrated to various downstream tasks. It denoises the features on each layer for better removal of noise at each stage, as the noise in the inference stage comes from multiple sources, which could be noise in the input image or noise generated while passing through the network. Denoising each layer avoids noise accumulation and gives better quality output. And according to the noise challenges brought by data in different scenarios, the denoising intensity can be adjusted by controlling t, β_t, and the number of denoising times, which has good generalization ability. §.§ Unsupervised Learning Manner Our proposed FEFDFA is suitable for label-free unsupervised training because its essence is a generative model that models data by learning its distribution. Thus the training Loss contains only the Loss_p of prediction noise module: Loss_p = ϵ - D_θ(X_t, t) where ϵ denotes the sampled noise, X_t denotes the noise sample, t denotes the diffusion step, and D_θ(X_t, t) denotes the noise predicted by the model. However, it is worth noting that our method is complementary to label if the label is available. Loss_ReID is the original search loss for baseline, λ is the trade-off parameter between two loss functions. The label-argumented learning is defined as: Loss = (1-λ)Loss_ReID + λ Loss_p Results in experiments section <ref> shows the improved performance by introducing label information for supervised training. § EXPERIMENTS §.§ FEFDFA Performance Analysis As mentioned in Section <ref>, Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA) is an unsupervised denoising module, and its training does not require the assistance of label information. We conducte the following experiments to identify three key issues. (1) Is this label-free and unsupervised training denoising plugin effective? 
As shown in Table <ref> (line2), compared with baseline method (line1), the baseline method performs better after adding our label-free plugin, which shows that our FEFDFA does have denoising capability for features. (2) Could introducing label information for supervised training further improve performance? Introducing label information is actually adding Loss_ReID as mentioned in Section <ref> as a supervised signal. As shown in Table <ref> (line3), baseline method with label-argumented FEFDFA achieve improvements of 0.32% - 0.70% on the mAP metric, indicating that our denoising plugin has label compatibility, in other words, the plug-in is effective for feature denoising regardless of label-argumented supervised or lable-free unsupervised training. (3) Since our plugin can perform unsupervised denoising of features, it is natural to think about whether adding more data for training the plugin could further improve its performance? We merge four datasets for training and then test on each dataset using mAP to evaluate. Comparing the results of training on sigle dataset (line2) with on merged datasets (line4), we found that adopting other datasets for unsupervised learning can further improve the performance of FEFDFA, which also proves that FEFDFA has good generalization ability. To demonstrate that our method can perform unsupervised learning and has good generalization, we merged four datasets and rearranged the sequence IDs to ensure the reliability of the experiment. The model is tested on the entire dataset. During the training process, we freeze the baseline parameters and only train the DenoiseReID module, without the need for labels, for unsupervised learning. Then test on a single dataset and compare the results of training on a single dataset. As shown in Table <ref>, it can be observed that adding unlabeled training data from different datasets can improve the model's performance on a single dataset, proving that this module has a certain degree of generalization. §.§ Comparison with State-of-the-Art ReID Methods We compare several state-of-the-art ReID methods on four datasets. One of the best performing comparison methods is TransReID-SSL, which is a series of ReID methods based on the ViT backbones. Other methods are based on structures such as CNNs. We add the proposed FEFDFA to TransReID-SSL series and observe their performance. As shown in Table <ref>, we have the following findings: (1) our method stands out on four datasets on ViT-base backbone with a large number of parameters, achieving almost the best performance on two evaluation metrics. (2) The methods using our plugin outperforms the original methods with the same backbone on all datasets. In addition, the performance improvement of small-scale backbones with the addition of FEFDFA is more significant than the large-scale backbones approach due to the fact that FEFDFA is essentially a denoising module that removes the noise contained in the features during the inference stage. For large-scale backbones, the extracted features already have good performance, so the denoising amplitude is limited. It has already fitted the dataset well. For small-scale backbones with poor performance, due to their limited fitting ability, there is a certain amount of noise in the extracted features during the inference stage. Denoising them can obtain better feedback. (3) In fact, our FEFDFA can be applied to any other backbone, just add it to each layer. 
In particular, the performance improvement from adding the denoising plugin to a poorly performing backbone might be even more significant. This needs to be verified in subsequent work. However, it is undeniable that we have verified the denoising ability of FEFDFA on the currently best-performing ReID method. In this section, a comparative analysis was conducted on four datasets to assess various existing ReID methods. These methods represent current mainstream ReID approaches, employing ResNet101, ViT-S, ViT-B, and ResNet50 as backbone architectures for feature extraction, respectively. Experimental results indicate that our proposed method outperforms other approaches in terms of both mAP and Rank-1 metrics. §.§ Parameter Fusion Performance Analysis Our Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA) is computation-free: in Section <ref>, we proved by theoretical derivation that inserting our plugin into each layer and fusing it does not introduce additional computation. In this section, we also conduct related validation experiments, the results of which are shown in Table <ref>. Compared to the baseline method TransReID-SSL, adding FEFDUF improves performance, proving that feature-based denoising is effective. However, it also brings extra inference latency (about 15%), because it adds an extra parameter-independent denoising module at the end of the model. Adopting FEFDFA achieves a greater increase: it denoises the features at each layer, which better removes noise at each stage, since the noise in the inference stage comes from multiple sources; it may be noise in the input image or noise generated while passing through the network. Denoising each layer avoids noise accumulation and yields a better quality output. Most importantly, since the fusion operation merges the parameters of the denoising module with the original parameters, adopting FEFDFA incurs no extra inference latency, making it a computation-free and efficient approach. §.§ Experiments on Classification Tasks The Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA) is based on denoising at the feature level and generalizes well. To demonstrate this, we conduct experiments on other vision tasks to test the effectiveness of FEFDFA. We validated the generalization ability of FEFDFA in image classification on the ImageNet-1k <cit.> dataset and three fine-grained image classification datasets (CUB200 <cit.>, Oxford-Pet <cit.>, and Flowers <cit.>). Accuracy is selected as the evaluation metric. As shown in Table <ref>, we compared multiple classic backbones for representation learning on ImageNet-1k, and after adding the FEFDFA module, both top-1 and top-5 accuracy improved without adding model parameters. As shown in Table <ref>, our method also shows significant improvement in accuracy compared to the baselines on the three fine-grained classification datasets. This proves that the FEFDFA module can improve the model's image classification ability across different classification tasks, and that our method can enhance the model's representation learning and obtain more effective features through denoising without additional time cost. More experimental analysis can be found in Table <ref> in Section <ref> of the appendix.
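To make the computation-free claim of the parameter fusion analysis concrete, the sketch below folds a trained one-step denoiser with weight W_D into an existing linear layer following the update W' = W - C_1(t)·W·W_D from the method section; dropping the stochastic term and the specific shapes used here are our assumptions.

```python
import torch

def fuse_linear_with_denoiser(W, b, W_D, t, betas):
    """Fold a one-step feature denoiser (weight W_D) into a linear layer (W, b),
    following W' = W - C1(t) * W * W_D with
    C1(t) = (1 - a_t) / (sqrt(a_t) * sqrt(1 - a_bar_t)).
    The stochastic term W * C2(t) * z is dropped (deterministic inference),
    which is an assumption of this sketch."""
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    c1 = (1 - alphas[t]) / (alphas[t].sqrt() * (1 - alphas_bar[t]).sqrt())
    return W - c1 * (W @ W_D), b      # same shapes as before: zero extra latency

# Usage: the fused layer does feature extraction and denoising in one matmul.
dim, t = 384, 10
betas = torch.linspace(1e-4, 0.02, 1000)
W, b = torch.randn(dim, dim), torch.randn(dim)
W_D = torch.randn(dim, dim) * 0.01    # stand-in for trained denoiser weights
W_f, b_f = fuse_linear_with_denoiser(W, b, W_D, t, betas)
x = torch.randn(8, dim)
y = x @ W_f.T + b_f
print(y.shape)
```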
§ CONCLUSION In this work, we demonstrated that the diffusion model paradigm is effective for feature level denoising in discriminative model, and proposed computation-free and label-free algorithm: Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA). It utilizes the denoising ability of diffusion models to denoise the features in the feature extraction layer, and fuses the parameters of the denoising layer with those of the feature extraction layer through parameter fusion, further improving retrieval accuracy without incurring additional computational costs. We validated the effectiveness of the FEFDFA method on multiple common image discrimination task datasets. plainnat § EXPERIMENTAL SETTINGS Datasets and Evaluation metrics. We conducted training and evaluation on four datasets: DukeMTMC-reID <cit.>, Market-1501 <cit.>, MSMT17 <cit.>, and CUHK-03 <cit.>. These datasets encompass a wide range of scenarios for person re-identification. For accuracy, we use standard metrics including Rank-1 curves (The probability that the image with the highest confidence in the search results is the correct result.) and mean average precision (MAP). All the results are from a single query setting. Implementation Details. We implemented our method using Python on a server equipped with a 2.10GHz Intel Core Xeon (R) Gold 5218R processor and two NVIDIA RTX 3090 GPUs. The epochs we trained are set to 120, the learning rate is set to 0.0004, the batch size during training is 64, the inference stage is 256, and the diffusion step size t is set to 1000. Training and evaluation. To better constrain the performance of the denoised features of the FEFDFA for downstream tasks, we used alternate fine-tune methods. The parameters of the FEFDFA and baseline were trained alternately, and when training a part of the parameters, the rest of the parameters were frozen and fine-tuned for 10 epochs at a time, with a total number of epochs of 120. When evaluating, we averaged the results of the experiments under the same settings for 5 times, thus ensuring the reliability of the data. § EXPERIMENT ON VEHICLE IDENTIFICATION In the image retrieval task, we also conducted experiments to verify the effectiveness of our method on the vehicle recognition task. Vehicle recognition in practical scenarios often results in images containing a large amount of noise due to environmental factors such as lighting or occlusion, which increases the difficulty of detection. Our method is based on denoising to obtain features with better representation ability. Therefore, we want to experimentally verify whether the FEFDFA module plays a role in vehicle recognition tasks with higher noise levels. We selected vehicleID <cit.> as the dataset, vehicle-ReID <cit.> as the baseline, and ResNet50 as the feature extractor for the experiment. From the results in Table <ref>, it can be seen that FEFDFA has excellent performance in vehicle detection tasks. Compared to the baseline, adding the FEFDFA module significantly improves both mAP and Rank-1 metrics without incurring additional detection time costs. Verified the denoising ability of the FEFDFA module in noisy environments. § EXPERIMENT ON LARGE SCALE IMAGE CLASSIFICATION TASKS In this section, we aim to test the generalization ability of FEFDFA in other tasks. We conducted experiments on two image classification datasets, ImageNet-1k and Cifar-10. 
These two datasets are classic image classification datasets, rich in common everyday images, and belong to large-scale image databases. ImageNet-1k is a subset of the ImageNet dataset containing images from 1000 categories; each category typically has hundreds to thousands of images, totaling over one million images. CIFAR-10 contains 60000 32x32 pixel color images divided into 10 categories, with 6000 images per category. To evaluate the effectiveness of our method, we use standard metrics, namely Top-1 and Top-5 accuracy, which are commonly used to evaluate model performance in image classification, and we conducted detailed comparative experiments on multiple backbones and model variants of different sizes to verify the reliability of our method. As shown in Table <ref>, we compared multiple classic backbones for representation learning on the two datasets, and after adding the FEFDFA module, the accuracy metrics improved without adding model parameters. This shows that our method can enhance the model's representation learning ability and obtain more effective features through denoising without increasing time cost. Moreover, this method generalizes well to other image tasks. § LIMITATIONS Our proposed method FEFDFA improves the accuracy of current mainstream backbones while remaining label-free and adding no computational cost, and it has been experimentally verified to generalize to multiple image tasks. However, the experimental results show that our method yields limited accuracy improvements when generalized to general tasks. Moreover, in order to fuse the parameters of the denoising layer and the feature extraction layer, each denoising layer performs only one or two denoising steps, and the number of denoising layers cannot exceed the number of feature extraction layers, which limits the denoising intensity. We will continue to explore how to further improve the accuracy of the model without adding additional inference time cost, or while adding only a small number of extra parameters.
http://arxiv.org/abs/2406.07838v1
20240612031030
Capacity bounds on integral flows and the Kostant partition function
[ "Jonathan Leake", "Alejandro H. Morales" ]
math.CO
[ "math.CO", "05A16, 05A20, 52B05, 52A39 (Primary) 52B20, 32A08, 52B55, 28D20\n (Secondary)" ]
§ ABSTRACT The type A Kostant partition function is an important combinatorial object with various applications: it counts integer flows on the complete directed graph, computes Hilbert series of spaces of diagonal harmonics, and can be used to compute weight and tensor product multiplicities of representations. In this paper we study asymptotics of the Kostant partition function, improving on various previously known lower bounds and settling conjectures of O'Neill and Yip. Our methods build upon recent results and techniques of Brändén-Leake-Pak, who used Lorentzian polynomials and Gurvits' capacity method to bound the number of lattice points of transportation and flow polytopes. Finally, we also give new two-sided bounds using the Lidskii formulas from subdivisions of flow polytopes. Review of the low-lying excited baryons Σ^*(1/2^-) Bing-Song Zou June 17, 2024 ================================================== § INTRODUCTION Integer flows on networks are very important objects in optimization, combinatorics, and representation theory. In the latter context, the number of integer flows on a directed complete graph is also known in Lie theory as the Kostant partition function. Many important quantities in representation theory like weight multiplicities (like Kostka numbers) and tensor product multiplicities (like the Littlewood–Richardson coefficients) can be expressed in terms of this function <cit.>. In this paper, we give new lower bounds on the Kostant vector partition function that improve on previously known bounds. To do this, we utilize recent lower bounds on the number of contingency tables given in <cit.>. Contingency tables are the lattice points of transportation polytopes, and since flow polytopes can be seen as faces of transportation polytopes, we are able to adapt the results of <cit.> to our context. The bounds of <cit.> come via lower bounds on the coefficients of certain (denormalized) Lorentzian polynomials <cit.> and their associated generating series, in terms of their capacity <cit.>. Our main contribution is then new explicit estimates of the capacity of these generating series via an associated flow entropy quantity, which lead to our new lower bounds. Let N=(N_0,…,N_n-1,-∑_i N_i) where N_i ∈ℤ and let G be a directed acyclic connected graph with vertices {0,1,…,n}. We denote by ℱ_G(N) the flow polytope of G with netflow N. When G is the complete graph k_n+1, we denote by the flow polytope by ℱ_n(N):=ℱ_k_n+1(N). We are interested in the number K_n(N) of lattice points of the flow polytope ℱ_n(N), i.e. the number of integer flows of the complete graph k_n+1 with netflow N. This is called the Kostant vector partition function since it has an interpretation in the representation theory of Lie algebras: it is the number of ways of writing N as an ℕ-combination of the vectors e_i-e_j for 1≤ i <j ≤ n+1, the type A positive roots. The function K_n(N) is a piecewise-polynomial function on the parameters N_i[Note that this does not imply polynomial bounds on K_n(N), since the total degree of the polynomials can depend on the length n of the vector (see Section <ref>).] <cit.> with a complex chamber structure <cit.>. Moreover, computing the number of lattice points of ℱ_G(N) in general is a computationally hard problem <cit.>, and there are special cases that are important and give surprising answers. We list a few of these, see Section <ref> for more details. 
(i) K_n(1,0,…,0,-1)=2^n-1, however no general formula is known for a_n(t):=K_n(t,0,…,0,-t) which is the Ehrhart polynomial of the flow polytope ℱ_n(t,0,…,0,-t). Chan–Robbins–Yuen <cit.> showed that for t ∈ℕ, the sequence (a_n(t))_n≥ 0 satisfies a linear recurrence of order p(t), the number of integer partitions of t. Moreover, the following bounds are known for a_n(t) ∏_1≤ i<j≤ n2t+i+j-1/i+j-1≥ a_n(t) ≥ (t+1)^n-1. The lower bound follows from elementary methods, and improving this bound with more interesting techniques has proven elusive. The upper bound is more subtle and it follows from containment of ℱ_n(1,0,…,0,-1) in another polytope related to alternating sign matrices <cit.>. (ii) b_n:=K_n(1,2,…,n,-n+12) = ∏_i=1^n C_i where C_i = 1/i+12ii is the ith Catalan number. This happens to be the volume of the n2-dimensional polytope ℱ_n(1,0,…,0,-1) <cit.> and thus leading term in t of n2!· a_n(t). (iii) c_n:=K_n(1,1,…,1,-n), this value counts the number of n× n Tesler matrices <cit.> which are of interest in the study of the space of diagonal harmonics DH_n which has dimension (n+1)^n-1 (see <cit.>), the number of rooted forests on vertices [n]={1,2…,n}. Indeed, there is a formula of Haglund <cit.> for the Hilbert series of this space as an alternating sum over the integer flows counted in b_n. No simple formula is known for b_n but from the connection to DH_n and computational evidence <cit.> it is expected that eventually c_n> (n+1)^n-1. This is a special case of <cit.>. In an effort to show this lower bound, O'Neill <cit.> found the following bounds for c_n, 2^n-22-1· 3^n ≥ c_n ≥ (2n-3)!!. Note that (2n-1)!! ∼√(2)· (2/e)^n n^n. In 2016 Pak (private communication) asked whether c_n is e^Θ(n^2) and in 2019 Yip conjectured (private communication) that for all n≥ 0, c_n is at least as big as the number f_n ∼ e^1/2· n^n-2 of forests on vertices [n] <cit.>, which is also the number of lattice points of the permutahedron Π_n. (iv) d_n:=K_n(2ρ) where 2ρ = (n,n-2,n-4,…,-n+4,-n+2,-n) is the sum of all positive roots in type A_n <cit.>. The quantity d_n gives the dimension of the zero weight space of a certain Verma module <cit.> and the problem of giving bounds for this quantity was raised in <cit.>. O'Neill obtained in <cit.> the following bound for K_n(ρ) when n=2k+1, d_n ≥ 3^k^2-k-1. He also gave a bound for K_n(t· 2ρ) (see Proposition <ref>). These cases suggest studying the asymptotic behavior of the Kostant partition function. In this paper we obtain the following improvements for the all cases mentioned above. Fix an integer t > 0 and let N = (t,0,0,…,0,-t). Then for a_n(t):=K_n(N) we have log_2 a_n(t) ≥n/2log_2^2 t - O(n log_2 t). The big-O notation is with respect to n (with parameter t fixed), and the implied constant is independent of n and t. This improves over the bound log_2 a_n(t) ≥ n log_2(t+1) - O(log_2 t) from the lower bound in (<ref>) by an extra factor of 1/2log_2 t. For the Tesler case N=(1,1,…,1,-n), we have the following lower bound. Let N=(1,1,…,1,-n). Then for c_n:=K_n(N) we have log c_n ≥n/4log^2 n - O(n log n). Furthermore, c_n ≥ (n+1)^n-1 for n≥ 3000. This bound beats (asymptotically) all previously known bounds (<ref>) and proves Yip's and O'Neill's conjectures mentioned above, for large enough n, since f_n ≤ (n+1)^n-1. This bound is the first improvement beyond (n+1)^n-1 towards answering Pak's question. 
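For very small n, the quantities above can be checked by directly enumerating the integer flows of the complete graph; the brute-force Python sketch below is ours and serves only as a sanity check, since computing K_n(N) is computationally hard in general.

```python
def kostant(netflow):
    """Number of nonnegative integer flows on the complete DAG k_{n+1}
    with netflow (N_0, ..., N_{n-1}, -sum_i N_i)."""
    n = len(netflow) - 1

    def count(k, incoming):
        # incoming[j] = flow already routed into vertex j from vertices < k
        if k == n:
            return 1                              # the sink absorbs the rest
        supply = netflow[k] + incoming[k]         # must be sent to vertices > k
        if supply < 0:
            return 0
        total = 0

        def distribute(j, remaining, inc):
            nonlocal total
            if j == n:                            # leftover flow goes to the sink
                total += count(k + 1, inc)
                return
            for amount in range(remaining + 1):
                distribute(j + 1, remaining - amount,
                           inc[:j] + [inc[j] + amount] + inc[j + 1:])

        distribute(k + 1, supply, list(incoming))
        return total

    return count(0, [0] * (n + 1))

print(kostant((1, 0, 0, 0, -1)))   # 2^(n-1) = 8 for n = 4
print(kostant((1, 1, 1, 1, -4)))   # c_4, the number of 4x4 Tesler matrices
```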
As a comparison, in the case N=(1,2,…,n,-n+12) where b_n has a closed formula and thus log b_n = n^2 log2 - O(nlog n) <cit.>, our methods give the lower bound log b_n ≥1/2 n^2 - O(n log n). For all polynomial growth cases N_k≥ a· k^p we also obtain a general bound. (We also obtain better bounds for more specific cases; see Section <ref>.) Fix a > 0 and p ≥ 0 and suppose N_k ≥ a · k^p for all 0 ≤ k ≤ n-1. Then log K_n(N) ≳ n^2 log n ·(p-1/2) p > 1 n^2 ·(1/2log(a/2) + 2 - 2log 2), p = 1, a > 2 n^2 ·(a - a log 2)), p = 1, a ≤ 2 n^p+1log^2 n ·(a(1-p)^2/4(p+1)) p < 1 . The ≳ symbol means the expressions given above essentially give the leading term of the actual lower bounds we obtain. See Theorem <ref> for the formal statement of this result. The phase transitions in the above lower bounds are possibly interesting, but we cannot tell whether or not they are artifacts of our proof strategy. See Section <ref> for further discussion. The final case we discuss is that of N = t · 2ρ. This case does not fit with the previous cases in the sense that some entries of N are negative, but we are still able to apply our methods to achieve the following lower bound. Fix an integer t > 0 and let N = t · 2ρ(n) = t · (n, n-2, n-4, …, -n+2, -n). Then for d_n(t):=K_n(N) we have log d_n(t) ≥n^2/2log((1+t)^1+t/t^t) - O(n log(nt)). The implied constant is independent of t. This bound improves over the results of O'Neill for all integers t > 0 (see above and Proposition <ref>), including the important case of t=1 where we improve the leading-term constant for log d_n = log d_n(1) from ln 3/4 to ln 2. Finally, we prove bounds in various other specific cases using the same methods, and these are collected in Section <ref>. §.§ Methodology To state our main result we need the following notation. Given some flow (f_ij)_0 ≤ i < j ≤ n = f∈ℱ_n(N) and letting h(t) := (t+1) log (t+1) - t log t, we define the flow entropy of f via ℋ(f) := ∑_0 ≤ i < j ≤ n h(f_ij) + ∑_0 < j < n h(∑_0 ≤ i < j (N_i - f_ij)). Note that ℋ(f) is a concave function on ℱ_n(N). With this, we can now state the main technical result we use to prove our bounds. Let N = (N_0,N_1,…,N_n) be an integer vector such that ℱ_n(N) is non-empty, and let s_k = ∑_j=0^k N_j for all k. Letting K_n(N) denote the number of integer points of ℱ_n(N), we have sup_f∈ℱ_n(N) e^ℋ(f) ≥ K_n(N) ≥ max_0 ≤ k ≤ n-1{e^h(s_k)}/(∏_k=0^n-1 e^h(s_k))^2sup_f∈ℱ_n(N) e^ℋ(f). Results similar to Theorem <ref> have appeared in various contexts before, as suggested by the cited references. That said, we give a new and streamlined proof technique for this fact in Section <ref>. Using Theorem <ref>, we can obtain explicit lower bounds on K_n(N) for a given flow vector N by computing max_0 ≤ k ≤ n-1{e^h(s_k)}/(∏_k=0^n-1 e^h(s_k))^2· e^ℋ(f) for a well-chosen f∈ℱ_n(N). The f^⋆ which optimizes ℋ has no closed-form formula in general, and thus some heuristic must be used to choose f which yields good bounds. The potential problem with this approach is that asymptotic formulas for K_n(N) may have phase transitions (see <cit.> for examples of this in the context of contingency tables); that is, it is possible that similar values of N can lead to different asymptotics. This means that a too-simple heuristic leading to a general formula for a lower bound on K_n(N) is unlikely to give a high quality bound. With this in mind, we devise a heuristic for choosing f which is complicated enough to hopefully allow for good bounds, but simple enough to be applicable to a wide range of values of N. 
Specifically, we choose f to be the average of the vertices of ℱ_n(N). On the one hand, counting and computing the average of the vertices of a given flow polytope can be non-trivial in general. On the other, we demonstrate the quality of this choice by proving lower bounds in various cases which are asymptotically better than all previously known bounds. Finally, once we have our choice of f, we extract explicit asymptotics from the flow entropy expression (<ref>) evaluated at f. This last step, while elementary, requires some not-so-trivial analysis of the entropy function via the Euler-Maclaurin formula. §.§ A result for intuition Theorem <ref> above suggests that a well-chosen f∈ℱ_n(N) can produce good lower bounds on K_n(N), and the improved bounds we are able to prove in this paper perhaps demonstrate this for some particular cases. We now state a result which demonstrates this more generally, and offers some more formal evidence why we expect the ideas of this paper to yield good bounds on the number of flows (see Section <ref> for the proof). Let N^* = (N_0,N_1,N_2,…) be an infinite sequence of positive integers which has at most polynomial growth, and let K_n(N) = K_n(N_0,…,N_n-1,-∑_i N_i) and ℱ_n(N) = ℱ_n(N_0,…,N_n-1,-∑_i N_i) for all n. Then the maximum flow entropy asymptotically approximates log K_n(N). That is, as n →∞ we have log K_n(N)/sup_f∈ℱ_n(N)ℋ(f)→ 1. That is, there is some choice of flows (dependent on n) which produces the correct asymptotics in the log. This perhaps makes the problem simpler for the positive polynomial growth case of Theorem <ref>: instead of counting lattice points of polytopes, we just need to find choices of (not necessarily integer) flows with high entropy. We also remark that ℋ(f) in Theorem <ref> can be replaced by the more standard geometric entropy: ℋ_g(f) = ∑_0 ≤ i < j ≤ n h(f_ij). §.§ Bounds for other regimes Finally, in the results above we mainly consider the individual entries of the flow vector N to be constant with respect to n. For example, in the case of a_n(t) in Theorem <ref>, we fix t and bound the asymptotics with respect to n. However, there are other regimes where bounds are desirable; for example, t may itself be a function of n. To handle cases like this, we use a different technique. Specifically, we use a positive formula for K_n(N) called the Lidskii formula <cit.> coming from the theory of flow polytopes and related to mixed volumes to give bounds for a_n(t) for t much larger than n. For t≥n^3/2 we have that (n-1)!·t+n-1n2∏_i=0^n-2 C_i ≥ a_n(t) ≥ t+n-1n2∏_i=0^n-2 C_i. These concrete bounds are reasonable compared to the leading coefficient ∏_i=1^n-2 C_i/n2! in t of a_n(t), the volume of the polytope ℱ_n(1,0,…,0,-1) <cit.>. The techniques used for these bounds do not fit directly into the overarching methodology discussed above. That said, we include them anyway for completeness, and due to the connection between the Lidskii formula and mixed volumes. Mixed volumes are the coefficients of volume polynomials, which are Lorentzian (see, e.g., <cit.>), and thus there may be some further connection between this formula and the entropy-based methodology discussed above. We leave this to future work. §.§ Structure of the paper The paper is organized as follows. Section <ref> has background on flow polytopes, bounds, capacity method on contingency tables, and asymptotics of entropy related functions. Section <ref> gives our proof of Theorem <ref>. Section <ref> has details on the averages of vertices of flow polytopes. 
Section <ref> computes the concrete asymptotic lower bounds for all the cases we consider. Section <ref> gives bounds on the Kostant partition function using the Lidskii formula from the theory of flow polytopes. Section <ref> has final remarks. Details of the asymptotic analysis are in the Appendix <ref>. § BACKGROUND AND COMBINATORIAL/GEOMETRIC BOUNDS §.§ Transportation and flow polytopes A polytope P⊂ℝ^m is a convex hull of finitely many points or alternatively a bounded intersection of finitely many half spaces. The polytopes we consider are integral, i.e. its vertices have integer coordinates. Two polytopes P⊂ℝ^m and Q⊆ℝ^k are integrally equivalent if there is an affine transformation φ:ℝ^m→ℝ^k such that restricted to P and ℤ^m ∩(P) gives a bijection Q and to ℤ^k ∩(Q). Next, we define flow polytopes and transportation polytopes. Let α=(α_0,…,α_m-1) and β=(β_0,…,β_n-1) be vectors in ℤ_≥ 0^n, the transportation polytope 𝒯(α,β) is the set of all m× n matrices M=(m_i,j) with nonnegative real entries with row sums and column sums α and β. The lattice points of 𝒯(α,β) are called contingency tables and we denote the number of such tables by (α,β). The generating function of (α,β) has the following closed form. Φ(x,y) :=∑_α,β(α,β) x^αy^β = ∏_i=0^m-1∏_j=0^n-11/1-x_iy_j, Given a a directed acyclic graph G with vertices {0,1,…,n} and m edges, and a vector N=(N_0,…,N_n-1,-∑_i N_i) ∈ℤ^n+1, an N-flow on G is a tuple (f_e)_e∈ E(G) in ℝ_≥ 0^m of values assigned to each edge such that the netflow on vertex i is N_i: ∑_e=(i,j) ∈ E(G) f_e - ∑_e=(k,i) ∈ E(G) f_e = N_i. The flow polytope ℱ_G(N) is the set of N-flows on G. When G is the complete graph k_n+1, for brevity we denote by ℱ_n(N) the flow polytope ℱ_G(N). We denote by K_n(N):=#(ℱ_n(N) ∩ℤ^n+12) the number of lattice points of ℱ(N), which counts the number of integer flows on k_n+1 with netflow N. This is called Kostant's vector partition function since it also counts the number of ways of writing N as a combination of the positive type-A roots e_i-e_j corresponding to each edge (i,j) where e_i is a standard basis vector. The generating function of K_n(N) for N∈ℤ^n+1 has the following closed form. Ψ(z) :=∑_N K_n(N) z^N = ∏_0≤ i<j ≤ n1/1-z_iz_j^-1. Flow polytopes can be viewed as faces of transportation polytopes as follows (see <cit.>). Given a flow vector N, define α = (s_0,s_1,…,s_n-1) and β = (α)=(s_n-1,…,s_1,s_0) where s_k = ∑_j=0^k N_j as above. Define a linear injection ϕ: ℱ_n(N) →𝒯(α,β) via ϕ: (f_ij)_0 ≤ i < j ≤ n↦[ f_0,n f_0,n-1 f_0,n-2 f_0,n-3 ⋯ f_0,3 f_0,2 f_0,1; f_1,n f_1,n-1 f_1,n-2 f_1,n-3 ⋯ f_1,3 f_1,2 g_n-1; f_2,n f_2,n-1 f_2,n-2 f_2,n-3 ⋯ f_2,3 g_n-2 0; f_3,n f_3,n-1 f_3,n-2 f_3,n-3 ⋯ g_n-3 0 0; ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; f_n-3,n f_n-3,n-1 f_n-3,n-2 g_3 ⋯ 0 0 0; f_n-2,n f_n-2,n-1 g_2 0 ⋯ 0 0 0; f_n-1,n g_1 0 0 ⋯ 0 0 0; ], where the subdiagonal entries g_j are chosen so that the row sums and column sums are equal to the entries of α and β. (Note that these entries are given precisely by g_j:=∑_0 ≤ i < j (N_i - f_ij) for 0 < j < n). The image of ϕ is the set matrices in 𝒯(α,β) which have 0 in all entries of the bottom-right corner of the matrix as specified in the definition of ϕ. The set ϕ (ℱ_n(N)) is a face of 𝒯(α,β) (see <cit.>). By abuse of notation, we will refer to a flow (f_ij) in ℱ_n(N) and its image ϕ(f_ij) in 𝒯(α,β) interchangeably. Given N=(N_0,…,N_n-1,-∑_i N_i) ∈ℤ^n+1, and α, β, and ϕ be defined as above, then ϕ is an integral equivalence between ℱ_n(N) and a face of 𝒯(α,β). 
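To make the map ϕ concrete, here is a small illustrative sketch (ours, in Python): it places f_i,j in row i and column n-j, fills the subdiagonal entry of row j with ∑_0≤ i<j(N_i-f_ij), which is what the prescribed row and column sums force, and zeroes the lower-right corner. The example flow is one of the lattice points of ℱ_3(1,1,1,-3) appearing in the example below.

```python
def flow_to_table(N, f):
    """Place a flow f = {(i, j): value} on k_{n+1} (0 <= i < j <= n, netflow N) into an
    n x n matrix: f_{i,j} goes to row i, column n - j, the subdiagonal entry of row j is
    sum_{i < j} (N_i - f_{ij}), and the lower-right corner is filled with zeros."""
    n = len(N) - 1
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for c in range(n):
            if i + c <= n - 1:
                M[i][c] = f.get((i, n - c), 0)
            elif i + c == n:
                M[i][c] = sum(N[k] - f.get((k, i), 0) for k in range(i))
    return M

# One of the lattice points of F_3(1,1,1,-3) shown in the example below:
N = (1, 1, 1, -3)
f = {(0, 1): 0, (0, 2): 1, (0, 3): 0, (1, 2): 1, (1, 3): 0, (2, 3): 3}
M = flow_to_table(N, f)
s = [sum(N[: k + 1]) for k in range(len(N) - 1)]     # s_0, ..., s_{n-1}
assert [sum(row) for row in M] == s                  # row sums are alpha = (s_0, ..., s_{n-1})
assert [sum(col) for col in zip(*M)] == s[::-1]      # column sums are beta = (s_{n-1}, ..., s_0)
print(M)   # [[0, 1, 0], [0, 1, 1], [3, 0, 0]]
```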
§.§ Special examples of flow polytopes The following are two examples of flow polytopes that will be of interest and we give their vertex description. §.§.§ CRY polytope For N=(1,0,…,0,-1), the polytope ℱ_n(N) is called the Chan-Robbins-Yuen polytope <cit.>. This polytope has dimension n2 and 2^n-1 vertices <cit.>. The vertices can be described as follows: they correspond to unit flows along paths on k_n+1 from the source 0 to the sink n+1. These paths are completely determined by their support on internal vertices in {1,…,n-1}. We translate the description of these vertices in the transportation polytope. For N=(1,0,…,0,-1), the vertices of ℱ_n(N) are determined by binary strings in the sub-diagonal: (g_1,…,g_n-1)∈{0,1}^n-1. In particular there are 2^n-1 vertices. Given a binary string g=(g_1,…,g_n-1), the corresponding vertex g↦ X=(x_ij) given by x_i,j = (1-g_i)(1-g_n-j)∏_k=i+1^n-j-1 g_k if i<n-j g_i if i=n-j 0 otherwise, where g_0=g_n=0. Since ℱ_n(1,0,…,-1) is a 0/1 polytope, its lattice points are its vertices and so K_n(1,0,…,0,-1)=2^n-1. §.§.§ Generalized Tesler polytope For (N=(N_0,…,N_n-1,-∑_i=0^n-1 N_i) where N_i >0) N=(1,…,1,-n), the polytope ℱ_n(N) is called the (generalized) Tesler polytope <cit.>. This polytope has dimension n2, is simple, and has n! vertices. The vertices can be characterized as follows. For N=(N_0,…,N_n-1,-∑_i=0^n-1 N_i) where N_i>0, the vertices of ℱ(N) are characterized by flows whose associated matrix has exactly one nonzero upper triangular entry in each row. In particular there are n! vertices. Given a choice p of an upper triangular entry in each row, the corresponding vertex p↦ X=(x_ij) is defined by x_i,j = N_i + ∑_r=0^i-1 x_r,n-i if p_i,j=1 0 otherwise. and the sub-diagonal term is x_i,n-i=∑_r=0^i-1 (N_r-x_r,n-i) for i=1,…,n-1. The CRY polytope ℱ_3(1,0,0,-1) is 3-dimensional with vertices/lattice points represented by the following matrices: [ 0 0 1; 0 1 0; 1 0 ], [ 1 0 0; 0 0 1; 0 1 ], [ 0 1 0; 0 0 1; 1 0 ], [ 0 0 1; 1 0 0; 0 1 ]. The Tesler polytope ℱ_3(1,1,1,-3) is 3-dimensional with the following seven lattice points of which the first six are its vertices: [ 0 0 1; 0 2 0; 3 0 ], [ 0 0 1; 2 0 0; 1 2 ],[ 0 1 0; 0 1 1; 3 0 ],[ 0 1 0; 1 0 1; 2 1 ],[ 1 0 0; 0 1 1; 2 1 ],[ 1 0 0; 1 0 1; 1 2 ], [ 0 0 1; 1 1 0; 2 1 ]. See Figure <ref>. §.§ Previous bounds on lattice points of flow polytopes In this section we collect previous results and questions about bounds on K_n(N). See Table <ref>. Let N=(N_0,…,N_n-1,-∑_i=0^n-1 N_i), then K_n(N_0,…,N_n-1,-∑_i N_i) = ∑_f (f_0,n-1+1)(f_1,n-1+1)⋯ (f_n-2,n-1+1), where the sum is over integer flows in ℱ_n-1(N_0,…,N_n-2,-∑_i N_i). Given weak compositions N'=(N_0,…,N_n-1) and M'=(M_0,…,M_n-1), we say that N' dominates M' if ∑_i=0^k N_i ≥∑_i=0^k M_i for every k=0,…,n-1 and denote it by N'M'. Let N=(N_0,…,N_n-1,-∑_i N_i) and M=(M_0,…,M_n-1,-∑_i M_i) where N_i and M_i are in ℤ_≥ 0 such that (N_0,…,N_n-1) (M_0,…,M_n-1), then K_n(N) ≥ K_n(M). Let ϵ_i = (∑_j=0^i N_j)-(∑_j=0^i M_j) ≥ 0 for i=0,…,n-1, α=(s_0,s_1,…,s_n-1) and β=(s_n-1,…,s_1,s_0) (α'=(s'_0,s'_1,…,s'_n-1) and β'=(s'_n-1,…,s'_1,s'_0), respectively) where s_k=∑_j=0^k N_j (for s'_k=∑_j=0^k M_j). Given an integer flow (f_ij) of ℱ_n(M) viewed as a lattice point ϕ(f_ij) in 𝒯(α,β), let (f'_ij) be defined as f'_ij = f_ij+ϵ_i if j=i+1, f_ij otherwise. Then (f'_ij) is an integer flow in ℱ_n(N), i.e. ϕ(f'_ij) is a lattice point in 𝒯(α',β'). This map is injective and therefore K_n(M) ≤ K_n(N), as desired. 
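Before moving on, here is a quick computational sanity check (ours, in Python) of the vertex description of the CRY polytope from the previous subsection: it builds the matrix attached to each binary string g∈{0,1}^n-1 via the formula above and verifies that the resulting 2^n-1 matrices are distinct 0/1 points with all row and column sums equal to 1.

```python
from itertools import product

def cry_vertex(g):
    """Vertex of the CRY polytope F_n(1,0,...,0,-1) attached to the binary string
    g = (g_1, ..., g_{n-1}), returned as an n x n 0/1 matrix, following the formula above."""
    n = len(g) + 1
    gg = [0] + list(g) + [0]                 # pad with g_0 = g_n = 0
    X = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i < n - j:
                val = (1 - gg[i]) * (1 - gg[n - j])
                for k in range(i + 1, n - j):
                    val *= gg[k]
                X[i][j] = val
            elif i == n - j:
                X[i][j] = gg[i]
    return X

n = 4
vertices = {tuple(map(tuple, cry_vertex(g))) for g in product((0, 1), repeat=n - 1)}
assert len(vertices) == 2 ** (n - 1)                      # all 2^{n-1} vertices are distinct
for V in vertices:
    assert all(sum(row) == 1 for row in V)                # row sums s_i = 1
    assert all(sum(col) == 1 for col in zip(*V))          # column sums 1
print(len(vertices), "distinct vertices for n =", n)
```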
In particular, the previous result implies a similar inequality when N ' is term-wise larger than M'. Let N=(N_0,…,N_n-1,-∑_i N_i) and M=(M_0,…,M_n-1,-∑_i M_i) where N_i and M_i are in ℤ_≥ 0 such that N_i≥ M_i then K_n(N) ≥ K_n(M). §.§.§ The cases with closed formulas The following cases of K_n(N) have closed formulas coming from a certain constant term identity due to Zeilberger <cit.>, that is a variation of the Morris constant term identity related to the Selberg integral. For self contained proofs of these product formulas see <cit.>. Let C_n:=1/n+12nn denote the nth Catalan number and let F(t,n):=∏_1≤ i<j≤ n2t+i+j-1/i+j-1, which counts the number of plane partitions of shape δ_n=(n-1,n-2,…,1) with entries at most t in ℤ_≥ 0 <cit.>. For a nonnegative integer t we have that K_n(t,t+1,t+2,…,t+n-1,-nt-n2) = C_1C_2⋯ C_n-1· F(t,n). By a result of Postnikov–Stanley (unpublished) and Baldoni–Vergne <cit.>, we have the following relation between the normalized volume of ℱ_n(1,0,…,0,-1) equals a value of K_n(·), ℱ_n+1(1,0,…,0,-1) = K_n(0,1,2,…,n-2,-n-12)=C_1C_2⋯ C_n-1, where the second equality follows by setting t=1 in (<ref>). Moreover, the leading term in t of (<ref>) gives the normalized volume of ℱ_n(1,1,…,1,-n) <cit.>, <cit.>. ℱ_n+1(1,1,…,1,-n) = C_1C_2⋯ C_n-1· f^δ_n, where f^δ_n=n2! 2^n2/∏_i=1^n i! is the number of Standard Young tableaux of shape δ_n=(n-1,n-2,…,1). The next two results collect asymptotics and bounds for the special cases and the pieces of the product on the RHS of (<ref>) log K_n(0,1,2,…,n-1,-n2) = log C_1C_2⋯ C_n-1 = n^2 log 2 - 3/2 n log n + O(n). log K_n(n,n+1,n+2,…,2n-2,-n^2-n-12) = log(C_1C_2⋯ C_n-1 F(n,n)) = (9log2 - 9/2log 3)n^2 + O(nlog n). For all integers n and t we have that 0 ≥ log F(t,n) - (n+t)^2f(t/(n+t)) ≥ - 2(t+n), where f(x)=x^2log x - 1/2 (1-x)^2 log(1-x) - 1/2 (1+x)^2log(1+x) + 2xlog 2. §.§.§ The CRY case N=(t,0,…,0,-t) For N=(t,0,…,0,-t) we have that F(t,n) ≥ K_n(N) ≥ (t+1)^n-1. To show the lower bound we use Proposition <ref> and the fact that the smallest product in the sum on the RHS of (<ref>) occurs when the first column is (f_0,n-1,…,f_n-2,n-1) is (0,…,0,t) to conclude that K_n(t,0,…,0,-t)≥ (t+1) · K_n-1(t,0,…,0,-t). The upper bound comes from <cit.>. Fix an integer t>0 and let N=(t,0,…,0,-t), then a_n(t)=K_n(t,0^n-1,-t) satisfies a linear recurrence of order p(t), the number of integer partitions of t. In particular a_n(t) ∼ c(t)^n for some constant c(t). §.§.§ The Tesler case N=(t,t,…,t,-nt) For N=(t,t,…,t,-nt) we have that (t+1)^n2 ≥ K_n(N) ≥ ∏_i=1^n-1 (it+1). To show the lower bound we use Proposition <ref> and the fact that the smallest and biggest product in the sum on the RHS of (<ref>) occurs when the first column is (f_0,n-1,…,f_n-2,n-1) is (0,…,0,(n-1)t) and (t,…,t), respectively. This implies that t^n-1 K_n-1(t,t,…,t,-(n-1)t) ≥ K_n(t,t,…,t,-nt) ≥ ((n-1)t+1) · K_n-1(t,t,…,t,-(n-1)t). A more careful analysis of Proposition <ref> was performed by O'Neill to improve the bounds for the case of N=(1,1,…,1,-n). 2^n-22-1· 3^n ≥ K_n(1,1,…,1,-n) ≥ (2n-3)!! . Let Π_n=(w (0,1,…,n-1) | w ∈𝔖_n} be the classical permutahedron. In 2019, Yip (private communication) asked whether the Tesler polytope ℱ_n(1) projects to the permutahedron and conjectured the following weaker statement. Recall that f_n ∼ e^1/2· n^n-2 is the number of forests with vertices [n] <cit.>. This is also the number of lattice points of Π_n. For N=(1,1,…,1,-n), we have that K_n(N) ≥ f_n. 
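Comparing the product formula above at t=1 with b_n=∏_i=1^n C_i from the introduction forces F(1,n)=C_n, while F(0,n)=1 directly from the definition of F(t,n). The following exact-arithmetic sketch (ours, in Python) checks both identities for small n.

```python
from fractions import Fraction
from math import comb

def catalan(k):
    return comb(2 * k, k) // (k + 1)

def F(t, n):
    """F(t, n) = product over 1 <= i < j <= n of (2t + i + j - 1) / (i + j - 1), exactly."""
    val = Fraction(1)
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            val *= Fraction(2 * t + i + j - 1, i + j - 1)
    return val

for n in range(2, 9):
    assert F(0, n) == 1                 # t = 0: only the zero plane partition
    assert F(1, n) == catalan(n)        # forced by comparing the formula at t = 1 with b_n
print("checks passed")
```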
Next, we give some other bounds for the number of lattice points of dilations of the Tesler polytope. For t≥ 0 then C_1C_2⋯ C_n-1· F(t,n) ≥ K_n(t,t,…,t,-nt) . By Corollary <ref> we have that K_n(t,t+1,t+2,…,t+n-1,-nt-n2)≥ K_n(t,t,…,t,-nt). The upper bound follows by the product formula in (<ref>) for the LHS above. For t≥ n-1 we have that K_n(t,t,…,t,-nt)≥ C_1C_2⋯ C_n-1· F(t-n+1,n). When t≥ n-1, by Corollary <ref> we have that K_n(t,t,…,t,-nt)≥ K_n(t-n+1,t-n+2,…,t-1,t,-nt+n2). The lower bound follows by the product formula in (<ref>) for the RHS above evaluating t=t-n+1. n^2(9log2 - 9/2log 3) + O(nlog n)≥log K_n(n,n,…,n,-n^2)≥ n^2log 2 - 3/2 n log n + O(n). By Propositions <ref>, <ref> for t=n-1 we obtain that C_1C_2⋯ C_n-1· F(n,n)≥ K_n(n,n,…,n,-n^2) ≥ C_1C_2⋯ C_n-1. Taking the log and using the bounds in Propositions <ref>, <ref> give the result. §.§.§ The case of 2ρ We next consider the case where N = 2ρ(n) is the sum of all the positive roots of the type A_n root system, given by e_i - e_j for 0 ≤ i < j ≤ n. The quantity K_n(N) gives the dimension of the zero weight space of a certain Verma module <cit.>. More concretely, we have 2ρ := (n,n-2,n-4,…,-n+4,-n+2,-n). A post in <cit.> raised the question of studying bounds for K_n(2ρ). In an unpublished report <cit.>, O'Neill using the techniques in <cit.> obtained the following bounds for K_n(2ρ) and K_n(t· 2ρ). For a nonnegative odd integer n=2k+1 and t>1 we have that K_n(2ρ) ≥ 3^k^2-k-1, K_n(t· 2 ρ) ≥(t+1/2)^k^2. By taking the logs of the results above we immediately obtain the following results. Fix an integer t > 1 and let N=2ρ(n). We have that log K_n(2ρ(n)) ≥1/4 n^2 log 3 - O(n), and log K_n(t· 2 ρ(n))≥1/4 (n-1)^2 log(t+1/2). §.§ Polynomial capacity and log-concave polynomials Polynomial capacity, originally defined by Gurvits <cit.>, is typically defined as follows. Note that here we extend the definition to multivariate power series. Given a polynomial p ∈[x] = [x_1,…,x_n] or power series p ∈[[x]] = [[x_1,…,x_n]] with non-negative coefficients and any α∈_≥ 0^n, we define _α(p) = inf_x > 0p(x)/x^α = inf_x_1,…,x_n > 0p(x_1,…,x_n)/x_1^α_1⋯ x_n^α_n. This equivalently defined as log_α(p) = inf_y∈^n[log p(e^y) - ⟨y, α⟩], where ⟨·, ·⟩ is the usual dot product. The typical use of capacity is to approximate or bound the coefficients of certain polynomials or power series. For example, the following bound follows immediately from the definition. Given a polynomial or power series p ∈_≥ 0[x] = _≥ 0[x_1,…,x_n] with non-negative coefficients and any α∈_≥ 0^n, we have _α(p) ≥ [x^α] p(x), where [x^α] p(x) denotes the coefficient of x^α in p. Lower bounds in terms of the capacity, on the other hand, are harder to prove. In fact, one should not expect such lower bounds in general; e.g., [x^1] p(x) = 0 and _1(p) > 0 for p(x) = x_1^n + ⋯ + x_n^n. Thus lower bounds are typically only proven for certain classes of polynomials and power series. The most common classes are real stable polynomials, Lorentzian polynomials (also known as completely log-concave and strongly log-concave <cit.>), and most recently denormalized Lorentzian polynomials. Such bounds have been applied to various quantities, such as: the permanent, the mixed discriminant, and the mixed volume <cit.>; quantities related to matroids, like the number of bases of a matroid and the intersection of two matroids <cit.>; the number of matchings of a bipartite graph <cit.>; and the number of contingency tables <cit.>. 
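As a toy illustration of the capacity and of the coefficient bound above (the marginals are chosen only for illustration, and the one-line reduction below is specific to this symmetric choice): for 2× 2 contingency tables with α=β=(1,1) there are exactly two tables, and one can check that the infimum defining the capacity of Φ at (α,β) is attained with all x_i equal and all y_j equal, so it reduces to a one-variable minimization.

```python
# alpha = beta = (1, 1): there are exactly two contingency tables (the permutation matrices),
# so the coefficient of x^alpha y^beta in Phi(x, y) = prod_{i,j} 1/(1 - x_i y_j) equals 2.
# On the symmetric slice x_0 = x_1, y_0 = y_1 the ratio Phi / (x^alpha y^beta) depends only
# on u = x_i y_j, and the infimum defining the capacity is attained on that slice.
us = [k / 200000 for k in range(1, 200000)]
cap_est = min(1.0 / (u * u * (1 - u) ** 4) for u in us)
print(cap_est)          # ~ 45.5625 = 729/16, attained near u = 1/3
print(cap_est >= 2)     # consistent with the coefficient bound: capacity >= CT((1,1),(1,1)) = 2
```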
We do not explicitly use results of definitions regarding these polynomial classes, but instead refer the interested reader to the above references. §.§ Bounds on contingency tables Given vectors α∈_≥ 0^m and β∈_≥ 0^n, recall that a contingency table is an m × n matrix M ∈_≥ 0^m × n with non-negative integer entries, for which the row sums of M and the column sums of M are given by the entries of α and β respectively. (Recall also that we index the rows and columns starting with 0.) As discussed above, the set of all contingency tables has a nice generating function with coefficients indexed by α and β, given by Φ(x,y) = ∑_α,β(α,β) x^αy^β = ∏_i=0^m-1∏_j=0^n-11/1-x_iy_j. Lower bounds on (α,β) in terms of the capacity of Φ(x,y) were obtained in <cit.>, which improved upon previous bounds of <cit.>. The bound for general contingency tables is given as follows. Given α∈_≥ 0^m and β∈_≥ 0^n, we have _α,β(Φ) ≥ (α,β) ≥ [∏_i=1^m-1α_i^α_i/(α_i+1)^α_i+1∏_j=0^n-1β_j^β_j/(β_j+1)^β_j+1] _α,β(Φ). Note that i,j starting from 1,0 respectively is not a typo, and in fact we can replace the products above by a product over any subset of all but one of the factors. Bounds are also achieved in <cit.> for contingency tables with restricted entries, where then entries of a given matrix M ∈_≥ 0^m × n are bounded above entry-wise by a given matrix K. We care in this paper specifically about the case when k_ij = +∞ for i+j ≤ n and k_ij = 0 otherwise, for which we have that the number of contingency tables counts the number of integer flows as in (<ref>). In this case, we define Φ'(x,y) = ∏_0 ≤ i,j ≤ n-1 i+j ≤ n1/1-x_iy_j, and the bound is given as follows. Note that the authors of <cit.> do not write their theorem in terms of flows explicitly, and so we translate their theorem here (in light of (<ref>)) for the convenience of the reader. Given N = (N_0,N_1,…,N_n) ∈^n+1, define s_i = ∑_k=0^i N_k for i ∈{0,1,…,n-1} and let α = (s_0,s_1,…,s_n-1) and β = (s_n-1,…,s_1,s_0). Then we have _α,β(Φ') ≥ K_n(N) ≥ max_0 ≤ i ≤ n-1{(s_i+1)^s_i+1/s_i^s_i} [∏_i=0^n-1s_i^s_i/(s_i+1)^s_i+1]^2 _α,β(Φ'). In this paper, we will determine lower bounds for _α,β(Φ') and combine this with the bound given by Theorem <ref>. AThis will lead to our new bounds on flows. §.§ Convex analysis Given a function f: ^n → (-∞, +∞], we define its domain 𝒟 = 𝒟(f) via 𝒟 := {x∈^n : f(x) < +∞}. We say f is convex if 𝒟 is convex and f is convex on 𝒟. The convex conjugate (or Fenchel conjugate or Legendre transform) of f is also a convex function, defined as follows. Given a convex function f, its convex conjugate f^*: ^n → (-∞, +∞] is a convex function defined via f^*(y) := sup_x∈𝒟[⟨x, y⟩ - f(x)]. We denote the domain of f^* by 𝒟^* = 𝒟(f^*). In order to give lower bounds on the capacity of a given polynomial or generating series, we use an idea already present in the work of Barvinok (e.g. Lemma 5 of <cit.>) and in Proposition 6.2 of <cit.>: we convert the infimum in the definition of capacity (Definition <ref>) into a supremum. We can then lower bound the supremum by simply evaluating the objective function at any particular chosen value of the domain. A proof sketch for a general version of this is given in <cit.>, and we give a different and simpler proof in this paper based on the following classical result of convex analysis relating convex conjugates via the infimal convolution. Let f be a convex function given as the sum f = ∑_i f_i of convex functions with respective domains 𝒟_i. 
If ⋂_i (𝒟_i) is non-empty then f^*(α) = inf_∑_i α_i = α∑_i f_i^*(α_i), where for each α the infimum is attained. Note that the domain of optimization is over all choices of vectors α_i ∈𝒟_i such that ∑_i α_i = α. Note that Theorem <ref> converts a supremum into an infimum, rather than the other way around. Since the log of capacity is the negation of a convex conjugate, Theorem <ref> then gives precisely what we need. § A DUAL FORMULATION FOR CAPACITY As above, let K_n(N) denote the number of integer flows on k_n+1 with netflow given by N∈^n+1. Counting such integer flows is equivalent to counting the integer matrices of the form given by (<ref>), where the row and columns sums are given by α = (s_0,s_1,…,s_n-1) and β = (s_n-1,…,s_1,s_0) where s_k = ∑_j=0^k N_j. Thus by Theorem <ref>, we have _α,β(Φ') ≥ K_n(N) ≥ max_0 ≤ i ≤ n-1{(s_i+1)^s_i+1/s_i^s_i} [∏_i=0^n-1s_i^s_i/(s_i+1)^s_i+1]^2 _α,β(Φ'), where Φ'(x,y) = ∏_i+j ≤ n1/1-x_iy_j with 0 ≤ i,j ≤ n-1. In this section, we will utilize Theorem <ref> to convert the infimum of the above capacity expression into a supremum. Specifically, we will prove the following. Let ϕ(ℱ_n(N)) be the image of ℱ_n(N) in 𝒯(α,β) as defined in (<ref>), and recall the definition of flow entropy ℋ(f) from (<ref>). We have that _α β(∏_i+j ≤ n1/1-x_iy_j) = sup_A ∈ϕ(ℱ_n(N))∏_i+j ≤ n(a_ij+1)^a_ij+1/a_ij^a_ij = sup_f∈ℱ_n(N) e^ℋ(f). Proposition <ref> then leads immediately to the following result, which we will utilize in the later sections. Let f be any (not necessarily integer) point of ℱ_n(N), let s_k = ∑_j=0^k N_j, and let A = ϕ(f) where ϕ is defined as in (<ref>). We have that K_n(N) ≥ max_0 ≤ i ≤ n-1{(s_i+1)^s_i+1/s_i^s_i} [∏_i=0^n-1s_i^s_i/(s_i+1)^s_i+1]^2 ∏_i+j ≤ n(a_ij+1)^a_ij+1/a_ij^a_ij. There are also versions of the above results for the volumes of flow polytopes. Since these are outside the context of the results of this paper, we leave further discussion of these results to the final remarks (see Section <ref>). §.§ Proof of Proposition <ref> The second equality follows from the definition of flow entropy ℋ(f), and so we just need to prove the first equality. Consider the following function: f_#(x,y) := -∑_i+j ≤ nlog(1 - e^x_i+y_j), Here, as above, the variables are indexed from 0 to n-1. Since -log(1-e^t) is a convex function on its domain, we have that f_# is convex on its domain 𝒟_#⊇_<0^2n. Since f_# is defined as a sum of convex functions, we can apply Theorem <ref> to obtain f_#^*(α,β) = inf_∑_i+j ≤ n (α_i,j,β_i,j) = (α,β)∑_i+j ≤ n f_#;i,j^*(α_i,j,β_i,j) for any α,β of the form described at the start of Section <ref>. Note that the sum under the inf is over all choices of vectors α_i,j and β_i,j (for 0 ≤ i,j ≤ n-1 and i+j ≤ n) such that ∑_i+j ≤ n (α_i,j, β_i,j) = (α,β). Here we have that f_#;i,j(x,y) := -log(1-e^x_i+y_j), and the function f_#;i,j is convex with domain given by 𝒟_#;i,j := {(x,y) ∈^2n : x_i+y_j < 0}. We then have f_#;i,j^*(α,β) = sup_(x,y) ∈𝒟_#;i,j[⟨x, α⟩ + ⟨y, β⟩ + log(1 - e^x_i+y_j)], and by a straightforward argument this implies 𝒟_#;i,j^* = {c · (e_i, e_j) : c ∈ [0,∞)}. Thus for any c ≥ 0, standard calculus arguments give f_#;i,j^*(c e_i, c e_j) = sup_x_i,y_j[c(x_i + y_j) + log(1-e^x_i+y_j)] = sup_t < 0[ct + log(1-e^t)] = log(c^c/(c+1)^c+1). Combining this with the above expressions then gives f_#^*(α,β) = inf_A ∈ϕ(ℱ_n(N))∑_i+j ≤ nlog(a_ij^a_ij/(a_ij+1)^a_ij+1). Negating and exponentiating both sides then gives the desired result. 
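As a numerical illustration of Corollary <ref> (ours; the chosen point happens to be the vertex average computed in the next section), take the Tesler netflow N=(1,1,1,-3) and the fractional point of ϕ(ℱ_3(N)) with rows (1/3,1/3,1/3), (2/3,2/3,2/3), (2,1,0). For such a tiny instance the prefactor is very lossy, so the resulting bound is far below the true count of 7, but it is valid.

```python
from math import exp, log

def h(x):
    """h(x) = (x + 1) log(x + 1) - x log x, with h(0) = 0."""
    return (x + 1) * log(x + 1) - (x * log(x) if x > 0 else 0.0)

N = (1, 1, 1, -3)
n = len(N) - 1
s = [sum(N[: k + 1]) for k in range(n)]                              # (1, 2, 3)
A = [[1/3, 1/3, 1/3], [2/3, 2/3, 2/3], [2.0, 1.0, 0.0]]              # feasible point of phi(F_3(N))
assert all(abs(sum(A[i]) - s[i]) < 1e-12 for i in range(n))          # row sums are alpha
assert all(abs(sum(A[i][c] for i in range(n)) - s[n - 1 - c]) < 1e-12 for c in range(n))

entropy = sum(h(A[i][c]) for i in range(n) for c in range(n) if i + c <= n)
prefactor = max(h(sk) for sk in s) - 2 * sum(h(sk) for sk in s)
print(exp(prefactor + entropy))   # ~ 1.07: a valid lower bound (the true count is 7), weak at this size
```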
§ COMPUTING THE AVERAGE OF VERTICES OF SOME FLOW POLYTOPES In this section we compute the uniform average of the vertices of the flow polytopes ℱ_n(N) for N with positive netflow N_i>0 and N=(1,0,…,0,-1). We assume a uniform distribution on the vertices of the polytope. In abuse of notation we refer to all the entries on or above the antidiagonal, upper triangular entries (see Proposition <ref>). For N=(N_0,…,N_n-1,-∑_i=0^n-1 N_i) where N_i>0 the uniform average of the vertices of ℱ_n(N) is the flow f(i,j)=c_n-i where c_k = s_n-kk - s_n-k-1k+1 = N_n-kk+1 + s_n-kk(k+1), for s_k=∑_j=0^k N_j. That is, it is represented by the matrix A = [ c_n c_n c_n ⋯ c_n c_n c_n; c_n-1 c_n-1 c_n-1 ⋯ c_n-1 c_n-1 b_n-1; c_n-2 c_n-2 c_n-2 ⋯ c_n-2 b_n-2 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; c_3 c_3 c_3 ⋯ 0 0 0; c_2 c_2 b_2 ⋯ 0 0 0; c_1 b_1 0 ⋯ 0 0 0; ], where b_k = k/k+1 s_n-k-1. Let A=(a_i,j) be desired uniform average, and let X=(x_i,j) be a uniformly random vertex. We first show that a_i,j = c_n-i when j ≤ n-i by induction on i. Note that by Theorem <ref>, each upper triangular entry of row i of a vertex is nonzero in exactly 1/n-i fraction of the vertices. Then using this fact and (<ref>) a_i,j = 𝔼[x_i,j] = 1/n-i(N_i + ∑_r=0^i-1𝔼[x_r,n-i]). By induction we compute a_i,j = 1/n-i(N_i + ∑_r=0^i-1 c_n-r) = 1/n-i(N_i + ∑_r=0^i-1(s_r/n-r - s_r-1/n-(r-1))) = N_i/n-i + s_i-1/(n-i)(n-i+1) = N_i/n-i + s_i/(n-i)(n-i+1) - N_i/(n-i)(n-i+1) = c_n-i. For N=(t,0,…,0,-t) where t>0 the uniform average of vertices of ℱ_n(N) is represented by the matrix t· A where A := [ 2^-(n-1) 2^-(n-1) 2^-(n-2) ⋯ 2^-3 2^-2 2^-1; 2^-(n-1) 2^-(n-1) 2^-(n-2) ⋯ 2^-3 2^-2 2^-1; 2^-(n-2) 2^-(n-2) 2^-(n-3) ⋯ 2^-2 2^-1 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 2^-3 2^-3 2^-2 ⋯ 0 0 0; 2^-2 2^-2 2^-1 ⋯ 0 0 0; 2^-1 2^-1 0 ⋯ 0 0 0 ]. Let A=(a_i,j) be desired uniform average for t=1, and let X=(x_i,j) be a uniformly random vertex. By Theorem <ref>, x_i,n-i = v_i are i.i.d. uniform Bernoulli random variables for all 1 ≤ i ≤ n-1, and v_0 = v_n = 0. Using this and the description of the vertices of ℱ_n(N) in (<ref>), if i,j ≥ 1 then a_i,j = 𝔼[x_i,j] = (1-𝔼[v_i])(1-𝔼[v_n-j])∏_k=i+1^n-j-1𝔼[v_k] = 2^-(n-i-j+1) if i<n-j 𝔼[v_i] = 2^-1 if i=n-j 0 otherwise. Further, if exactly one of i,j is equal to 0 then a_i,j = 𝔼[x_i,j] = (1-𝔼[v_i])(1-𝔼[v_n-j])∏_k=i+1^n-j-1𝔼[v_k] = 2^-(n-i-j). And finally, if i=j=0 then a_i,j = 𝔼[x_i,j] = (1-𝔼[v_i])(1-𝔼[v_n-j])∏_k=i+1^n-j-1𝔼[v_k] = 2^-(n-i-j-1) = 2^-(n-1). For the case N=(t,0,…,0,-t) for t>0 we have that ℱ_n(N)=t·ℱ(1,0,…,0,-1), and so the uniform average of the vertices also dilates by t. For the case that N = t · 2ρ(n) (where 2ρ(n) = (n,n-2,n-4,…,-n+4,-n+2,-n), see Section <ref>) with s_k = t (k+1)(n-k) for all k, we were unable to exactly compute the average of the vertices. However, a few experiments show that the average may be close to the following natural point in the flow polytope: M := t ·[ 1 1 1 ⋯ 1 1 1; 1 1 1 ⋯ 1 1 n-1; 1 1 1 ⋯ 1 2(n-2) 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 1 1 1 ⋯ 0 0 0; 1 1 2(n-2) ⋯ 0 0 0; 1 n-1 0 ⋯ 0 0 0 ], where the subdiagonal entries are given by k(n-k) for 1 ≤ k ≤ n-1. As stated above, we were not able to compute the average of the vertices of ℱ_n(N) where N=2ρ(n). The polytope for the cases n=2,…,5 have 2,7,26 and 219 vertices and averages: [ 1 1; 1 1 ] , [ 5/7 8/7 8/7; 8/7 1 13/8; 8/7 13/8 0 ] , [ 7/13 14/13 16/13 15/13; 14/13 9/13 18/13 37/13; 16/13 18/13 54/13 0; 15/13 37/13 0 0 ] , [ 0.553 1.078 1.132 1.105 1.132; 1.078 0.680 1.187 1.187 3.868; 1.132 1.187 0.973 5.708 0; 1.105 1.187 5.708 0 0; 1.132 3.868 0 0 0 ] . 
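The closed formula of Proposition <ref> is straightforward to implement. The sketch below (ours, in Python, using exact rational arithmetic) rebuilds the matrix A for a strictly positive netflow and checks that its marginals are α and β; for N=(1,1,1,-3) it reproduces the average of the six vertices of the Tesler polytope listed earlier.

```python
from fractions import Fraction

def vertex_average(N):
    """Average of the vertices of F_n(N), for strictly positive N_0, ..., N_{n-1},
    written as a matrix in the phi picture (rows of c's, a subdiagonal b, then zeros)."""
    n = len(N) - 1
    s = [sum(N[: k + 1]) for k in range(n)]                       # s_0, ..., s_{n-1}
    c = {k: Fraction(N[n - k], k + 1) + Fraction(s[n - k], k * (k + 1)) for k in range(1, n + 1)}
    b = {k: Fraction(k, k + 1) * s[n - k - 1] for k in range(1, n)}
    A = [[c[n]] * n]                                              # row 0: n copies of c_n
    for i in range(1, n):
        A.append([c[n - i]] * (n - i) + [b[n - i]] + [0] * (i - 1))
    return A

A = vertex_average((1, 1, 1, -3))
assert [sum(row) for row in A] == [1, 2, 3]                       # row sums give alpha
assert [sum(col) for col in zip(*A)] == [3, 2, 1]                 # column sums give beta
print(A)   # rows (1/3, 1/3, 1/3), (2/3, 2/3, 2/3), (2, 1, 0): the average of the six vertices
```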
It would be interesting to find the number of vertices and average for this case. § FLOW COUNTING LOWER BOUNDS In this section we prove our main lower bounds on flows. Specifically we apply Theorem <ref> to various flow vectors N, using specific choices of flows f given by the average of the vertices of the associated flow polytopes (as computed in Section <ref>). This technique yields lower bounds for the number of flows K_n(N), often given in a relatively complicated product form. We then obtain more explicit lower bounds for the asymptotics of log K_n(N) by combining the Euler-Maclaurin formula (see Lemma <ref>) with a number of elementary bounds on entropy-like functions (see Lemma <ref> and the rest of Appendix <ref>). We now state the bounds obtained in the section in the following results, and the remainder of this section is devoted to proving these bounds. Throughout, as above, we let N = (N_0,N_1,…,N_n) ∈^n+1 denote the netflow vector, and we denote s_k = ∑_j=0^k N_j. Any big-O notation used is always with respect to n, with other parameters fixed. Also, we will sometimes put parameters in the subscripts of the big-O notation to denote that that implied constant may depend on those parameters. For N_k ≥ a · k^p for all k ∈{0,1,…,n-1}, with given a > 0 and p ≥ 0, log K_n(N) ≥ n^2 log n ·(p-1/2) + n^2/2·(log(ap) - 3(p-1)/2) - O(n^1+1/p) p > 1 n^2 ·(a/2(a-2)log(a/2) + 3/2 - 2log 2) - O(n log n), p = 1, a > 2 n^2 ·(a - a log 2)) - O(n log n), p = 1, a ≤ 2 n^p+1log^2 n ·(a(1-p)^2/4(p+1)) - O(n^p+1log n loglog n) p < 1 . Note that the implied constant of the big-O notation may depend on a and p. Also note that the p=1 cases limit to the same bounds at a=2. For N = (1,1,…,1,-n), log K_n(N) ≥n/4log^2 n - O(n log n). Further, log K_n(N) ≥ (n-1) log(n+1) for n ≥ 3000. We have the following. * For N = (n,n,…,n,-n^2), log K_n(N) ≥ n^2 - O(n log n). * For N_i = a · n for all i ∈{0, 1, …, n-1}, with given a ≥1/12, log K_n(N) ≥n^2/2 (2 + log a) - O(n log n). * For N_i = a · n + i for all i ∈{0, 1, …, n-1}, with given a ≥ 0, log K_n(N) ≥n^2/2(1 + log(2a + 1)) - O(n log n). Note that the implied constant may depend on a. * For N_i = n + i for all i ∈{0, 1, …, n-1}, log K_n(N) ≥ 1.198 n^2 - O(n log n). * For N = (t,0,0,…,0,-t), with given t ≥ 1, log K_n(N) ≥n/2log_2^2 t - O(n log_2 t). The implied constant is independent of t. * For N = t · 2ρ(n) = t · (n, n-2, n-4, …, -n+2, -n), with given t ≥ 1, log K_n(N) ≥n^2/2log((1+t)^1+t/t^t) - O(n log(nt)). The implied constant is independent of t. Using Theorem <ref> and Proposition <ref>, for N_i = a · n + i we have log K_n(N) = log F(a · n, n) + ∑_k=1^n-1log C_k, where C_k = 1/k+12kk is the kth Catalan number and -2(a+1)n ≤log F(a · n, n) - n^2 (a+1)^2 f(a/a+1) ≤ 0 with f(x) = x^2 log x - 1/2 (1-x)^2 log (1-x) - 1/2 (1+x)^2 log (1+x) + 2x log 2. For large a, standard computations give (a+1)^2 f(a/a+1) = 1/2log a ± O(1), and ∑_k=1^n-1log C_k = n^2 log 2 - O(n log n). This demonstrates that our bound above for N_i = a · n + i achieves the correct leading term in n and in a. §.§ Positive flows in general Here we state some general lower bounds in the cases where every entry of the netflow vector is positive. Note though that these bounds hold even in the case where the netflow vector is only non-negative. Fix N∈^n+1 (i.e., the entries are not necessarily non-negative). 
Denoting s_k = ∑_j=0^k N_j and c_k = N_n-k/k+1 + s_n-k/k(k+1), we have * If s_k ≥max{0,-(n-k)N_k} for all k ∈{0,1,…,n-1}, K_n(N) ≥1/n^2∏_k=0^n-1s_k^s_k/(1+s_k)^1+s_k∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k. * If s_k ≥max{0,-(n-k)N_k} and N_k ≥1/n-k - (n-k+1) for all k ∈{0,1,…,n-1}, K_n(N) ≥1/n^2 e^n (n!)^2∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k-1. As in Proposition <ref>, we define A = [ c_n c_n c_n ⋯ c_n c_n c_n; c_n-1 c_n-1 c_n-1 ⋯ c_n-1 c_n-1 b_n-1; c_n-2 c_n-2 c_n-2 ⋯ c_n-2 b_n-2 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; c_3 c_3 c_3 ⋯ 0 0 0; c_2 c_2 b_2 ⋯ 0 0 0; c_1 b_1 0 ⋯ 0 0 0; ], where c_k = s_n-k/k - s_n-k-1/k+1 = N_n-k/k+1 + s_n-k/k(k+1) and b_k = k/k+1 s_n-k-1. We first prove part (1) of Theorem <ref>. Using Theorem <ref>, we have K_n(N) ≥max_k{(1+s_k)^1+s_k/s_k^s_k}∏_k=0^n-1(s_k^s_k/(1+s_k)^1+s_k)^2 ∏_k=1^n-1(b_k+1)^b_k+1/b_k^b_k∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k. Further note that for all k we have (b_k+1)^b_k+1/b_k^b_k = k/k+1·(s_n-k-1 + k+1/k)^k/k+1 s_n-k-1+1/(s_n-k-1)^k/k+1 s_n-k-1 ≥k/k+1·(s_n-k-1/s_n-k-1+1)^s_n-k-1/k+1·(s_n-k-1 + 1)^s_n-k-1+1/(s_n-k-1)^s_n-k-1 ≥k/k+1·(1/e)^1/k+1·(s_n-k-1 + 1)^s_n-k-1+1/(s_n-k-1)^s_n-k-1, since (x/x+1)^x ≥1/e for all x > 0. Combining the above expressions then gives K_n(N) ≥∏_k=0^n-1s_k^s_k/(1+s_k)^1+s_k∏_k=1^n-1[k/k+1(1/e)^1/k+1] ∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k ≥1/n^2∏_k=0^n-1s_k^s_k/(1+s_k)^1+s_k∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k, using the fact that ∑_k=2^n 1/k≤∫_1^n 1/x dx = log n. This implies part (1) of Theorem <ref>. We next prove part (2) of Theorem <ref>, which is mainly useful in the case that N_k ≥ 0 ≥1/n-k - (n-k+1) for all k = 0,1,…,n-1. We first have (c_k+1)^c_k+1/c_k^c_k≥ c_k+1 = N_n-k/k+1 + s_n-k/k(k+1) + 1 ≥1/k(k+1)(1 + s_n-k) for k = 1,2,…,n, since we have assumed that N_n-k≥1/k - (k+1) in part (3). In addition, when k=n we have (c_n+1)^c_n+1/c_n^c_n≥ c_n + 1 = 1 + N_0/n+1 + s_0/n(n+1) = 1 + s_0/n≥1/n(1+s_0). Since (x/1+x)^x ≥1/e for all x > 0, we then further have s_k^s_k/(1+s_k)^1+s_k≥1/e(1+s_k). Applying part (1) of Theorem <ref> then gives K_n(N) ≥1/n^2∏_k=0^n-1s_k^s_k/(1+s_k)^1+s_k∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k ≥1/n^2 e^n·1/n∏_k=1^n-11/k(k+1)∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k-1 ≥1/n^2 e^n (n!)^2∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k-1. This implies part (2) of Theorem <ref>. §.§ Polynomial growth As in Proposition <ref>, we define A = [ c_n c_n c_n ⋯ c_n c_n c_n; c_n-1 c_n-1 c_n-1 ⋯ c_n-1 c_n-1 b_n-1; c_n-2 c_n-2 c_n-2 ⋯ c_n-2 b_n-2 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; c_3 c_3 c_3 ⋯ 0 0 0; c_2 c_2 b_2 ⋯ 0 0 0; c_1 b_1 0 ⋯ 0 0 0; ], where c_k = s_n-k/k - s_n-k-1/k+1 = N_n-k/k+1 + s_n-k/k(k+1) and b_k = k/k+1 s_n-k-1. Since N_k ≥ a · k^p for all k ∈{0,1,…,n-1}, (<ref>) implies c_k = N_n-k/k+1 + s_n-k/k(k+1)≥a(n-k)^p/k+1 + a(n-k)^p+1/k(k+1)(p+1) = a(n + kp)(n-k)^p/k(k+1)(p+1). §.§.§ The case of p > 1. Define a=1 and b = ⌊ n - (n/ae)^1/p⌋. Using Lemma <ref>, we first have have log∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k-1≥∑_k=a^b (k-1) log (e · c_k). Using Lemma <ref>, we have ∑_k=a^b k log(n+kp) = n^2/2[log n + (1-1/p^2)log(p+1) + log p + 1/p - 1/2] - n^1+1/plog n/(ae)^1/p± O_a,p(n^1+1/p), and ∑_k=a^b pk log(n-k) = n^2/2[plog n - 3p/2] - n^1+1/plog n/(ae)^1/p± O_a,p(n^1+1/p), and ∑_k=a^b k log k = ∑_k=a^b k log(k+1) = n^2/2[log n - 1/2] - n^1+1/plog n/(ae)^1/p± O_a,p(n^1+1/p), and ∑_k=a^b k log(ae/p+1) = n^2/2log(ae/p+1) ± O_a,p(n^1+1/p). Combining everything gives ∑_k=a^b k logae(n + kp)(n-k)^p/k(k+1)(p+1) = n^2/2[(p-1)(log n - 3/2) + log a + log p + p - log(p+1)/p^2] - O_a,p(n^1+1/p). Note that p ≥log(p+1). 
Further, ∑_k=a^b logae(n + kp)(n-k)^p/k(k+1)(p+1)≤∑_k=1^n log(aen^1+p) = O_a,p(n log n). By Theorem <ref> (2), we then have log K_n(N) ≥log∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k-1 - O(n log n) which implies the desired result. §.§.§ The case of p=1. First, c_k ≥a(n^2-k^2)/2k(k+1) =: c_k' is a lower bound on the possible values of c_k which we will use throughout the p=1 case. Defining S = ∑_k=1^n k log(1+c_k') + ∑_k=1^n k c_k' log(1 + (c_k')^-1), we have log∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k-1≥ S - ∑_k=1^n log(1+c_k') - ∑_k=1^n c_k' log(1 + (c_k')^-1) since (c_k+1)^c_k+1/c_k^c_k is increasing in c_k. Further note that since log(1+t) ≤ t, we have ∑_k=1^n log(1+c_k') + ∑_k=1^n c_k' log(1 + (c_k')^-1) ≤ n log((a+1)n^2) + n = O(n log n). Theorem <ref> (2) then implies log K_n(N) ≥ S - O(n log n). We now split into two subcases: a > 2 and a ≤ 2. For a > 2, we have 1+c_k' ≥an^2-(a-2)k^2/2k(k+1) = a/2[n^2-(1-2/a)k^2/k(k+1)] and 1 + 1/c_k'≥an^2-(a-2)k^2/a(n^2-k^2) = n^2-(1-2/a)k^2/n^2-k^2≥ 1. We first use kc_k' = a(n^2-k^2)/2(k+1)≥a/2(n-k), which implies the following, where ξ = √(1 - 2/a) for a > 2: S ≥∑_k=1^n k loga(n+ξ k)(n-ξ k)/2k(k+1) + a/2∑_k=1^n-1 (n-k) log(n+ξ k)(n-ξ k)/(n+k)(n-k). Using ξ = √(1 - 2/a) and Corollary <ref> gives ∑_k=1^n k loga(n+ξ k)(n-ξ k)/2k(k+1)≥n^2/2(a/a-2log(a/2)) - O_ξ(n). Next, a straightforward computation gives a/2∑_k=1^n-1 (n-k) log(n+ξ k)(n-ξ k)/(n+k)(n-k)≥∑_k=1^n-1 (n-k) logn^2/(n+k)(n-k) = ∑_k=1^n k logn^2/k(2n-k), since this expression is increasing in a because x ·log(1+x^-1) is increasing for x ≥ 0. Using Lemma <ref> we have ∑_k=1^n klog k = n^2/2log n - n^2/4 + n/2log n ± O(n), and ∑_k=1^n klog (2n-k) = n^2/2log n + 2n^2 log 2 - 9n^2/4 + n^2 + n/2log n ± O(n), and and ∑_k=1^n k log (n^2) = n^2 log n + n log n. This implies ∑_k=1^n k logn^2/k(2n-k)≥ n^2(1/4 - 2log 2 + 9/4 - 1) - O(n) ≥n^2/2 (3 - 4log 2) - O(n). Combining everything then implies S ≥n^2/2(3 - 4log 2 + a/a-2log(a/2)) - O_a(n), which bounds S in the case of a > 2. Note that the constant in this expression in front of n^2 approaches 2 - log 2 as a → 2, which aligns with the bound for a ≤ 2 below. For a ≤ 2, we instead have 1+c_k' ≥an^2-(a-2)k^2/2k(k+1)≥a/2[n^2/k(k+1)] and 1 + 1/c_k'≥an^2-(a-2)k^2/a(n^2-k^2)≥n^2/n^2-k^2≥ 1, which, for b=⌊√(a/2)· n⌋ implies S ≥∑_k=1^b k logan^2/2k(k+1) + a/2∑_k=1^n-1 (n-k) logn^2/(n+k)(n-k) = ∑_k=1^b k logan^2/2k(k+1) + a/2∑_k=1^n k logn^2/k(2n-k). Using Lemma <ref> we have ∑_k=1^b klog k = ∑_k=1^b klog(k+1) = an^2/4log n + an^2/8log(a/2) - an^2/8± O_a(n log n), and ∑_k=1^b k log(an^2/2) ≥an^2/2log n + an^2/4log(a/2) ± O_a(n log n). This and the above bounds imply S ≥an^2/4 + an^2/2(1/4 - 2log 2 + 9/4 - 1) - O_a(n log n) = a(1-log 2) n^2 - O_a(n log n), which bounds S in the case that a ≤ 2. Combining the above then finally gives log K_n(N) ≥ S - 2n log n - O_a(n) = n^2/2(a/a-2log(a/2) + 3 - 4log 2) - O_a(n log n), a > 2 n^2/2(2a - 2a log 2)) - O_a(n log n), a ≤ 2 . §.§.§ The case of p < 1. Since (n-x)^p(n+xp) is decreasing in x for x ∈ (0,n), we have for k ≤ϵ_n · n that (n-k)^p(n+kp) ≥ n(n-ϵ_n n)^p = (1-ϵ_n)^p n^1+p, where ϵ_n := 1/log n for all n. Letting A_n := a(1-ϵ_n)^p n^p+1/p+1 and c_k' := A_n/(k+1)^2, we then have in this case that c_k ≥a(n-k)^p(n+kp)/k(k+1)(p+1)≥A_n/k(k+1)≥ c_k'. For all x > 0, we have (x+1)^x+1/x^x≥(1/x)^x and (x+1)^x+1/x^x≥ ex + 1 by Lemma <ref>. 
With this, we have log∏_k=1^n ((c_k'+1)^c_k'+1/(c_k')^c_k')^k+1 ≥log[∏_k=1^⌊√(A_n)⌋(e · c_k')^k+1∏_k=⌊√(A_n)⌋+1^⌊ϵ_n · n⌋(1/c_k')^c_k'(k+1)] ≥∑_k=1^⌊√(A_n)⌋ (k+1)[log(e A_n) - 2log(k+1)] + ∑_k=⌊√(A_n)⌋+1^⌊ϵ_n · n⌋A_n[2log(k+1) - log A_n]/k+1. Note that ϵ_n · n ≥√(A_n)≥ 2 for n large enough. Thus we have ∑_k=1^⌊√(A_n)⌋ (k+1)log(e A_n) = A_n log A_n/2 + A_n/2± O(√(A_n)log A_n), and using Lemma <ref>, we have ∑_k=1^⌊√(A_n)⌋ -2(k+1)log(k+1) = -A_n log A_n/2 + A_n/2± O(√(A_n)log A_n), and using Lemma <ref> (with odd parameter p=1), we have ∑_k=⌊√(A_n)⌋+1^ϵ_n · n 2 A_n log(k+1)/k+1 = A_n log^2(ϵ_n · n) - A_n log^2 A_n/4± O(A_n), and ∑_k=⌊√(A_n)⌋+1^ϵ_n · n -A_n log A_n/k+1 = -A_n log A_n log(ϵ_n · n) + A_n log^2 A_n/2± O(A_n log A_n). Combining the above then gives log∏_k=1^n ((c_k'+1)^c_k'+1/(c_k')^c_k')^k+1≥ A_n log^2(ϵ_n · n/√(A_n)) - O(A_n log A_n) Using Lemma <ref>, we then further compute -2 log∏_k=1^n (c_k'+1)^c_k'+1/(c_k')^c_k'≥ -2 log∏_k=1^n e (c_k' + 1/2) ≥ -2n log(e A_n). Combining everything and using Theorem <ref> (2) and the fact that x ↦(x+1)^x+1/x^x is increasing for x > 0 then gives log K_n(N) ≥a(1-p)^2/4(p+1) n^p+1log^2 n - O(a n^p+1log n loglog n), which is the desired result. §.§ The Tesler (1,1,…,1,-n) case This case fits into the polynomial growth case, but we bound it more specifically here due to its importance. In this case, we have N_k = 1, s_k = k+1, c_k = n+1/k(k+1). Thus by Theorem <ref> (1) we have K_n(N) ≥1/n^2(n+1)^n+1∏_k=1^n (1 + n+1/k(k+1))^k ∏_k=1^n (1 + k(k+1)/n+1)^n+1/k+1 Note further that ∑_k=⌈√(n+1)⌉ - 1^n n+1/k+1log(1 + k(k+1)/n+1) ≥∑_k=⌈√(n+1)⌉^n+1n+1/klog(k^2/n+1). We now use the following lemma. Let f: (a,b) → be unimodal, and let S = (a,b) ∩. Then ∑_k ∈ S f(k) ≥∫_a^b f(t) dt - max_t ∈ [a,b] f(t). Let t_0 ∈ [a,b] be such that f(t_0) maximizes f on [a,b], and let S_- := (a,t_0) ∩ and S_+ := [t_0,b) ∩. Then, ∑_k ∈ S_- f(k) ≥∫_a^⌈ t_0 ⌉ f(t) dt - f(t_0) and ∑_k ∈ S_+ f(k) ≥∫_⌈ t_0 ⌉^b f(t) dt. Combining gives the desired result. Now consider the function f(t): (√(n+1), n+1) →, defined by f(t) := n+1/tlog(t^2/n+1), which is unimodal with maximum f(t_0) = 2/e√(n+1) achieved at t_0 = e √(n+1). Thus by Lemma <ref>, we have ∑_k=⌈√(n+1)⌉^n+1n+1/klog(k^2/n+1) ≥n+1/4log^2(n+1) - 2/e√(n+1). Note further that ∑_k=1^n k log(1 + n+1/k(k+1)) ≥∑_k=1^n k log(1 + 1/k) ≥∑_k=1^n k (1/k - 1/2k^2) = n - 1/2∑_k=1^n 1/k by standard Taylor series bounds. Standard harmonic series bounds then give n - 1/2∑_k=1^n 1/k≥ n - 1/2log n - 1. Combining everything gives log K_n(N) ≥n+1/4log^2(n+1) - (n+1) log(n+1) + n - 2/e√(n+1) - 5/2log(n) - 1. We now determine for which n we have log K_n(N) ≥ (n-1) log(n+1). First note that for n ≥ 4 we have 2log n + n - 2/e√(n+1) - 5/2log(n) - 1 ≥ 0, and thus in this case we have log K_n(N) - (n-1) log(n+1) ≥n+1/4log^2(n+1) - 2(n+1) log(n+1). A simple calculation then implies n+1/4log^2(n+1) - 2(n+1) log(n+1) ≥ 0 for n ≥ e^8 - 1, where e^8 - 1 ≤ 3000. §.§ The N_i = a · n case We use the positive average of vertices from Proposition <ref>, for which we have N_k = a n, s_k = a n(k+1), c_k = a n(n+1)/k(k+1)≥a(n+1)^2/(k+1)^2, which by Lemma <ref> implies ∏_k=0^n-1s_k^s_k/(s_k+1)^s_k+1≥∏_k=0^n-12/e(2s_k + 1)^-1≥∏_k=0^n-11/ea(n+1)(k+1) = 1/(ea)^n (n+1)^n n!. Using Lemma <ref> again, we then have (c_k+1)^c_k+1/c_k^c_k≥ e · c_k + 1 ≥ ea ·(n+1)^2/(k+1)^2, and thus ∏_k=1^n ((c_k+1)^c_k+1/c_k^c_k)^k ≥∏_k=1^n (ea ·(n+1)^2/(k+1)^2)^k = (ea(n+1)^2)^n+12∏_k=1^n (k+1)^-2k. 
We then have n+12log(ea(n+1)^2) = (n+1)^2 log(n+1) + (n+1)^2/2log(ea) - (n+1) log(n+1) - n+1/2log(ea), and -2 ∑_k=1^n k log(k+1) ≥ -(n+1)^2 log(n+1) + (n+1)^2/2 - (n+1) log(n+1) - 1/6log(n+1) ± O(1). Combining this and using Theorem <ref> (1), we obtain log K_n(N) ≥(n+1)^2/2 (2 + log(a)) - 4(n+1) log(n+1) - O_a(n). §.§ The N_i = a · n + i case We use the positive average of vertices from Proposition <ref>, for which we have c_k = (a + 1/2) ·n(n+1)/k(k+1) - 1/2≥(a + 1/2) ·(n+1)^2/(k+1)^2 - 1/2 =: c_k'. Using Lemma <ref>, we then have (c_k'+1)^c_k'+1/(c_k')^c_k'≥ 2c_k' + 1 = (2 a + 1) ·(n+1)^2/(k+1)^2, which implies log∏_k=1^n ((c_k'+1)^c_k'+1/(c_k')^c_k')^k-1 ≥∑_k=1^n 2(k-1) log(n+1/k+1) + ∑_k=1^n (k-1) log(2a + 1) = n^2/2(1 + log(2a + 1)) - O(n log n) §.§ The N_i = n + i case The case fits into the more general N_i = a · n + i case, but we bound it more specifically here to compare it to Proposition <ref>. We use the positive average of vertices from Proposition <ref>, for which we have c_k = 3/2·n(n+1)/k(k+1) - 1/2≥3/2·(n+1)^2/(k+1)^2 - 1/2 =: c_k'. Defining γ := 1/6 and using Lemma <ref>, we then have (c_k'+1)^c_k'+1/(c_k')^c_k'≥ e(c_k' + 1/2 - 1/24c_k') ≥3e/2((n+1)^4 - γ^2 (k+1)^4/(n+1)^2(k+1)^2) Thus we have log∏_k=1^n ((c_k'+1)^c_k'+1/(c_k')^c_k')^k-1≥∑_k=1^n+1 (k-2) [log(3e/2(n+1)^2) + log((n+1)^2 + γ k^2) + log((n+1)^2 - γ k^2/k^2)]. Since f(t) = t log((n+1)^2 + γ t^2) is increasing for t > 0, we have ∑_k=1^n+1 k log((n+1)^2 + γ k^2) ≥∫_0^n+1 t log((n+1)^2 + γ t^2) dt = 1/2γ[((n+1)^2+γ t^2) (log((n+1)^2+γ t^2) - 1)]_0^n+1 ≥(n+1)^2/2[2 log(n+1) - 1 + (1+γ^-1) log(1+γ)] Further, using Corollary <ref> we have ∑_k=1^n+1 k log((n+1)^2 - γ k^2/k^2) ≥(n+1)^2/2 (1-γ^-1) log(1-γ) + n+1/2log(1-γ) - 1/6log(n+1) ± O_γ(1), and further we have ∑_k=1^n-1 k log(3e/2(n+1)^2) = ((n+1)^2/2 - 3n+1/2) [1 + log3/2 - 2 log(n+1)], and finally we also have -2∑_k=1^n+1log((n+1)^4 - γ^2 k^4/k^2) ≥ -4(n+1) log(n+1) - 4(n+1) + 2log(n+1) + O(1). Combining everything gives log∏_k=1^n ((c_k'+1)^c_k'+1/(c_k')^c_k')^k-1≥(n+1)^2/2[log3/2 + log(1-γ^2) + 1/γlog(1+γ/1-γ)] - (n+1) log(n+1) ± O_γ(n). Since γ = 1/6, this gives log K_n(N) ≥ 1.198 n^2 - O(n log n). The coefficient of n^2 here is off from the correct coefficient given in Proposition <ref> by about 0.1. §.§ The (t,0,0,…,0,-t) case For the case that N = (t,0,…,0,-t), which implies s_k = t for all k < n, the average of the vertices is equal to M := t ·[ 2^-(n-1) 2^-(n-1) 2^-(n-2) ⋯ 2^-3 2^-2 2^-1; 2^-(n-1) 2^-(n-1) 2^-(n-2) ⋯ 2^-3 2^-2 2^-1; 2^-(n-2) 2^-(n-2) 2^-(n-3) ⋯ 2^-2 2^-1 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 2^-3 2^-3 2^-2 ⋯ 0 0 0; 2^-2 2^-2 2^-1 ⋯ 0 0 0; 2^-1 2^-1 0 ⋯ 0 0 0 ], as seen in Proposition <ref>. Applying Theorem <ref> gives K_n(N) ≥(t^t/(1+t)^1+t)^2n-1·(t · 2^-(n-1) + 1)^t · 2^-(n-1)+1/(t · 2^-(n-1))^t · 2^-(n-1)∏_k=1^n-1((t · 2^-k + 1)^t · 2^-k+1/(t · 2^-k)^t · 2^-k)^n+2-k. If log_2(et) ≤ n-1, we then have ∏_k=1^n-1((t · 2^-k + 1)^t · 2^-k+1/(t · 2^-k)^t · 2^-k)^n+2-k≥∏_k=1^n-1(et/2^k + 1)^n+2-k≥∏_k=1^⌊log_2(et)⌋ 2^(n+2-k)(log_2(et) - k), by Lemma <ref>. We further have ∑_k=1^⌊log_2(et)⌋ (n+2-k)(log_2(et) - k) ≥ (n+2) (log_2^2(et) - log_2(et)) - (n + 2 + log_2(et)) log_2(et) + 12 = n+2/2(log_2^2(et) - 3log_2(et)) - (log_2(et) + 1) log_2^2(et)/2 ≥n+2/2·log_2(et) ·log_2(et/8) - 1/2log_2^3(et) - 1/2log_2^2(et), and using (<ref>), we also have log_2((t^t/(1+t)^1+t)^2n-1) ≥ -(2n-1) log_2(e(t+1)) ≥ -2n ·(log_2(et) + log_2(e)/t) ≥ -2n ·log_2(et) - 3n/t since log_2(1 + 1/t) ≤log_2(e)/t. 
Combining everything then gives log_2 K_n(N) ≥n+2/2·log_2(et) ·log_2(et/8) - 2n ·log_2(et) - 3n/t - 1/2log_2^3(et) - 1/2log_2^2(et) ≥n+2/2·log_2(et) ·log_2(et/128) - 3n/t - 1/2log_2^3(et) - 1/2log_2^2(et). §.§ The t· 2ρ(n) case For the case that N = t · 2ρ(n) with s_k = t (k+1)(n-k) for all k, we use the matrix described in (<ref>), given by M := t ·[ 1 1 1 ⋯ 1 1 1; 1 1 1 ⋯ 1 1 n-1; 1 1 1 ⋯ 1 2(n-2) 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 1 1 1 ⋯ 0 0 0; 1 1 2(n-2) ⋯ 0 0 0; 1 n-1 0 ⋯ 0 0 0 ], where the subdiagonal entries are given by k(n-k) for 1 ≤ k ≤ n-1. Applying Theorem <ref> gives K_n(N) ≥max_k{(1+s_k)^1+s_k/s_k^s_k}∏_k=0^n-1(s_k^s_k/(1+s_k)^1+s_k)^2 [((1+t)^1+t/t^t)^n+12∏_k=1^n-1(1+tk(n-k))^1+tk(n-k)/(tk(n-k))^tk(n-k)]. To simplify this, note first that log∏_k=0^n-1s_k^s_k/(1+s_k)^1+s_k≥ -2n log n - n log t - O(n) and log∏_k=1^n-1(1+tk(n-k))^1+tk(n-k)/(tk(n-k))^tk(n-k)≥ 2(n-1)log(n-1) + (n-1) log t - O(n) and logmax_k{(1+s_k)^1+s_k/s_k^s_k}≥ 2log n + log t - O(1) using Lemma <ref> and Stirling's approximation. This implies log K_n(N) ≥n+12log((1+t)^1+t/t^t) - 2nlog n - n log t - O(n) ≥n^2/2log((1+t)^1+t/t^t) - 2nlog n - O(n log t). §.§ Asymptotics via maximum flow entropy In this section we finally prove Theorem <ref>. By Theorem <ref>, we have 1 ≥log K_n(N)/sup_f∈ℱ_n(N)ℋ(f)≥ 1 - 2 ∑_i=0^n-1log((1 + s_i)^1+s_i/s_i^s_i)/sup_f∈ℱ_n(N)ℋ(f), where s_i = ∑_j=0^i N_j. Thus we only need to show that the denominator dominates the numerator in the right-most expression of the above inequality. By (<ref>), we first have 2 ∑_i=0^n-1log((1 + s_i)^1+s_i/s_i^s_i) ≤ 2n + 2 ∑_i=0^n-1log(1 + s_i). By assumption we have that N_i ≤ a · (i+1)^p for all k for some fixed a,p ≥ 0, which implies s_i ≤ a ·(i+2)^p+1/p+1 for all i by (<ref>). Thus 2 ∑_i=0^n-1log((1 + s_i)^1+s_i/s_i^s_i) ≤ 2n + 2 ∑_i=0^n-1log(1 + a ·(i+2)^p+1/p+1) ≤ O_a,p(n) + 2 (p+1) ∑_i=0^n-1log (i+2), which implies 2 ∑_i=0^n-1log((1 + s_i)^1+s_i/s_i^s_i) ≤ O_a,p(n log n). On the other hand, N_i ≥ 1 for all i implies sup_f∈ℱ_n(N)ℋ(f) ≥log K_n(N) ≥n/4log^2 n - O(n log n) by Theorem <ref> and Corollary <ref>. This completes the proof. § BOUNDS FROM THE LIDSKII LATTICE POINT FORMULAS In this section we use a known positive formula for K_n(N) coming from the theory of flow polytopes to give bounds for K_n(N) in different regimes than the rest of the paper. Specifically, we consider the case N=(t,0,…,0,-t) where t is much larger than n. In the theory of flow polytopes, there is a positive formula for the number K_n(N) of lattice of points of ℱ_n(N) called the Lidskii formulas due to Lidskii <cit.> for the complete graph and for other graphs by Baldoni–Vergne <cit.> and Postnikov–Stanley <cit.>. See <cit.> for proofs of these formulas via polyhedra subdivisions. Let δ:=(n-1,n-2,…,1,0). Let N =(N_0,N_1,…,N_n-1,-∑_i N_i) where each N_i ∈ℤ_≥ 0. Then the number K_n(N) of lattice points of the flow polytope ℱ_n(N) satisfy K_n(N) = ∑_jN_0+n-1j_0N_1+n-2j_1⋯N_n-1j_n-1· K_n(j-δ), where the sums are over weak compositions j=(j_0,j_1,…,j_n-1) of n2. The values K_n(j-δ) of the Kostant partition function appearing on the formulas above are actually mixed volumes of certain flow polytopes. For i=0,…,n-1, let P_i:=ℱ_n(0,…,0_i,1,0,…,0,-1) For a composition j=(j_0,…,j_n-1), denote by V(P_0^j_0,…,P_n-1^j_n-1) the mixed volume of j_i copies of P_i. 
Let N =(N_0,N_1,…,N_n-1,-∑_i N_i) where each N_i ∈ℤ_≥ 0, then the flow polytope ℱ_n(N) is the following Minkowski sum ℱ_n(N) = N_0 P_0 + N_1 P_1 + ⋯ + N_n-1 P_n-1, For a weak composition j=(j_0,…,j_n-1) of n2 we have that K_n(j-δ) = V(P_0^j_0,…,P_n-1^j_n-1). Since the Lidskii formula (<ref>) for K_n(N) is nonnegative one could use it to bound K_n(N). Let 𝒮^+_n(N) be the set of compositions j=(j_0,…,j_n-1) of n2 with N_0+n-1j_0N_1+n-2j_1⋯N_n-1j_n-1· K_n(j-δ)>0, and S^+_n(N)=|𝒮^+_n(N)|. Also, let M_n(N) := max_j{N_0+n-1j_0N_1+n-2j_1⋯N_n-1j_n-1· K_n(j-δ) |j∈𝒮^+_n(N) }. Let N =(N_0,N_1,…,N_n-1,-∑_i N_i) where each N_i ∈ℤ_≥ 0, then S^+_n(N) · M_n(N) ≥ K_n(N) ≥ M_n(N). From (<ref>) the total, K_n(N), is at least the term with the largest contribution and at most the product of such a term and the number of terms. The bounds in (<ref>) are not so precise in the sense that for a given N it is unclear how to determine the j that yields the maximum M_n(N). Also, some of the terms of (<ref>) vanish. Next, we give a characterization of the compositions in 𝒮_n^+(N). A weak composition j=(j_0,j_1,…,j_n-1) of n2 is in 𝒮^+_n(N) if and only if j_i ≤ N_i+n-i-1 for i=0,…,n-1 and j (n-1,n-2,…,1,0). The first restriction comes from the binomial coefficients in the product N_0+n-1j_0N_1+n-2j_1⋯N_n-1j_n-1. Next, we show that K_n(j-δ)>0 if and only if jδ. For the forward implication, if K_n(j-δ)>0 then the polytope ℱ_n(j -δ) is nonempty. By the projection in Proposition <ref>, the polytope PS_n(j -δ) is also nonempty. By definition of this polytope, being nonempty implies that j- δ0. For the converse, given a composition j=(j_0,…,j_n-1) of n2 satisfying jδ then j-δ0. Thus by Proposition <ref> we have that K_n(j-δ)≥ K_n(0)=1 as desired. Next we look at a specific case like N=(t,0,…,0,-t) for large t. Recall that an inversion of a permutation w=w_1w_2⋯ w_n of n is a pair (i,j) with i<j and w_j>w_i. Let I_n,k (J_n,k) be the number of permutations of {1,2,…,n} with (at most) k inversions <cit.>. In particular J_n,t=n! for t≥n2. From standard facts about permutation enumeration <cit.> and “generatingfunctionology", these numbers have the following generating function for fixed n. ∑_k=0^n2 I_n,k q^k = [n]_q!, ∑_k=0^∞ J_n,k q^k = [n]_q!/1-q, where [n]_q!=(1+q)(1+q+q^2)⋯ (1+⋯+q^n-1) is the q-analogue of n!. Note that I_n,k=I_n,n2-k. Let N=(t,0,…,0,-t), then S^+_n(N)=J_n-1,t. In particular, for t≥n-12 we have that S^+_n(N)=(n-1)!. The number S^+_n(N) counts compositions j=(j_0,…,j_n-1) of n2 satisfying jδ, and j_0 ≤ t+n-1, j_1 ≤ n-2, j_2 ≤ n-3, …, j_n-1≤ 0. It suffices to show that there are I_n-1,n-12-t=I_n-1,t such compositions with j_0=t+n-1 for t=0,1,…,n-12. The first condition is implied by the others by the order reversing property of dominance order . If j_0=t+n-1, then (j_1,…,j_n-2) is a composition of n-12 satisfying j_k ≤ n-1-k for k=1,…,n-2. Such compositions, viewed as inversion tables <cit.>, are in bijection with permutations of n-1 with n-12-t inversions. Next, we determine the M_n(N) for the case N=(t,0,…,0,-t) for large enough t. Let N=(t,0,…,0,-t) for t≥ n^3/2 then M_n(N)=t+n-1n2∏_i=0^n-2 C_i which is achieved at j=(n2,0,…,0). Consider the Minkowski sum decomposition in (<ref>) of ℱ_n(N). By the definition of P_i in (<ref>), this polytope is a translation of the face Q_i of P_0: Q_i = { (f) ∈ P_0 | f_j,j+1=1, for j=0,…,i-1}. 
From the mixed volume interpretation of K_n(j-δ) in Theorem <ref>, since P_i is a translation of Q_i⊆ P_i, and the fact that mixed volumes are monotically increasing then K_n(n2,0,…,0) = V(P_0^n2) ≥ V(P_0^j_0,…,P_n-1^j_n-1)=K_n(j-δ), for compositions j in 𝒮^+_n(N). For j=(n2,0,…,0) by (<ref>) and the symmetry of the Kostant partition function by reversing the flows, we have that K_n(n-12,-n+2,-n+1,…,-1,0) = ∏_i=0^n-2 C_i. Next, for t ≥n^3/2 one can show that t+n-1n2≥t+n-1j_0n-2j_1⋯0j_n-1. Indeed, suppose first j' = j + e_i for some i, then n-2j_1⋯0j_n-1≥1/n-2n-2j'_1⋯0j'_n-1. Next suppose ℓ≤ L and ℓ' = ℓ-k ≥ 0, then Lℓ≥Lℓ'·(L-ℓ/ℓ)^k. Thus for t ≥n^3/2 and for any composition j in 𝒮^+_n(N) we have t+n-1n2n-20⋯00/t+n-1j_0n-2j_1⋯0j_n-1≥(n^3/2+n-1-n2/n2 (n-2))^n2 - j_0≥ 1, as desired. Combining, both inequalities (<ref>) and (<ref>) we obtain that M_n(N) has the desired value at j=(n2,0,…,0). Putting the previous results together gives the main result of this section: bounds for K_n(t,0,…,0,-t) for large values of t. For t≥ n^3/2 we have that (n-1)!·t+n-1n2∏_i=0^n-2 C_i ≥ K_n(t,0,…,0,-t) ≥ t+n-1n2∏_i=0^n-2 C_i. The result follows by using Proposition <ref> for N=(t,0,…,0,-t) and using both Propositions <ref>, <ref> to evaluate M_n(t,0,…,-t) and S_n^+(t,0,…,0,-t), respectively. Note that the regime of t in Corollary <ref> is different from that of the rest of the paper where we assume that t is constant with respect to n. It would be interesting to compare the upper bound above with the upper bound F(t,n)=∏_1≤ i<j≤ n2t+i+j-1/i+j-1 in Proposition <ref>. Note that since in the regime t≥ n^3/2 the lower bound overwhelms (n-1)!, then the log of lower bound gives the correct asymptotics for log K_n(t,0,…,0,-t). § FINAL REMARKS §.§ Integer flows of other graphs Let G be a connected directed acyclic graph with n+1 vertices, let N=(N_0,…,N_n-1,-∑_i N_i) with N_i ∈ℕ as before and denote by K_G(N) the number of lattice points of ℱ_G(N). This number is also of interest for other graphs beyond the complete graph <cit.>. It would be of interest to apply our methods and find bounds for K_G(N). The polytope ℱ_G(N) also projects to a face of a transportation polytope by zeroing out entries corresponding to missing edges in (<ref>). In the case when N=(1,0,…,-1), since the associated flow polytope is integral and has no interior points then K_G(1,0,…,0,-1) is also the number of vertices of the polytope. The associated contingency tables ϕ(f_ij) counted by K_G(1,0,…,0,-1) have marginals α=β=(1,…,1) (see (<ref>)), and so the entries of the tables are 0,1. In this case the associated polynomials are actually real stable, and thus stronger lower bounds are possible (see <cit.>). There is also the following permanent and determinant formula for this number from <cit.>. K_G(1,0,…,0,-1) = (M_G)=(N_G), where M_G is the matrix with (m_ij) with m_i,i-1=1, m_ij=1 if (i,j+1) is an edge of G and 0 otherwise. N_G is the matrix (n_ij) with n_i,i-1=-1, n_ij=1 if (i,j+1) is an edge of G and 0 otherwise. Note that this permanent formula for K_G(1,0,…,0,-1) means we can also apply Gurvits' original capacity-based lower bound in <cit.>. §.§ Other capacity lower bounds The bounds in our paper rely on lower bounding the capacity of a multivariate power series. Recently, <cit.> gave lower bounds on the capacity of real stable polynomials to further improve upon the approximation factor for the metric traveling salesman problem (after the breakthrough work of <cit.>). 
That paper lower bounds _α(p) based on how close the value of ∇log p(1) is to α. The techniques of that paper and of our paper are completely different, and as of now we know of no connection between these techniques other than the goal of lower bounding the capacity in order to explicitly lower bound coefficients. That said, it is an open problem whether or not the techniques of <cit.> can be generalized to apply to (denormalized) Lorentzian polynomials (see Section 9 of <cit.>). §.§ Phase transitions in the polynomial growth case Do the phase transitions observed in the lower bounds of Theorem <ref> represent the actual nature of the number of integer flows, or are they simply an artifact of the proof? Already for the Tesler case, the best known upper bound on log K_n(N) is O(n^2), and so it is possible that the phase transitions of the lower bounds are misleading. Phase transitions for the related problem of counting and random contingency tables with certain given marginals have been observed in <cit.> (predicted by <cit.>), but no analogous results have been proven for integer flows in the polynomial growth cases. We leave it as an open problem to improve or find corresponding upper bounds in these cases. §.§ A flow version of Barvinok's question for contingency tables In <cit.>, Barvinok asks the question of the general log-concavity of the number of contingency tables in terms of the marginal vectors. Concretely, let T_S(α,β) be the number of non-negative integer matrices with row sums α and column sums β and support (non-zero entries) S. If (α,β) = ∑_i=1^k c_i (α^(i),β^(i)) is a convex combination of non-negative integer vectors, then is it always true that T_S(α,β) ≥∏_i=1^k [T_S(α^(i),β^(i))]^c_i? A version of this question can be asked specifically for non-negative integer flows, which gives a special case of the above question. This special case can be explicitly asked as follows. If N = ∑_i=1^k c_i N^(i) is a convex combination of integer vectors summing to 0, then is it always true that K_n(N) ≥∏_i=1^k [K_n(N^(i))]^c_i? Finally, a different but related question is given as follows. Given N,M such that NM (that is, that N domniates M; see Section <ref>), is it always true that K_n(N+M) ≥ K_n(N) · K_n(M)? See <cit.> for other similar questions and results for contingency tables. §.§ Case of the q-analogue of the Kostant partition function The function K_n(N) has a known q-analogue by Lusztig <cit.> that we denote by K_n(N,q) and is defined as follows, K_n(N,q) := ∑_f ∈_n(N) ∩ℤ^n+12 q^|f|, where |f|=∑_i,j f_ij. Alternatively, K_n(N,q) is the coefficient of z^N in the generating function ∑_N K_n(N,q) z^N = ∏_0≤ i<j ≤ n1/1-qz_iz_j^-1, or via (<ref>) as the coefficient of x^αy^β (where α,β are defined as in Theorem <ref>) in the generating function Φ'(x,y,q) = ∏_0 ≤ i,j ≤ n-1 i+j ≤ n-11/1 - q x_i y_j∏_0 ≤ i,j ≤ n-1 i+j = n1/1 - x_i y_j. For fixed q > 0, K_n(N,q) can be bounded via capacity bounds on Φ'(x,y,q) in a way similar to that of the results of this paper (i.e, via Theorem <ref>). On the other hand, it is not clear how to adapt the results of this paper to bound or approximate the coefficients of K_n(N,q). More specifically, the expression 1/(1-xyz) does not fit well into the context of this paper since there is no obvious way to adjust it to have the necessary log-concavity properties. §.§ Approximating volumes of flow polytopes Beyond bounding the number of integer flows, we can also bound the volume of ϕ(ℱ_n(N)). To do this, we adapt results from <cit.>. 
In particular, we can emulate the proof of Theorems 8.1 and 8.2 of <cit.> to achieve the following bound: (ϕ(ℱ_n(N))) ≥ f(S,n,n)/e^2n-1 max_i {s_i}∏_i=0^n-11/s_i^2 _α β(∏_i+j ≤ n-1/log(x_iy_j)), where we define S = {(i,j) ∈{0,1,…,n-1}^2 : i+j ≤ n} and f(S,n,n)^2 is the covolume of the lattice ⟨ S ⟩∩⟨ϕ(ℱ_n(N)) ⟩. According to Section 8 of <cit.>, f(S,n,n)^2 counts the number of spanning trees of the bipartite graph with support given by S. In our setting, the number of such trees is (n!)^2 (see Appendix <ref>). The following analogue of Proposition <ref> then follows from essentially the same proof. Let ϕ(ℱ_n(N)) be the image of ℱ_n(N) in 𝒯(α,β) as defined in (<ref>). We have that _α β(∏_i+j ≤ n-1/log(x_iy_j)) = e^n2 + 2n - 1 sup_A ∈ϕ(ℱ_n(N))∏_i+j ≤ n a_ij. This alternate expression and the above discussion allow us to produce concrete lower bounds for the volume of flow polytopes, which are analogous to our bounds on lattice points. If f is any (not necessarily integer) point of ℱ_n(N) and A = ϕ(f) where ϕ is defined as in (<ref>), then the relative Euclidean volume of ϕ(ℱ_n(N)) can be bounded via (ϕ(ℱ_n(N))) ≥ e^n2 n! max_i {s_i}∏_i=0^n-11/s_i^2∏_i+j ≤ n a_ij. Note that ϕ is an injective linear map, and thus bounds on the volume of ℱ_n(N) can be obtained from the above volume bound. Further, we can use this to achieve specific volume bounds in a similar way to the flow counting bounds of Section <ref>. On the other hand, our vertex-averaging heuristic for choosing the matrix A does not seem to work as well in the volume case. Finally, note the difference in entropy functions within the supremums for counting and volume respectively in this paper is essentially the same as that of <cit.> for counting and volume (see also <cit.>). These functions in these two cases are the entropy functions of the multivariate geometric and exponential distributions, respectively. And further, these distributions are entropy-maximizing distributions on the non-negative integer lattice and on the positive orthant, respectively. §.§ Projecting to a Pitman–Stanley polytope We settled Yip's conjecture (Conjecture <ref>) for large enough n. However, Yip's original question was to find a projection from the polytope ℱ_n(N) for N=(1,1…,1,-n) and the classical permutahedron Π_n that preserves lattice points of the latter. We were not able to find such a projection, however we were able to find the following projections of interest. For a=(a_1,…,a_n) ∈ℤ^n, let PS_n(a)={(y_1,…,y_n) | y_i≥ 0, ∑_i=1^j y_i≤∑_i=1^j a_j, i=1,…,n} be the Pitman Stanley polytope <cit.>. This polytope is a Minkowski sum of simplices <cit.> and is an example of a generalized permutahedra <cit.>. Baldoni–Vergne <cit.> showed that PS_n-1(a) is integrally equivalent to a flow polytope of a graph G_n with edges {(0,1),(1,2),…,(n-2,n-1)}∪{(0,n),(1,n),…,(n-1,n)} and netflow (a_1,a_2,…,a_n,-∑_i a_i). Recall also that for a=1, then PS_n(1) has C_n lattice points and normalized volume (n+1)^n-1. The next result gives a projection between the flow polytope ℱ_n(N) and the Pitman–Stanley polytope. The same projection appears in work of Mészaros–St. Dizier <cit.> in the context of saturated Newton polytopes and generalied permutahedra[The projection considered by the authors <cit.> allowed for other graphs G other than the complete graph k_n+1 with a restricted netflow N depending on G.]. 
For N=(N_0,…,N_n-1,-∑_i=0^n-1 N_i) and N'=(N_0,…,N_n-2), the projection map π:ℱ_n(N)→ℝ^n, f↦ (f_0n,f_1n,…,f_n-2,n) is a surjective map to PS_n-1(N') that preserves lattice points. First we show that π(ℱ_n(N))⊂ PS_n-1( N'). Given f in ℱ_n(N), since each flow in an edge is nonnegative, it suffices to check that ∑_i=0^j f_i,n≤∑_i=0^j N_i for j=0,…,n-2. Note that ∑_i=0^j N_i equals the sum of the flows of the outgoing edges from vertices {0,…,j}, i.e. all the flows on edges starting and ending in vertices {0,…,j} cancel. Thus ∑_i=0^j N_i = ∑_i=0^j ∑_k=j+1^n f_i,k≥∑_i=0^j f_i,n. To show the map is onto, we use the fact that PS_n-1(N') is integrally equivalent to ℱ_G_n(N) and that G_n is a subgraph of the complete graph k_n+1. See Figure <ref>. Restricting to the flows to the second to last vertex also gives a projection to the Pitman–Stanley polytope, see Figure <ref>. Lastly, restricting to the total outgoing flow of each vertex gives a projection to a parallelepiped. For N=(N_0,…,N_n-1,-∑_i=0^n-1 N_i), the map π”:ℱ(N) →ℝ^n-1, f↦ (x_1,…,x_n-1) where x_i=∑_j=i+1^n f_ij is a surjective map to the parallelepiped [N_0,s_0]× [N_1,s_1]× [N_n-1,s_n-1] where s_i=∑_j=0^i N_j. From the netflow constraint on vertex i we have that N_i ≤∑_j=i+1^n f_ij = N_i + ∑_k=0^i f_ki Since N_i≥ 0, the flow on each edge (k,j) is at most the netflow N_i, that is f_kj≤ N_k. This shows that π'(ℱ(N))⊆ [N_0,s_0]× [N_1,s_1]× [N_n-1,s_n-1]. It is straightforward to check that this map is onto. The projections in this section do not give very strong lower bounds. For example for N=(1,1,…,1,-n), they give the lower bounds C_n and n!, respectively. § ACKNOWLEDGEMENTS Both authors acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number RGPIN-2023-03726 and RGPIN-2024-06246]. Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG), [numéro de référence RGPIN-2023-03726 et RGPIN-2024-06246]. A. H. Morales was also partially supported by NSF grant DMS-2154019 and an FRQNT Team grant. We thank the Institut Mittag-Leffler in Djursholm, Sweden and the course of the program on Algebraic and Enumerative Combinatorics in Spring 2020 where the authors met. We thankfully acknowledge the support of the Swedish Research Council under grant no. 2016-06596, and thank Institut Mittag-Leffler for its hospitality. We thank Petter Brändén, Leonid Gurvits, Joel Lewis, Jason O'Neill, Igor Pak, and Martha Yip for helpful comments and suggestions. amsalpha § SUMMARY OF BASIC BOUNDS AND ASYMPTOTICS Throughout the arguments, we use a number of bounds and asymptotic expressions. We compile them here. For all t > 0 we have e (t + 1/2) ≥(t+1)^t+1/t^t≥max{e (t + 1/2 - 1/24t), (e/t)^t} and (t+1)^t+1/t^t≥ et + 1. Note that this second bound is redundant with respect to the first bounds. We first prove the upper bound, which is equivalent to f(t) = 1 + log(t + 1/2) - (t+1) log(t+1) + t log t ≥ 0 for all t > 0. This holds in the limit as t → +∞, and thus it is sufficient to show that f'(t) = 1/t + 1/2 - log(1 + 1/t) ≤ 0 for all t > 0. This holds in the limit as t → +∞, and thus it is sufficient to show that f”(t) = 1/t^2 + t - 1/t^2 + t + 1/4≥ 0 for all t > 0. This is clear, which proves the desired bound. We next show that f(t) = (t+1)log(t+1) - t log t - 1 - log(t + 1/2 - 1/24t) ≥ 0 for all t > 0. This clearly holds for t < 1/24, so we will now prove it for t ≥1/24. 
This also holds in the limit as t → +∞, and thus it is sufficient to show that f'(t) = log(1 + 1/t) - 24t^2 + 1/t(24t^2 + 12t - 1)≤ 0 for t ≥1/24. This holds in the limit as t → +∞, and thus it is sufficient to show that f”(t) = 144 t^2 + 22 t - 1/t^2 (t + 1) (24 t^2 + 12 t - 1)^2≥ 0 for t ≥1/24. The denominator is positive, and the quadratic numerator is positive at t = 1/24 and negative at t=0. Thus the above inequality holds for t ≥1/24, proving the desired bound. We next show that (t+1) log(t+1) - t ≥ 0 for t > 0. This holds in the limit as t → 0^+, and thus it is sufficient to show that f'(t) = log(t+1) ≥ 0 for all t > 0. This is immediate, which proves the desired bound. Finally we show that f(t) = (t+1)^t+1/t^t - et - 1 ≥ 0 for all t > 0. This holds in the limit at t → 0^+, and thus it is sufficient to show that f'(t) = log(1 + 1/t) ·(t+1)^t+1/t^t - e ≥ 0 for t > 0. This holds in the limit as t → +∞, and thus it is sufficient to show that f”(t) = [log^2(1 + 1/t) + 1/t+1 - 1/t] (t+1)^t+1/t^t≤ 0 for all t > 0. This is equivalent to showing that g(s) = log^2(1 + s) + s/1+s - s ≤ 0 for all s = 1/t > 0. This holds for s = 0, and thus it is sufficient to show that g'(s) = 2 log(1+s) + 1/1+s - (1+s)/1+s = h(s)/1+s≤ 0 for all s > 0. Since h(0) = 0, it is thus sufficient to show that h'(s) = -(1 - 1/1+s)^2 ≤ 0 for all s > 0. This is immediate, which proves the desired bound. §.§ Bounds on e. For all x > 0 we have <ref>sec:bounds_asymptotics.B1(x/x+1)^x ≥1/e and (x+1/x)^x+1≥ e. §.§ Bounds on product of Catalan numbers. This is from <cit.>. <ref>sec:bounds_asymptotics.B2log(∏_i=1^n-1 C_i) = n^2log n -3/2log n +O(n). §.§ Other asymptotic expression and bounds. We recall various other asymptotic expressions and bounds. The bounds here are obtainable using simple integration approximation or a standard application of the Euler-Maclaurin formula (Lemma <ref>). <ref>sec:bounds_asymptotics.B3√(2π n)·n^n/e^n≤ n! ≤ e√(n)·n^n/e^n <ref>sec:bounds_asymptotics.B4k^p+1/p+1≤∑_j=0^k j^p ≤(k+1)^p+1/p+1 for p ≥ 0 <ref>sec:bounds_asymptotics.B5log(n+1) ≤∑_k=1^n 1/k≤ 1 + log n and ∑_k=1^n 1/k = log n + O(1), <ref>sec:bounds_asymptotics.B6n^2log n/2 - n^2/4 + 1/4≤ ∑_k=1^n k log k ≤(n+1)^2log(n+1)/2 - (n+1)^2/4 + 1/4 and ∑_k=1^n k log k = n^2log n/2 - n^2/4 + nlog n/2 + log n/12 + O(1), <ref>sec:bounds_asymptotics.B7log 2/2 + log^2 (n+1) - log^2 3/2≤ ∑_k=1^n log k/k≤log 2/2 + log 3/3 + log^2 n - log^2 3/2 and ∑_k=1^n log k/k = log^2 n/2 + O(1). §.§ Euler-Maclaurin formula. All of the above bounds can be derived from the Euler-Maclaurin formula, stated below. We make heavy use of Lemma <ref>, which is a straightforward corollary. Given integers a < b, a positive odd integer p, and a smooth function f: [a,b] → we have |∑_k=a^b f(k) - [∫_a^b f(t) dt + f(b) + f(a)/2 + ∑_k=1^p-1/2B_2k/(2k)!(f^(2k-1)(b) - f^(2k-1)(a))]| ≤2 ·ζ(p)/(2π)^p∫_a^b |f^(p)(t)| dt, where B_k is the k^th Bernoulli number and ζ is the Riemann zeta function. Let a<b be integers, and suppose c,d are real with r := -d/c such that ct+d > 0 for all t ∈ [a,b]. Letting γ := min{|b-r|,|a-r|}, we have ∑_k=a^b k log(ck+d) = b^2/2log|b-r| - a^2/2log|a-r| + r^2/2log|a-r/b-r| + b^2 log|c|/2 - a^2log|c|/2 - (b+r)^2/4 + (a+r)^2/4 + b/2log|b-r| + a/2log|a-r| + (a+b)log|c|/2 - 1/12log|a-r/b-r| + O(|r|+1/γ + |r|/γ^2). We compute the approximation of the sum S̃, given by Lemma <ref> with p=3 and f(t) = t log(ct+d). 
We have S̃ = ∫_a^b f(t) dt + f(a)+f(b)/2 + f'(b) - f'(a)/12, where ∫_a^b f(t) dt = [(t^2-r^2) log|t-r|/2 + t^2 log|c|/2 - (t+r)^2/4 + r^2/4]_a^b, and f(b)+f(a)/2 = b log|b-r| + a log|a-r| + (a+b)log|c|/2, and f'(b) - f'(a)/12 = 1/12[log|t-r| + log|ec| + r/t-r]_a^b = -1/12log|a-r/b-r| + r/12(b-r) - r/12(a-r), and ∫_a^b |f^(3)(t)| dt = [-1/t-r]_a^b + |[r/(t-r)^2]_a^b| = -1/b-r + 1/a-r + |r/(b-r)^2 - r/(a-r)^2|. We then have r/12(b-r) - r/12(a-r) + 2 ζ(3)/(2π)^3∫_a^b |f^(3)(t)| dt = O(|r|+1/γ + |r|/γ^2). Combining everything and simplifying yields the result. For any fixed ξ∈ [0,1), we have ∑_k=1^n k logn^2-ξ^2 k^2/k^2 = n^2/2(1/ξ^2 - 1) log(1/1-ξ^2) + n/2log(1-ξ^2) - 1/6log n ± O_ξ(1), where lim_ξ→ 0(1/ξ^2 - 1) log(1/1-ξ^2) = 1. Using Lemma <ref>, we compute (with parameters a=0,b=n,c=-ξ,r=n/ξ) ∑_k=1^n k log (n - ξ k) = n^2/2log n + n^2/2[(1 - 1/ξ^2) log (1-ξ) - 1/2 - 1/ξ] + n/2log n + n/2log(1-ξ) ± O_ξ(1), and (with parameters a=0,b=n,c=ξ,r = -n/ξ) ∑_k=1^n k log (n + ξ k) = n^2/2log n + n^2/2[(1 - 1/ξ^2) log (1+ξ) - 1/2 + 1/ξ] + n/2log n + n/2log(1+ξ) ± O_ξ(1), and (with parameters a=1,b=n,c=1,r=0) ∑_k=1^n k log k = n^2/2log n - n^2/4 + n/2log n + 1/12log n ± O(1). This gives ∑_k=1^n k logn^2-ξ^2 k^2/k^2 = n^2/2(1/ξ^2 - 1) log(1/1-ξ^2) + n/2log(1-ξ^2) - 1/6log n ± O_ξ(1). Note that a straightforward argument gives the same expression in the limit when ξ = 0. § NUMBER OF SPANNING TREES For a tree T with vertex set V, let m(T) = ∏_v∈ V x_v^deg_T(v)-1. let t_G be the following multivariate sum over spanning trees. t_G(x_1,…,x_n) = ∑_T m(T). Let G(n)⊂ K_n,n be the bipartite graph with edges (i,n+j) if i-j<2. Let λ⊂ m× n with λ_1=n. Let G(λ) be a bipartite graph with vertices {1,…,m}∪{m+1,…,m+n} and edges (i,m+j) if (i,j) ∈ [λ], then t_G(λ)(x_1,…,x_m; y_1,…,y_n) = ∏_i=2^m (x_1+⋯+x_λ_i) ∏_j=2^n (y_1+⋯+y_λ'_j), in particular G(λ) has ∏_i λ_i ∏_j λ'_j spanning trees. For the graph G(n) we have that t_G(n)(x_1,…,x_n;y_1,…,y_n) = (x_1+x_2)(x_1+x_2+x_3)⋯ (x_1+⋯ + x_n)· (y_1+y_2)(y_1+y_2+y_3)⋯ (y_1+⋯ + y_n), in particular G(n) has (n!)^2 spanning trees.
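The (n!)^2 count above is easy to check numerically with the matrix-tree theorem. The sketch below (not part of the original argument) builds the bipartite graph G(n) under the reading that (i, n+j) is an edge whenever i-j<2, i.e. j ≥ i-1, and compares a cofactor of the Laplacian with (n!)^2 for small n; the function name and this edge convention are our assumptions.

import numpy as np
from math import factorial

def spanning_trees_G(n):
    # Bipartite graph G(n): left vertices 1..n, right vertices n+1..2n,
    # with an edge (i, n+j) whenever i - j < 2, i.e. j >= i - 1 (our reading).
    A = np.zeros((2 * n, 2 * n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i - j < 2:
                A[i - 1, n + j - 1] = A[n + j - 1, i - 1] = 1.0
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    # Matrix-tree theorem: any cofactor of L counts the spanning trees.
    return round(np.linalg.det(L[1:, 1:]))

for n in range(2, 7):
    assert spanning_trees_G(n) == factorial(n) ** 2

For n = 2, …, 6 the cofactor reproduces 4, 36, 576, 14400 and 518400, in agreement with (n!)^2.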
http://arxiv.org/abs/2406.08218v1
20240612134938
Figuratively Speaking: Authorship Attribution via Multi-Task Figurative Language Modeling
[ "Gregorios A Katsios", "Ning Sa", "Tomek Strzalkowski" ]
cs.CL
[ "cs.CL" ]
Impact of environmental interaction on bias induced circular current in a ring nanojunction Santanu K. Maiti June 17, 2024 =========================================================================================== § ABSTRACT The identification of Figurative Language (FL) features in text is crucial for various Natural Language Processing (NLP) tasks, where understanding of the author's intended meaning and its nuances is key for successful communication. At the same time, the use of a specific blend of various FL forms most accurately reflects a writer's style, rather than the use of any single construct, such as just metaphors or irony. Thus, we postulate that FL features could play an important role in Authorship Attribution (AA) tasks. We believe that our is the first computational study of AA based on FL use. Accordingly, we propose a Multi-task Figurative Language Model (MFLM) that learns to detect multiple FL features in text at once. We demonstrate, through detailed evaluation across multiple test sets, that the our model tends to perform equally or outperform specialized binary models in FL detection. Subsequently, we evaluate the predictive capability of joint FL features towards the AA task on three datasets, observing improved AA performance through the integration of MFLM embeddings. § INTRODUCTION Figurative Language (FL) constructs, such as metaphor, simile, and irony, are common in various forms of communication, such as literature, poetry, and speech. Their use can enrich the meaning, creativity, and persuasiveness of a message and help to achieve an intended impact on the reader. The use of certain forms of FL in writing reflects the authors' style and background, including their education, personality, social context, and worldviews. Therefore, we hypothesise that the choice of figurative language features in (written) communication may reveal the writer's cognitive and linguistic basis that underlie their production, and how their selection is influenced by the context, the intention, and the emotion of the writer. In this paper, we introduce a multi-task classification model designed to detect multiple Figurative Language (FL) features in a body of text. The first research question (RQ1) we seek to answer is: "Is a model that is trained to detect multiple FL features simultaneously more effective than multiple specialized models, each trained to detect a specific FL feature?" Through our research, we demonstrate that this multi-task model is indeed more effective than using several binary models. In our research, we utilize 13 publicly available datasets to train and evaluate both binary and multi-task models. We deliberately opted against integrating additional datasets specifically designed for metaphor detection, which is only one of the phenomena we study. The rationale behind this decision was creating a more balanced training data, which otherwise would have been disproportionately skewed our study towards metaphor detection, given the substantially more resources dedicated to this phenomenon. At the same time, the lack of annotated corpora for other figurative language features such as personification, metonymy, oxymoron, etc. necessarily limited our initial study to the six FL constructs that are generally well represented among these 13 datasets: Metaphor, Simile, Idiom, Sarcasm, Hyperbole, and Irony. All our binary models and the multi-task model are based on RoBERTa <cit.>. 
After training the specialized binary models on the combined datasets, we used them to automatically label our training corpora with all applicable FL features. This multi-label dataset was then used to train our multi-task model. Afterwards, we compare our Multi-task Figurative Language Model (MFLM) against the binary classifiers on the 13 test sets. The results showed that MFLM matched or outperformed the binary classifiers in five test sets and achieved higher task-specific performance than the binary models in another three test sets, which suggests that these features are not independent from one another. After training our multi-task figurative language classifier, we put forward a second research question (RQ2): "Does the incorporation of Figurative Language (FL) features enhance performance in Authorship Attribution (AA) tasks?" To answer this, we evaluate the impact of the FL features learned by our Multi-task Figurative Language Model (MFLM) on three publicly available AA datasets, each consisting of documents with varying topical content and number of authors. For each dataset, we train Multi-Layer Perceptron (MLP) classifiers, using MFLM sentence embeddings and other baselines as input features. The baselines consist of classical Stylometric features, character and word n-gram TF-IDF vectors, and generic sentence embeddings. Our results demonstrate that the AA task performance is indeed improved by combining MFLM embeddings with other baselines. To our knowledge, this work is the first to examine the applicability of FL features in AA. We should note here that we did not expect that the FL features alone would be sufficient to perform AA; rather we set off to demonstrate that incorporating combined FL features improves AA performance when integrated with more basic stylistic features, particularity for longer texts. The results show that the latter is generally true; however, we found that the FL features perform nearly as strong and sometimes better on their own. This supports our initial stipulation that FL use is highly personalized, and thus an excellent predictor of authorship. We make our code and data available in our GitHub repository[Figuratively Speaking: <https://github.com/HiyaToki/Figuratively-Speaking>]. § RELATED WORK Most of the previous studies on Figurative Language (FL) feature detection focus on the features independently. An earlier work, <cit.>, used lexical semantic features of the words to discriminate metaphors from literals. More recently, <cit.> utilized metaphor identification theories using RoBERTa to predict whether a word in a sentence is metaphorical or not. A similar shift from linguistic feature based approach to pre-trained language model (PLM) based approach is observed in simile detection. <cit.> extracted features such as topic-vehicle similarity and imageability to separate similes from literal comparisons. <cit.> used BERT <cit.> and RoBERTa in simile property probing tasks and concluded that the PLMs still underperformed humans. PLMs are also applied to the detection of sarcasm <cit.>, hyperbole <cit.>, irony <cit.>, and idiom <cit.>. Among the studies that work on more than one features, <cit.> used datasets cross-labeled with metaphor and hyperbole, and found that the multi-task learning approach performed better than the single-task approach on both features. 
<cit.> rendered the FL detection into a multi-task natural language inference (NLI) problem, developed a NLI dataset of four FL features, and tested with several experimental systems. <cit.> collected datasets on idiom and simile and developed knowledge enhanced RoBERTa-based models. However, their task was to predict the correct continuation of the given narrative, not FL feature detection. <cit.> built a dataset covering 9 FL features plus literals. They tested three baseline systems in a multi-class classification task and BERT outperformed the other two systems. There is a rich literature in the field of Authorship Attribution (AA). Various methods have been applied to the task, ranging from SVM based approaches, such as <cit.>, to transformer based models, like <cit.>. In PAN-2019 cross-domain AA challenge <cit.>, most of the submissions used n-gram features (char, word, part-of-speech) and an ensemble of classifiers (SVM, Logistic Regression, etc). <cit.> fine-tuned a BERT model for AA task and tested the model on three datasets including IMDB-62 <cit.>. In a recent review article <cit.>, feature based methods and embedding based methods were tested and compared on the same datasets. They used n-grams, summary statistics and co-occurance graphs as features, as well as static char/word embeddings and transformer-based sentence embeddings. § FIGURATIVE LANGUAGE MODELING In our study, we investigate the potential benefits of combining Figurative Language (FL) features as opposed to analyzing each feature independently. To answer our first research question, we examine whether training a FL classification model capable of jointly labeling text with relevant features would outperform a singular binary model specialized in detecting only one feature. This idea stems from noticing that in both spoken and written language, individuals intertwine various elements of figurative speech to effectively convey their intended message. Consequently, FL features frequently co-occur, and understanding the interplay between these features may offer valuable insights for improving their identification accuracy. This research builds upon prior studies that explored the simultaneous detection of metaphors and sarcasm, as well as hyperbole and sarcasm. In our investigation, we aim to simultaneously learn to detect six distinct FL features: Metaphors, Simile, Sarcasm, Hyperbole, Idiom, and Irony. §.§ Data In our research to learn to classify FL phenomena, we rely on publicly available datasets. In total, we work with 13 individual corpora, which are summarized in Table <ref> (see Appendix <ref> for additional details). While space constraints prevent exhaustive descriptions, we encourage interested readers to explore the original works by the dataset creators for comprehensive insights into the data collection and annotation processes. Among the datasets we analyze, the iSarcasm corpus <cit.> stands out as truly multi-labeled. It includes training and testing examples annotated with labels such as sarcasm, irony, overstatement (hyperbole), understatement, satire, and rhetorical questions. For instance, an excerpt from the iSarcasm training set reads: "Can’t wait to be back at uni so I can order more shoes and clothes without my mum telling me off", which is labeled with both sarcasm and hyperbole. In contrast, several other datasets adopt a multi-class approach. Each example in these datasets corresponds to a single applicable label. 
Additionally, some datasets focus exclusively on specific FL phenomena, employing positive and negative examples (e.g., feature_X and not_feature_X) to create a binary distinction. When dealing with FL datasets, it’s crucial to consider how negative examples are constructed. Some datasets construct the negative class (i.e., not_feature_X) by ensuring that samples represent true literal sentences devoid of any FL speech. The FLUTE corpus <cit.> is an example of this approach, where FL sentences are paired with their rephrased literal counterparts. For instance, the figurative sentence (metaphor): "A break up can leave you with a broken heart" is paired with the literal sentence: "It's hurtful when a breakup makes you feel lonely and sad". Other datasets annotate the negative class as simply not containing the FL phenomena described by the positive class. For instance, in the Irony SemEval 2018 corpus <cit.>, sentences that are labeled as not_irony may still exhibit other FL traits. Consider the sentence: "Look for the girl with the broken smile" which, although not ironic, contains a metaphor that is not explicitly annotated. In our pipeline, we apply minimal pre-processing to the sentences from these corpora, and we load them into our combined collection, retaining human annotations relevant to our work. Notably, we focus on the six FL features listed in Table <ref>, ignoring classes beyond this scope. At this stage, we clearly distinguish between literal sentences and negative class sentences labeled as not_feature_X. In our study, we encounter various datasets with distinct characteristics regarding their train/dev/test splits. Some datasets come with a predefined splits, where we merge the training and development sets into a single training set, reserving the original test set solely for evaluation. In cases where datasets lack existing splits, we adopt a systematic approach, setting aside a 10% stratified sample for testing. The entire collection consists of 69168 training and 9729 testing examples. §.§ Binary Models To detect the various FL phenomena, we create task-specific binary classifiers. This process involves combining datasets annotated with examples relevant to each specific feature. For instance, to train a classifier for metaphors, we aggregate data from PIE-English, FLUTE, LCC, and MOH datasets. Similarly, for simile classification, we gather data from PIE-English, FLUTE, MSD23, and Figurative Comparisons datasets. The combination of datasets allows us to establish both positive and negative sets for each classification task. In the context of training a metaphor classifier, the positive set comprises examples exhibiting metaphoric expressions, while the negative set encompasses instances without metaphors. As detailed in Section <ref>, certain datasets exclusively utilize literal examples for constructing the negative class, whereas others use examples not containing the FL phenomena described by the positive class. Thus, achieving a balanced representation necessitates the inclusion of negative samples from both types of datasets. In our approach, positive and negative examples are retrieved from the combination datasets corresponding to the specific task, while literal examples are sourced from across all datasets. The final training set for each task is formed by selecting all positive examples and supplementing them with an equal number of negative and literal examples. 
Specifically, if the size of the positive class is denoted as N, we sample N/2 negative and N/2 literal examples. In scenarios where there are insufficient negative examples, we augment the dataset with an appropriate number of literal examples to ensure a total of 2N training instances. During training, the labels of literal examples are transformed to not_feature_X, aligning with our objective to create robust binary classifiers capable of discerning sentences containing the specific feature from those that do not. For detailed information on the number of training samples per task, please refer to the Appendix <ref>. Subsequently, we train individual RoBERTa <cit.> models[RoBERTa-Large: <https://huggingface.co/FacebookAI/roberta-large>] for each task using a standardized set of hyper-parameters across all training jobs: Epochs: 5, Learning Rate: 2e-5, Weight Decay: 0.01, Warm-up Ratio: 0.1, Batch Size: 16. The time required to train a binary model averaged at approximately 80 minutes, using a single NVidia RTX A6000 GPU. §.§ Multi-Task Model We proceed to train a Multi-task Figurative Language Model (MFLM) that can label a sentence with all applicable features in a single pass. For this, we convert our combined training dataset into a multi-label format. We use the array of binary models to assign all the possible labels to each training sentence in our corpora. Consequently, we obtain an augmented FL training corpus, for which every sentence has a corresponding list of predicted FL labels. To produce a high quality training set, we keep only the examples where the predicted labels are consistent with the original human annotations. For instance, if a sentence is annotated by humans as: [metaphor, idiom], we accept predictions such as: [metaphor, idiom, simile, not_irony, not_hyperbole, not_sarcasm], but we reject predictions like: [metaphor, not_idiom, not_simile, not_irony, not_hyperbole, not_sarcasm], due to the not_idiom prediction's inconsistency. In this manner we create a dataset of 61264 sentences, discarding 7904 text-prediction pairs that conflict with human annotations. The distribution of labels in the dataset is shown in Table <ref>. We allocate 10% of this training set to be used as a development set, facilitating the identification of the optimal probability threshold for each feature. Leveraging both automatically generated labels and human annotations, we obtain two distinct sets of thresholds. One set is optimized based on the human labels, while the other set is calibrated using the automatic labels. We follow the same hyper-parameter set-up as the binary model training, and the average time to train the multi-task model is about 326 minutes, using a single NVidia RTX A6000 GPU. Our pipeline of training the individual binary FL models, augmenting the FL training collection with predicted labels and fine-tuning the MFLM, is illustrated in Figure <ref>. §.§ Evaluation and Results To evaluate both binary and multi-task approaches, we use the reserved task-specific testing sets. In Tables <ref> and <ref>, we report the weighted average F1-score obtained from a single run. The rows marked as Metaphor, Simile, Sarcasm, Hyperbole, Idiom and Irony refer to binary models while the rows marked as MFLM refer to our multi-task model. MFLM-h and MFLM-b refer to predictions acquired by tuning the probability thresholds on the development set using human annotations and binary predictions respectively. 
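For concreteness, the consistency filter used to build the multi-label training set can be sketched as follows. It mirrors the example above (gold annotation [metaphor, idiom]); treating an explicit human not_feature_X annotation symmetrically is our assumption, since the description only spells out the positive-label case, and all identifiers are illustrative rather than taken from the released code.

FEATURES = ["metaphor", "simile", "sarcasm", "hyperbole", "idiom", "irony"]

def is_consistent(human_labels, predicted_labels):
    # Reject a sentence as soon as a binary-model prediction contradicts
    # one of its human annotations; otherwise keep it for MFLM training.
    for feature in FEATURES:
        if feature in human_labels and "not_" + feature in predicted_labels:
            return False   # gold positive label contradicted
        if "not_" + feature in human_labels and feature in predicted_labels:
            return False   # gold negative label contradicted (assumed rule)
    return True

# Example from the text: gold annotation [metaphor, idiom]
gold = {"metaphor", "idiom"}
ok = {"metaphor", "idiom", "simile", "not_irony", "not_hyperbole", "not_sarcasm"}
bad = {"metaphor", "not_idiom", "not_simile", "not_irony", "not_hyperbole", "not_sarcasm"}
assert is_consistent(gold, ok) and not is_consistent(gold, bad)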
Due to space limitations, we present a single column for the multi-class test sets. Nonetheless, our binary models were evaluated appropriately, by treating annotations from unrelated tasks as not_feature_X. For instance, when evaluating the Metaphor binary model on the FLUTE test set, simile, sarcasm and idiom ground truth labels become not_metaphor. In contrast, since our MFLM can inherently support all classes, we report weighted F1-score without altering the ground truth labels. The MFLM demonstrates competitive or superior performance compared to binary classifiers across different test sets. Specifically, in 5 out of 13 tests, the MFLM either matches or surpasses binary models. Furthermore, in 3 tests, the MFLM exhibits comparable or superior performance in specific tasks. For instance, MFLM-h performs equally well as the Simile and Sarcasm models on the FLUTE test sets, achieving F1-scores of 0.98 and 0.97 respectively. Moreover, the MFLM-h surpasses the Sarcasm model on the Sarcasm Corpus test set, with F1-scores of 0.82 and 0.80 respectively. On the same test set, the Hyperbole model outperforms the MFLM-h in the hyperbole task, with F1-scores of 0.56 and 0.33 respectively. In the PIE-English test set, the MFLM-h excels over the Metaphor binary model on the metaphor task with 0.96 versus 0.92 F1-score respectively, and matches the performance of the Idiom model. This supports our first research question and highlights the versatility and effectiveness of the MFLM across different linguistic tasks and datasets. §.§.§ Error Analysis To pinpoint the weaknesses and strengths of our MFLM, we conduct a manual error analysis, scrutinizing samples where the multi-task and/or binary models disagree with the ground truth. For each case, we display a few random examples in Table <ref>, while more samples are presented in the Appendix <ref> for further reference. Our findings indicate that the majority of miss-classifications made by the MFLM stem from inaccuracies or incompleteness in the annotation of input sentences. Nonetheless, the predictions generated by the MFLM demonstrate a reasonable level of accuracy in most instances and carry on to experiment using our proposed multi-task FL model and evaluate its appropriateness on the Authorship Attribution (AA) task. § AUTHORSHIP ATTRIBUTION We proceed to investigate the effectiveness of our Multi-task Figurative Language Model (MFLM) in the closed-case Authorship Attribution (AA) downstream task. AA involves classifying texts to determine their respective authors from a known set of candidates. Specifically, given a training corpus consisting of N authors, the objective is to predict the author of each document in the test set by selecting from the set of N authors. Our second research question proposes that embeddings incorporating figurative language features will enhance performance in the AA task. This concept extends from stylometric analysis <cit.>, which traditionally concentrates on discerning patterns within written text. Stylometric analysis examines various aspects of writing style, including word selection, sentence construction, punctuation usage, and vocabulary preferences. To the best of our knowledge, our study is the first of its kind to utilize a Transformer model that has been fine-tuned for multi-task FL classification, towards the AA task. Previous research in this area minimally explored the applicability of FL features for this specific task. 
§.§ Data In our Authorship Attribution (AA) experiments, we employ three distinct, publicly accessible datasets. The first dataset, IMDb-62 <cit.>, comprises 1000 movie reviews from each of the 62 authors. These reviews are relatively short, averaging around 100 words. The IMDB-62 dataset does not have a predetermined train/test split, therefore we reserve a 10% stratified sample for testing. This yields a training set of 55800 examples and a testing set of 6200 samples. The second dataset, PAN-2006 <cit.>, is focused on corporate and industrial topics. It includes short texts of approximately 500 words. The training set comprises 2500 texts, with 50 texts per author. Similarly, the test set consists of 2500 texts, with 50 texts per author, ensuring no overlap with the training data. The third and final dataset, PAN-2018 <cit.>, contains medium-length texts of around 800 words each, centered on fan fiction. This dataset is divided into four problems, each with a different number of authors (20, 15, 10, and 5). However, each author consistently contributes seven texts. The test sets vary in the number of texts they contain, with 79, 74, 40, and 16 texts respectively. In our experiments, we use only the English texts. §.§ Baselines In our Authorship Attribution (AA) task, we evaluate the performance of our MFLM against four different baselines. The first baseline is built upon classical Stylometric features. We implement 52 text metrics using the cophi[cophi: <https://github.com/cophi-wue/cophi-toolbox>] and textstat[textstat: <https://github.com/textstat/textstat>] Python packages. These metrics are used to form a document vector with 52 stylometric features. For a more detailed explanation of these features, please refer to the Appendix <ref>. The second baseline utilizes the all-roberta-large-v1[all-roberta-large-v1: <https://huggingface.co/sentence-transformers/all-roberta-large-v1>] Sentence Embedding <cit.> model, which we refer to as SBERT in the following sections. This model is comparable to our MFLM since it is also based on RoBERTa-Large, but without the multi-task FL classification fine-tuning. With SBERT, we generate a 1024-dimensional document vector. This vector is computed by averaging the individual sentence embeddings for each input text. The third and fourth baselines in our study are constructed using word and character n-grams, respectively. We utilize the Python package scikit-learn <cit.> to analyze the texts and identify the 1024 most common n-grams from the training dataset, where the value of n varies from 1 to 5. We exclude stop words from the input texts during this process. Subsequently, we compute the Term Frequency-Inverse Document Frequency (TF-IDF) values for these n-grams across all documents, resulting in 1024-dimensional sparse document vectors. §.§ Evaluation and Results For the evaluation, we begin by encoding all texts in the AA datasets utilizing our MFLM model and the baselines. To create the embeddings using the MFLM, we discard the multi-task classification layer and directly utilize the underlying Transformer model. The sentence embedding is computed by mean-pooling all token embeddings, including the [CLS] token, taken from the last hidden layer. To create the document embedding, we average the embeddings of individual sentences. This allows us to create a 768-dimensional vector for each document. Following this encoding step, we construct Multi-Layer Perceptron (MLP) classifiers for each test case and features combination. 
These MLP models consist of a single hidden layer comprising 1024 units and are implemented using the Python package scikit-learn. Our training process involves 1000 epochs with a learning rate of 2e-5, incorporating early stopping. The activation function employed is ReLU <cit.>, and the optimizer used is Adam <cit.>. Subsequently, we apply the trained model on the test set to calculate weighted average F1-scores obtained from a single run, which are presented in Table <ref>. The training and evaluation process for Authorship Attribution (AA) is illustrated in Figure <ref>. Character and word n-grams features remain a valuable tool for AA, as their strength lies in capturing stylistic features like word choice, punctuation, and common phrases, often unique to an author. N-gram features, encompassing character sequences, spelling preferences, and even made-up words, remain consistent even with smaller datasets and paraphrasing. This robustness makes them effective for identifying rare words, misspellings, and author-specific quirks. However, they lack the ability to capture the semantic and pragmatic aspects of meaning or structural organization of text (which we do not address in this paper), both essential aspects of an author's overall style. On the other hand, MFLM document vectors address both semantic and pragmatic aspects by encoding Figurative Language (FL) features within sentences. This approach allows for a more nuanced comparison of texts, considering not only the use of metaphors, similes, and other rhetorical devices by the author, but also their unique combinations. This could potentially lead to a more effective generalization across various writing styles and genres. Prior work on FL and metaphors <cit.> has noted that authors often blend their FL constructs in a seemingly haphazard manner. Rather than conforming to any discernible "logic", this pattern seems to be a reflection of the author's individual style, as suggested by our findings. While quite powerful, FL-based features don't encompass all facets of an individual's writing style. We continue to investigate the structural aspects of texts, which is one area that remains under study. On the other end of the spectrum, we must also account for information contained in subword patterns, an area where n-grams excel. Additionally, typos, grammatical errors, and paraphrasing can significantly impact MFLM embeddings, potentially resulting to misleading attributions. Furthermore, we conducted experiments to explore the impact of integrating Figurative Language (FL) features by combining our MFLM encoding with baseline document vectors and subsequently training new MLP classifiers. Our findings demonstrate a consistent boost in performance across nearly all cases when using the combined features, thereby supporting our second research question. In Table <ref>, we also include state-of-the-art (SOTA) results, as reported in <cit.> and <cit.>. The methodologies vary across implementations, but character n-grams, part-of-speech n-grams, and summary statistics typically form the input for an ensemble of logistic regression classifiers, achieving SOTA in the AA task. It is important to note that in <cit.>, the authors report macro-averaged accuracy, while in <cit.>, the evaluation metric is macro-averaged F1-score. Although a direct comparison may not be feasible due to these differing metrics, these results offer valuable insight into the task's complexity. 
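As a reference point, a minimal version of the attribution classifier described above can be written with scikit-learn as follows. The hyperparameters match those reported (one hidden layer of 1024 units, ReLU, Adam, learning rate 2e-5, up to 1000 epochs with early stopping); anything not stated, such as the validation fraction used for early stopping, is left at library defaults, and the function names are illustrative.

import numpy as np
from sklearn.neural_network import MLPClassifier

def train_attributor(X_train, y_train):
    # Single hidden layer of 1024 units, ReLU activation, Adam optimizer,
    # learning rate 2e-5, up to 1000 epochs with early stopping.
    clf = MLPClassifier(
        hidden_layer_sizes=(1024,),
        activation="relu",
        solver="adam",
        learning_rate_init=2e-5,
        max_iter=1000,
        early_stopping=True,
    )
    clf.fit(X_train, y_train)
    return clf

def combine(mflm_docs, baseline_docs):
    # Feature combination is plain concatenation of per-document vectors.
    return np.hstack([mflm_docs, baseline_docs])

Here X_train holds one document vector per text (MFLM embeddings, a baseline, or their concatenation via combine) and y_train holds the author labels.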
§ CONCLUSION This study investigated two research questions regarding the detection and application of Figurative Language (FL) features in machine learning. Firstly, we explored whether a multi-task model trained to simultaneously detect multiple FL features (Metaphor, Simile, Idiom, Sarcasm, Hyperbole, and Irony) could outperform individual models specialized for each feature. By leveraging RoBERTa-Large and a multi-label training dataset derived from binary classifiers, our Multi-task Figurative Language Model (MFLM) achieved superior performance on 8 out of 13 test sets, particularly excelling in detecting Simile, Idiom, Irony, and Hyperbole. This finding highlights the increased effectiveness of a unified approach for comprehensive FL detection. Secondly, we examined the potential of incorporating FL features to enhance performance in Authorship Attribution (AA) tasks. Utilizing three diverse AA datasets and Multi-Layer Perceptron (MLP) classifiers, we evaluated the contribution of MFLM sentence embeddings alongside various baseline features like Stylometric features, SBERT Embdeddings and word and character n-gram vectors. The results showed the competitive performance achieved by MFLM embeddings alone, while their combination with other features yielded consistent performance improvements across nearly all cases. This strongly supports the second research question, indicating the positive impact of integrating FL features in AA tasks. Our study offers valuable insights into the effectiveness of multi-task learning for comprehensive FL detection and the potential of FL features to improve AA tasks. Further research could explore the applicability of MFLM to additional NLP tasks, such as sentiment analysis and information retrieval. Moreover, future studies could investigate the impact of incorporating additional FL features into a single classification model, such as personification, metonymy, onnomatopoeia, etc., as well as domain-specific knowledge for even more refined FL detection and application. § ACKNOWLEDGEMENTS This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200002 and the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001121C0186. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. § ETHICAL CONSIDERATIONS AND LIMITATIONS In this paper, we investigate the efficacy of training a multi-task classification model to detect Figurative Language (FL) features compared to specialized binary models. In this work, we also explore leveraging the multi-task model embeddings for the Authorship Attribution (AA) task. One potential limitation of our study arises from the combination of different datasets for the various Figurative Language (FL) features under consideration. The quality of annotations across these datasets is not uniform, with some lacking annotation manuals or relying on automatic and crowd-sourced approaches for dataset creation. This inconsistency can introduce errors into our model. 
Furthermore, the datasets, while publicly available, may contain inherent biases due to the lack of clear instructions for annotating literal sentences and potential variability in human annotator judgments. In the process of constructing annotated corpora for training machine learning algorithms for automatic figurative language detection, it's crucial to consider the interpretive discrepancies between experts and non-experts. The annotations found in the collection of datasets used in this study are all taken as ground truth of equal importance, potentially leading towards a biased FL detection model. An expert, with their nuanced understanding, can identify subtle metaphors and idioms that may elude an ordinary reader. However, non-experts, influenced by their unique cultural backgrounds and personal experiences, may interpret figurative language differently <cit.>. For instance, certain phrases may have specific connotations in one culture but be meaningless in another. Similarly, a person's familiarity with a subject matter can greatly influence their understanding of related figurative language. Therefore, to ensure a more accurate and comprehensive analysis of figurative language, these factors must be taken into account. Future work will address this issue by conducting qualitative and quantitative analyses on the annotated datasets. In our methodology, we employ specialized binary models for each feature, trained on our combined datasets, to predict figurative language labels for the training examples used in our multi-task model. This approach, while effective, can lead to error propagation, resulting in incorrect predictions from our model. However, our evaluation and manual error analysis indicate that our multi-task model's predictions are often reasonable, with errors frequently attributable to incomplete human annotations. The second part of our study applies the embeddings from our multi-task FL model to the AA task. We train MLP classifiers using document vectors as features on three publicly available datasets, each focusing on a different topic: movie reviews, corporate/industrial topics, and fan fiction. However, these topics are not very diverse, which could introduce bias into the datasets with respect to authorship. For instance, in the fan fiction dataset, some authors may exclusively write "Harry Potter" fan fiction, which could skew the evaluation of different features. Lastly, it is important to note that the predictions of deep neural language models, such as the ones used in our study, are often difficult to interpret and explain. This lack of interpretability is a common challenge in the field and is another limitation to consider in our work. § APPENDIX §.§ Appendix: Figurative Language Datasets Here we present additional details regarding our 13 figurative language datasets. In Tables <ref> and <ref> we show the number of examples per class label for the train/test sets of all datasets. The datasets that had predefined train/test splits are: FLUE, iSarcasm, and Irony SemEval 2018. For the remaining datasets, we reserve a 10% stratified sample for testing. In the following paragraphs we will be discussing some interesting datasets. The only corpus in our collection that is truly multi-labeled is the iSarcasm dataset. The curators of iSarcasm created the collection by recruiting Twitter users and asking them to specify one sarcastic and three non-sarcastic tweets from their posted messages. 
Then, they asked the participants to provide a literal rephrase for every sarcastic message that conveys the same meaning. Furthermore, for every sarcastic message, the authors perform a second annotation stage where they further label these messages with irony, overstatement (hyperbole), understatement, satire, and rhetorical questions. In our work, we assume that the rephrases provided by the original participants are indeed literal sentences, however, we do not make the same assumption for the non-sarcastic messages that were also provided. In addition, since in our research we focus on six figurative language types, we ignore labels that are outside of this set. In such cases, we retain the sentence with only the sarcasm / not_sarcasm, irony / not_irony or hyperbole / not_hyperbole labels. The Sarcasm Corpus is a multi-class dataset centered around the binary classification task of sarcastic sentences. However, an extension of the dataset (separate file) contains sarcastic and non-sarcastic sentences that all have hyperbole. Since this was an addition to the main corpus, we cannot assume that the remaining files are completely devoid of hyperbole. Therefore, the hyperbolic sentences also have sarcasm / not_sarcasm labels, but not the other way around. The FLUTE dataset is also multi-class dataset, which means that each example is either metaphor, simile, sarcasm, or idiom. Each figurative sentence is paired with the two literal paraphrases, one aligning with the actual meaning of the figurative sentence, and the other communicating the opposite meaning. For instance, the figurative sentence "After a glass of wine, he loosened up a bit" will have a literal counterpart "After a glass of wine, he relaxed up a bit" and the opposite paraphrase would be "After a glass of wine, he stressed up a bit". PIE-English is another interesting dataset. Here, the authors have automatically created a collection of sentences that contain possible idiomatic expressions. With further manual annotation efforts, they annotated each sentence whether its literal, therefore not idiomatic, or the idiom is constructed using euphemism, metaphor, personification, simile, parallelism, paradox, hyperbole oxymoron, or irony. Thus, every figurative sentence is an idiom plus an other figurative language class. In this work we focus on six figurative language types, so we ignore labels that are outside of this set. In such cases, we retain the sentence with only the idiom label. §.§ Appendix: Figurative Language Classification Binary Training Sets To train specilized binary models to detect FL features, we merge datasets annotated with examples relevant to each specific feature. For instance, to train a classifier for metaphors, we aggregate data from PIE-English, FLUTE, LCC, and MOH datasets. Similarly, for simile classification, we gather data from PIE-English, FLUTE, MSD23, and Figurative Comparisons datasets. Table <ref> shows the number of positive, negative and literal examples used to train each binary classifier. §.§ Appendix: Figurative Language Classification Multi-label Training Set We use the fine-tuned specialized binary classification models to automatically tag our training corpora in a multi-label format. Table <ref> shows the number of examples per figurative language class, as predicted by the binary classifiers. This dataset forms the basis of training our multi-task model. At a later step, this dataset gets split in train/dev set, where a 10% stratified sample is reserved for development. 
§.§ Appendix: Figurative Language Classification Error Analysis In this subsection of the appendix, we present additional randomly selected examples where the model MFLM and binary model predictions do not align with human annotations. These additional examples are presented in Table <ref>. §.§ Appendix: Authorship Attribution Baselines In this section of the appendix we provide further details regarding the Stylometric features of our Authorship Attribution (AA) baseline approach. We implement 52 text metrics using the cophi[cophi: <https://github.com/cophi-wue/cophi-toolbox>] and textstat[textstat: <https://github.com/textstat/textstat>] Python packages. These metrics are used to form a document vector with 52 stylometric features. In the Table <ref> we list the feature names along with implementation notes.
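To make the stylometric baseline concrete, the sketch below computes a small illustrative subset of such metrics, mixing two hand-rolled lexical statistics with readability scores from textstat; the full 52-feature vector used in our experiments is the one listed in Table <ref>, and the particular selection and function name here are only an example.

import textstat

def stylometric_vector(text):
    words = text.split()
    lexical = [
        len({w.lower() for w in words}) / max(len(words), 1),   # type-token ratio
        sum(len(w) for w in words) / max(len(words), 1),        # mean word length
    ]
    readability = [
        textstat.flesch_reading_ease(text),
        textstat.flesch_kincaid_grade(text),
        textstat.gunning_fog(text),
        textstat.smog_index(text),
        textstat.automated_readability_index(text),
        textstat.dale_chall_readability_score(text),
        textstat.difficult_words(text),
        textstat.syllable_count(text),
        textstat.lexicon_count(text),
        textstat.sentence_count(text),
    ]
    return lexical + readability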
http://arxiv.org/abs/2406.09278v1
20240613161659
Boosting information transfer in a quantum correlated medium
[ "Finn Schmolke", "Etienne Springer", "Eric Lutz" ]
quant-ph
[ "quant-ph" ]
Institute for Theoretical Physics I, University of Stuttgart, D-70550 Stuttgart, Germany § ABSTRACT Sharing and receiving information plays a pivotal role in science and technology. Quantum communication relies on the principles of quantum mechanics to transmit information in a nonclassical manner. Existing quantum communication protocols are commonly based on shared entangled states between sender and receiver, while the transmitting medium is classical. We here demonstrate that information transfer may be enhanced in a quantum correlated medium without entanglement distribution. We concretely show that nonclassical correlations, with nonzero discord, between the first two spins of a spin chain that acts as a quantum wire can increase the information flow and reduce the propagation time. We relate this effect to the breaking of the spatial symmetry of the out-of-time-order correlator that characterizes the spread of information through the medium. Boosting information transfer in a quantum correlated medium Eric Lutz June 17, 2024 ============================================================ Communication is based on the transfer of information between one point and another <cit.>. A notable observation is that both communication and information are inherently physical, since information is transmitted by physical means, through a physical medium, for instance, in the form of an electric current or a light wave <cit.>. They are therefore subjected to the laws of physics, in particular, to those of quantum theory. Quantum mechanics has been shown to restrict the flow of entropy/information, that is, the number of bits sent per unit time <cit.>. For a single channel, the entropy flux İ is upper bounded by the energy current Ė according to İ^2 ≤πĖ/3 ħ <cit.>. There is hence a minimum energy cost per bit associated with communication. This bound is believed to be independent of the physical properties of the transmitting medium, and thus universal <cit.>. It is intimately related to the existence of quantum speed limits that impose strong constraints on the evolution time of any quantum system <cit.>. We here show that nonclassical correlations may boost entropy currents in a medium. Quantum physics, thus, not only fundamentally constrains information flow, it can also enhance it. We focus in the following on information transfer along a spin chain that acts as a quantum wire <cit.>. Whereas photons are ideal carriers of information over long distances, it is not straightforward to convert information between two physically different photonic qubits. Spin chains are a prominent, solid-state alternative for short- and mid-range communication, for example, to connect quantum processors or quantum registers <cit.>. In these systems, information is transmitted dynamically from one end of a chain to the other, without the requirement for any external control. We consider for concreteness the perfect state transfer protocol introduced in Ref. <cit.>, and implemented experimentally in Ref. <cit.>, where the magnetic interactions between neighboring spins are engineered such that information is transferred with unit fidelity. We demonstrate that the transmission of a qubit end-state along the spin chain is significantly improved in the presence of nonclassical correlations, with nonzero discord <cit.>, between only the first two spins of the chain, that is, sender and receiver are initially uncorrelated. 
This enhancement is associated with a reduced arrival time and a larger value of the maximum entropy flux. We additionally provide a sufficient condition for this increase of communication rate to occur, and illustrate it with a general class of two-qubit X states that can be treated analytically <cit.>. It is worth emphasizing that this quantum improvement strongly differs from usual quantum communication protocols, such as quantum key distribution, quantum teleportation and super-dense coding, that usually rely on a shared entangled state between sender and receiver <cit.>. Quantum correlated spin chain. We consider a quantum wire consisting of a linear XY chain with N qubits and inhomogeneous nearest-neighbor interactions <cit.> H = ω∑_j=1^N σ^z_j + ∑_k=1^N-1J_k (σ^x_kσ^x_k+1+σ^y_kσ^y_k+1), where ω is the level spacing and J_k= (λ/4)√(k·(N-k)) are mirror-symmetric interaction constants between adjacent qubits, with coupling strength λ (we put ħ = 1). We choose λ = 4J/N, such that the maximum coupling constant is max{J_k} = J, independent of N, where J sets the characteristic energy scale <cit.>. The first spin (j=1) acts as the sender and the last one (j=N) as the receiver. Information is transmitted along the wire by dynamically transferring the state ρ^t=0_1 of the first spin to the state ρ^t=τ_u_N of the last spin in time τ_u. The couplings J_k are engineered such that after time τ_u = π/λ, state transfer is perfect, with unit fidelity, F(ρ^0_1,ρ^τ_u_N) = Tr [√(√(ρ^0_1)ρ^τ_u_N√(ρ^0_1))]^2=1 <cit.>. For instance, if the first spin is initially in the excited state |1⟩⟨ 1| and all the remaining spins are in their ground state |0⟩⟨ 0|, then, after time τ_u, all the qubits will be in their ground state, except the last one which will be in the excited state. The time τ_u has been shown to correspond to the minimal possible evolution time, hence, to the quantum speed limit of perfect state transfer in one dimension <cit.>. Following Refs. <cit.>, we begin by taking the state of the first qubit to be thermal at inverse temperature β, ρ_1^0=ρ^th_1 = exp(-βωσ^z_1)/Z, with partition function Z (we shall relax this assumption later on). In this case, the information current is given by the von Neumann entropy rate (in bits) <cit.>. Instantaneous information and energy fluxes at the receiver are accordingly İ = -∂_t [ρ^t_Nlog(ρ^t_N)] and Ė = ∂_t [ρ^t_Nωσ^z_N]. We further prepare the second spin (j=2) in a thermal state at the same inverse temperature, ρ_2^0=ρ^th_2 = exp(-βωσ^z_2)/Z (we shall again relax this assumption in the following), and initialize the remaining qubits in their respective ground states. The total initial state is then of the form ρ(0) = ρ^0_12⊗0^⊗ (N-2). We examine two different scenarios: (i) in the first case, the first two qubits are uncorrelated, ρ^0_12, u = ρ_1^th⊗ρ_2^th, whereas (ii) in the second case, they are quantum correlated, ρ^0_12,c = ρ^th_1 ⊗ρ^th_2 + χ, with a correlation term χ = -iα(σ^+_1 σ^-_2 - σ^-_1 σ^+_2), with amplitude α and spin ladder operators σ_j^± = σ_x ± iσ_y. This correlated state is such that the respective reduced density operators coincide with the thermal density matrices of the uncorrelated state <cit.>; this allows one to directly compare the two situations. The parameter α controls the amount of quantum correlations; it is upper bounded, α≤1/(4cosh^2β)≤1/4, to ensure positivity of the density matrix. 
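As an illustration (not part of the original analysis), the following minimal numpy sketch builds the correlated two-qubit state ρ^0_12,c = ρ^th_1 ⊗ρ^th_2 + χ and checks the two properties just stated: positivity at the maximal amplitude α = 1/(4cosh^2βω) and the fact that the reduced density matrices coincide with the thermal marginals. The ladder operators are taken as σ^± = (σ^x ± iσ^y)/2, the convention consistent with the stated positivity bound; the values of β and ω are arbitrary.

```python
import numpy as np

# Pauli matrices; ladder operators with the 1/2 convention, sigma^pm = (sigma^x +/- i sigma^y)/2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2

def thermal(beta, omega=1.0):
    # single-qubit thermal state exp(-beta*omega*sigma_z)/Z
    rho = np.diag(np.exp(-beta * omega * np.array([1.0, -1.0])))
    return rho / np.trace(rho)

beta, omega = 0.5, 1.0
alpha = 1.0 / (4 * np.cosh(beta * omega) ** 2)        # largest alpha allowed by positivity

rho_u = np.kron(thermal(beta, omega), thermal(beta, omega))   # uncorrelated rho_1^th (x) rho_2^th
chi = -1j * alpha * (np.kron(sp, sm) - np.kron(sm, sp))       # correlation term chi
rho_c = rho_u + chi                                           # correlated state rho^0_{12,c}

# positivity: the smallest eigenvalue is ~0 at alpha = 1/(4 cosh^2(beta*omega))
print("min eigenvalue:", np.linalg.eigvalsh(rho_c).min())

# the reduced states coincide with the thermal marginals (chi is traceless on either qubit)
r = rho_c.reshape(2, 2, 2, 2)
rho_1 = np.einsum('ikjk->ij', r)   # partial trace over qubit 2
rho_2 = np.einsum('kikj->ij', r)   # partial trace over qubit 1
print(np.allclose(rho_1, thermal(beta, omega)), np.allclose(rho_2, thermal(beta, omega)))
```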
The state ρ^0_12,c has nonzero geometric quantum discord defined as D_g(ρ)=min_σ∈𝒞 2‖ρ-σ‖^2, where 𝒞 is the set of all classically correlated states <cit.>. A nonzero value of the geometric discord indicates nonclassical correlations <cit.>. We find that the geometric quantum discord increases quadratically with the amplitude of the correlation term, D_g(ρ^0_12,c) = 2α^2. The presence of initial nonclassical correlations strongly affects the propagation of information and energy in the medium. <Ref> examines the impact of local quantum correlations on information transmission along the XY spin chain (<ref>). <Ref>a displays the fidelity F(ρ^0_1,ρ^t_N) between emitter and receiver as a function of time for N=10 qubits and α = 1/4. While the first spin state is exactly transmitted to the final qubit in time τ_u for uncorrelated initial states (blue), perfect state transfer is achieved much faster, after time τ_c, for correlated initial conditions (red). The time τ_u, therefore, no longer corresponds to the ultimate quantum speed limit, as for the uncorrelated state <cit.>. Interestingly, faithful transfer occurs a second time at time τ_u. This enhanced transmission is accompanied by an increase of the maximum information flux İ_max, as well as of the maximum energy current Ė_max (arrows in <ref>b). We will see below that those boosts are directly related to the amount of quantum correlations in the initial state. We also observe that the information-energy flow inequality, İ^2 ≤πĖ/3, is obeyed in both instances for t≤τ_u, respectively t≤τ_c (<ref>b). After full information transfer, the roles of sender and receiver are switched, and the currents are reversed. We mention that the fidelity, as well as the information and energy fluxes can be evaluated analytically for the spin chain (<ref>) with the help of the Jordan-Wigner transformation (Supplemental Material). <Ref> further shows the temporal evolution of the information I and of the energy E of the receiving qubit ρ_N. In the uncorrelated case, the two quantities reach the initial values of the sender, I= ln 2 and E = 0, simultaneously at time τ_u. By contrast, information and energy display a different time dependence in the presence of correlations. The von Neumann entropy behaves similarly to the fidelity F(ρ^0_1,ρ^t_N), attaining its initial, and maximum, value at time τ_c. However, the energy overshoots the initial energy of the sender, reaching its maximum after τ_c, until it gets reflected at the end of the chain and flows back such that it coincides with the initial energy of the sender at time τ_u, like in the uncorrelated case. This remarkable behavior is an immediate consequence of the initial quantum correlations. Enhanced information transmission. In order to quantitatively analyze the boosted transfer of information along the quantum wire, we next use the out-of-time-order correlator (OTOC) <cit.>. We specifically consider the expectation value of the squared commutator of two local, spatially separated (Hermitian) operators W(x,t) and V(y,0), C(x,y,t) = ⟨ [W(x,t),V(y,0)]^†[W(x,t),V(y,0)]⟩ = 2 - 2 Re[F(x,y,t)] for unitary dynamics <cit.>. The out-of-time-order correlator, F(x,y,t) = ⟨ W(x,t)V(y,0)W(x,t)V(y,0)⟩, characterizes the spread of quantum information along the spin chain, commonly referred to as information scrambling <cit.>.
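Anticipating the concrete probe operators chosen next (σ^z probes with reference site y = 3), the following rough numerical sketch, which is not from the paper, shows how the squared commutator can be evaluated by exact diagonalization for a short chain. It uses N = 8, infinite-temperature marginals (β = 0) for brevity, and explicit sign and basis conventions that may differ from the authors'; the printed mean positions therefore only illustrate the kind of spatial asymmetry discussed below and are not expected to reproduce the quoted figures.

```python
import numpy as np
from scipy.linalg import expm

def op_at(single, site, N):
    # embed a single-qubit operator at `site` (0-based) in an N-qubit chain
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, single if j == site else np.eye(2))
    return out

N, J, omega, alpha = 8, 1.0, 1.0, 0.25
lam = 4 * J / N
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2

H = omega * sum(op_at(sz, j, N) for j in range(N))
for k in range(1, N):                                  # couplings J_k = (lam/4) sqrt(k (N-k))
    Jk = lam / 4 * np.sqrt(k * (N - k))
    H += Jk * (op_at(sx, k - 1, N) @ op_at(sx, k, N) + op_at(sy, k - 1, N) @ op_at(sy, k, N))

t = (np.pi / lam) / 4                                  # tau_u / 4
U = expm(-1j * H * t)
V = op_at(sz, 2, N)                                    # probe operator at reference site y = 3

def profile(rho):
    # C(x, 3, t) = Tr[rho [W(x,t), V]^dagger [W(x,t), V]] for x = 1..N
    out = []
    for x in range(N):
        Wt = U.conj().T @ op_at(sz, x, N) @ U
        c = Wt @ V - V @ Wt
        out.append(np.real(np.trace(rho @ c.conj().T @ c)))
    return np.array(out)

# initial states: first two qubits maximally mixed (beta = 0), with / without the chi term
rest = np.zeros((2 ** (N - 2), 2 ** (N - 2)), dtype=complex)
rest[-1, -1] = 1.0                                     # remaining qubits in the local sigma_z = -1 state
rho12_u = np.eye(4, dtype=complex) / 4
rho12_c = rho12_u - 1j * alpha * (np.kron(sp, sm) - np.kron(sm, sp))

x = np.arange(1, N + 1)
for label, rho12 in [("uncorrelated", rho12_u), ("correlated", rho12_c)]:
    C = profile(np.kron(rho12, rest))
    print(label, "mean position of C(x,3,tau_u/4):", (x * C).sum() / C.sum())
```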
For concreteness, we choose the two single-site Pauli operators W(x,t) = σ^z_x(t) and V(y=3,0) = σ^z_3(0); the time-evolved Heisenberg operator W(x,t) can thus be regarded as a local probe (at site x and time t) of the information emitted by the (correlated) sender at t=0. <Ref>d compares the stroboscopic, wave-like evolution of the correlation function C(x,3,t) at three different times, τ_u/4, τ_u/2 and 3τ_u/4, for uncorrelated (blue) and correlated (red) initial states; we have evaluated C(x,3,t) numerically for a chain of N=250 spins <cit.>. The correlator exhibits three peaks that travel from left to right with, in both instances, the same wavefront which moves close to the Lieb-Robinson velocity v_LR = 2J. The latter velocity provides a fundamental upper bound to the spread of information in locally interacting quantum systems <cit.>. Information can hence not propagate faster than v_LR. However, we note that the correlator is no longer spatially symmetric in the presence of quantum correlations (red). The weight of the distribution C(x,3,t) is indeed shifted towards the wave front, with higher peaks near the wave front compared to the case without initial correlations. For example, at t = τ_u/4, quantum correlations increase the mean x = ∫ dx x C(x,3, τ_u/4) /∫ dx C(x,3, τ_u/4) by 8.15% from 37.41 to 40.46 for the parameters of <ref>d. As a consequence, more information flows per unit time towards the receiving end of the chain, giving rise to the faster state transfer seen in <ref>abc. Symmetries of the out-of-time-order correlator are further elaborated on in the Supplemental Material. Let us now investigate the role of initial nonclassical correlations in more detail. <Ref> clearly indicates that both the amplified maximum information flow, İ_c,max/ İ_u,max (<ref>a) and the enhanced transfer time, τ_u/τ_c (<ref>b), monotonically increase with the amount of quantum correlations, quantified by the geometric discord of the initial state D_g(ρ^0_12,c) (dark red, dashed). We may hence conclude that nonclassical correlations are a local physical resource that can boost communication rates in a wire. Interestingly, the state ρ^0_12,c is separable with zero concurrence <cit.> for all values of α. We observe, in fact, no information transmission boost when a maximally entangled Bell state is used (Supplemental Material). This quantum advantage is therefore closer to the deterministic quantum computation with one qubit (DQC1) model <cit.>, where the quantum resource for computational speedup is associated with discord, and not with entanglement <cit.>. Criterion for enhanced transmission. The above effect is not restricted to correlated thermal states or to XY spin chains. We next consider a quantum spin chain of the form H = ω∑_j σ^z_j + H_int with general coupling Hamiltonian H_int, and an initial correlated state ρ^0_12,c = ρ_u + χ with diagonal part ρ_u. We furthermore separate the unitary time evolution into uncorrelated and correlated parts, ρ_c(t) = ρ_u(t) + U (χ⊗0^⊗ (N-2)) U^† with U=exp(-iHt). In order to boost state transfer, we demand that, at short times, more probability is shifted towards the state |ψ_2⟩ = |01⟩⊗|0⟩^⊗ (N-2) than in the uncorrelated case, that is, ⟨ψ_2|ρ̇_c(t)|ψ_2⟩> ⟨ψ_2|ρ̇_u(t)|ψ_2⟩. Since the local initial states are equal in both cases, this condition implies that ⟨01|[H_int,χ]|01⟩ > 0 (Supplemental Material). Note that only the interaction Hamiltonian matters because the diagonal part does not connect different sites of the chain. 
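The condition can be checked numerically on the first two sites alone. The sketch below (not from the paper) writes the matrix element with the explicit factor -i used in the Supplemental Material, so that it is real. The coupling J_1 and amplitude α are illustrative; which of the two one-excitation states gains probability depends on the sign conventions adopted for α and the ladder operators, so the convention-independent content verified here is that the commutator does not vanish and that the two diagonal matrix elements are equal and opposite.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2      # ladder operators, 1/2 convention

J1, alpha = 1.0, 0.25                                # illustrative coupling and correlation amplitude
H_int = J1 * (np.kron(sx, sx) + np.kron(sy, sy))     # XY coupling of the first two sites
chi = -1j * alpha * (np.kron(sp, sm) - np.kron(sm, sp))

comm = H_int @ chi - chi @ H_int
print("||[H_int, chi]|| =", np.linalg.norm(comm))    # nonzero: the correlation term does not commute

# diagonal elements of -i[H_int, chi] on the two one-excitation product states; they come out
# equal and opposite, so at short times probability is pushed towards one of the two sites
g = np.array([0, 1], dtype=complex)                  # local ground state (sigma_z = -1)
e = np.array([1, 0], dtype=complex)                  # local excited state (sigma_z = +1)
M = -1j * comm
for label, ket in [("<10|.|10>", np.kron(e, g)), ("<01|.|01>", np.kron(g, e))]:
    print(label, np.real(np.vdot(ket, M @ ket)))
```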
A necessary requirement for enhanced transmission is therefore [H_int, χ] ≠ 0. Boosted entropy flux is thus generally possible whenever the correlation term χ does not commute with the interaction Hamiltonian H_int. The latter criterion underscores the quantum origin of the phenomenon. As an illustration, we consider the quantum XY chain (<ref>) with a correlated state given by an X state <cit.> ρ^0_12,c= ρ_X = [ a 0 0 w; 0 b z 0; 0 z^∗ c 0; w^∗ 0 0 d ], with |z| ≤√(bc) and |w| ≤√(ad). Such X states define a general class of quantum correlated bipartite qubit states that, for instance, include Werner states and Bell states <cit.>. The corresponding reduced state of the first spin to be transferred is ρ^0_1 = _2[ρ_X] =diag(a+b,c+d). Condition (<ref>) requires that |z| ≠ 0, while the entries (w,w^∗,a,d), that belong to the commuting part, may be chosen arbitrarily. The concurrence of the X states is given by C(ρ_X) = 2 max{0,z-√(ad),w-√(bc)} <cit.>, whereas their geometric discord reads D_g(ρ_X) = min{4(w^2+z^2), (a-c)^2+(b-d)^2+2(w+z)^2}/2 <cit.>. X states are hence entangled if and only if bc < |w|^2 or ad < |z|^2, but both conditions cannot be satisfied simultaneously <cit.>. States with w = w^∗ = 0 and ad = |z|^2 = α^2, with α < 1/4 (like the thermal correlated states considered before) are thus not entangled. However, in general D_g(ρ_X)= 2α^2 for z = iα, α≤ 1/2, revealing the presence of nonclassical correlations. <Ref> presents the amplified maximum information flow, İ_c,max/ İ_u,max (<ref>a), and the enhanced transfer time, τ_u/τ_c (<ref>b), for the X state (<ref>) (solid lines) as a function of the geometric quantum discord D_g(ρ_X), for N= 5, 20 and 100 spins, when α is varied. As for the thermal correlated state, ρ^0_12,c = ρ^th_1 ⊗ρ^th_2 + χ, both quantities grow monotonically with the amount of nonclassical correlations. However, since D_g(ρ_X) can take larger values when the X state is entangled, the quantum enhancement is more pronounced. For example, for the N=11 sites of the state transfer experiment reported in Ref. <cit.>, a maximal geometric discord of 1/2 would increase the maximal information flux by 32.6% and decrease the transfer time by 18.7%. <Ref>b additionally shows that these boosts are accompanied by an increased shift toward the wavefront, Δx̅ = x̅_c - x̅_u, of the averaged out-of-time-order correlator C(x,3,τ_u/4) (purple diamonds), confirming that this effect finds its origin in the breaking of the spatial symmetry of the out-of-time-order correlator by the quantum correlations. The increased maximum entropy current and transfer time both exhibit a square-root dependence on the geometric quantum discord D_g, that is, a linear dependence of the correlation strength α. We additionally note that due to the singular derivative of the square root at the origin, only a small amount of quantum discord is required to achieve a significant transmission improvement. Conclusions. Entanglement is a fundamental resource for quantum applications that outperform their classical counterparts <cit.>. However, entangled states are a costly and fragile resource that is difficult to prepare and maintain over long distances, owing to the detrimental effect of decoherence <cit.>. We have shown that information propagation can be enhanced in a quantum correlated medium without requiring entanglement distribution between emitter and receiver. 
Remarkably, local nonclassical correlations with nonzero discord between only the first two qubits at the emitter suffice to boost the information flow for spin chains of arbitrary length. While information cannot propagate faster than the Lieb-Robinson velocity, the breaking of the spatial symmetry of the out-of-time-order correlator induced by the quantum correlations leads to an augmented entropy current. Our results indicate that quantum correlated media are a useful resource for quantum enhanced communication. Acknowledgements. We acknowledge financial support from the Vector Foundation and from the German Science Foundation (DFG) under project FOR 2724. 99 bri65 L. Brillouin, Science and Information Theory, (Academic Press, San Diego, 1965). ger00 N. Gershenfeld, The Physics of Information Technology, (Cambridge University Press, Cambridge, 2000). yu01 F. T. S. Yu, S. Jutamulia, and S. Yin, Introduction to Information Optics, (Academic Press, San Diego, 2001). fra18 M. Franceschetti, Wave Theory of Information, (Cambridge University Press, Cambridge, 2018). leb66 D. S. Lebedev and L. B. Levitin, Information transmission by electromagnetic field, Inf. Control 9, 1 (1966). bow67 J. Bowen, On the capacity of a noiseless photon channel, IEEE Trans. Inform. Theory 13, 230 (1967). pen83 J. B. Pendry, Quantum limits to the flow of information and entropy, J. Phys. A Math. Gen. 16, 2161 (1983). bek88 J. D. Bekenstein, Communication and energy, Phys. Rev. A 37, 3437 (1988). bek90 J. D. Bekenstein and M. Schiffer, Quantum limitations on the storage and transmission of information, Int. J. Mod. Phys. C 1, 355 (1990). yue92 H. Yuen and M. Ozawa, Ultimate information carrying limit of quantum systems, Phys. Rev. Lett. 70, 363 (1992). cav94 C. M. Caves and P. D. Drummond, Quantum limits on Bosonic communication rates, Rev. Mod. Phys. 66, 481 (1994). ble00 M. P. Blencowe and V. Vitelli, Universal quantum limits on single-channel information, entropy, and heat flow, Phys. Rev. A 62, 052104 (2000). def17 S. Deffner and S. Campbell, Quantum speed limits: From Heisenbergs uncertainty principle to optimal quantum control, J. Phys. A: Math. Theor. 50, 453001 (2017). bos23 S. Bose, Quantum communication through an unmodulated spin chain, Phys. Rev. Lett 91, 207901 (2003). chr04 M. Christandl, N. Datta, A. Ekert, and A. J. Landahl, Perfect State Transfer in Quantum Spin Networks, Phys. Rev. Lett. 92, 187902 (2004). fit06 J. Fitzsimons and J. Twamley, Globally Controlled Quantum Wires for Perfect Qubit Transport, Mirroring, and Computing, Phys. Rev. Lett. 97, 090502 (2006). kay07 A. Kay, Unifying Quantum State Transfer and State Amplification, Phys. Rev. Lett. 98, 010501 (2007). cap07 P. Cappellaro, C. Ramanathan, and D. G. Cory, Simulations of Information Transport in Spin Chains, Phys. Rev. Lett. 99, 250506 (2007). fra08 C. Di Franco, M. Paternostro, and M. S. Kim, Perfect State Transfer on a Spin Chain without State Initialization, Phys. Rev. Lett. 101, 230502 (2008). ban11 L. Banchi, A. Bayat, P. Verrucchi, and S. Bose, Nonperturbative Entangling Gates between Distant Qubits Using Uniform Cold Atom Chains, Phys. Rev. Lett. 106, 140501 (2011). yao11 N. Y. Yao, L. Jiang, A. V. Gorshkov, Z.-X. Gong, A. Zhai, L.-M. Duan, and M. D. Lukin, Robust Quantum State Transfer in Random Unpolarized Spin Chains, Phys. Rev. Lett. 106, 040505 (2011). god12 C. Godsil, S. Kirkland, S. Severini, and J. Smith, Number-Theoretic Nature of Communication in Quantum Spin Systems, Phys. Rev. Lett. 109, 050502 (2012). yao12 N. Y. 
Yao, L. Jiang, A. V. Gorshkov, P. C. Maurer, G. Giedke, J. I. Cirac, and M. D. Lukin, Scalable architecture for a room temperature solid-state quantum information processor, Nature Commun. 3, 800 (2012). ajo13 A. Ajoy and P. Cappellaro, Quantum simulation via filtered Hamiltonian engineering: application to perfect quantum transport in spin networks, Phys. Rev. Lett. 110, 220503 (2013), sah15 S. Sahling, G. Remenyi, C. Paulsen, P. Monceau, V. Saligrama, C. Marin, A. Revcolevschi, L. P. Regnault, S. Raymond, and J. E. Lorenzo, Experimental realization of long-distance entanglement between spins in antiferromagnetic quantum spin chains, Nature Phys. 11, 255 (2015). mar16 O. V. Marchukov, A. G. Volosniev, M. Valiente, D. Petrosyan, and N. T. Zinner, Quantum spin transistor with a Heisenberg spin chain, Nature Commun. 7, 13070 (2016). bos07 S. Bose, Quantum communication through spin chain dynamics: An introductory overview. Contemp. Phys. 48, 13 (2007). nik14 G. M. Nikolopoulos and I. Jex, Quantum State Transfer and Network Engineering, (Springer, Berlin, 2014). cha16 R. J. Chapman, M. Santandrea, Z. Huang, G. Corrielli, A. Crespi, M.-H. Yung, R. Osellame, and A. Peruzzo, Experimental perfect state transfer of an entangled photonic qubit, Nature Commun. 7, 11339 (2016). oll01 H. Ollivier and W. H. Zurek, Quantum discord: a measure of the quantumness of correlations, Phys. Rev. Lett. 88, 017901 (2001). hen01 L. Henderson and V. Vedral, Classical, quantum and total correlations, J. Phys. A 34, 6899 (2001). mod12 K. Modi, A. Brodutch, H. Cable, T. Paterek, and V. Vedral, The classical-quantum boundary for correlations: Discord and related measures, Rev. Mod. Phys. 84, 1655 (2012). ber17 A. Bera, T. Das, D. Sadhukhan, S. S. Roy, A. Sen(De) and U. Sen, Quantum discord and its allies: a review of recent progress, Rep. Prog. Phys. 81, 024001 (2017). hu18 M. L. Hu, X. Hu, J. Wang, Y. Peng, Y. R. Zhang, and H. Fan, Quantum coherence and geometric quantum discord, Phys. Rep. 762764, 1 (2018). yu07 T. Yu and J. H. Eberly, Evolution from entanglement to decoherence of bipartite mixed "X" states, Quantum Info. Comput. 7, 459 (2007). ali10 M. Ali, A. R. P. Rau, and G. Alber, Quantum discord for two-qubit X states, Phys. Rev. A 81, 042105 (2010). che11 Q. Chen, C. Zhang, S. Yu, X. X. Yi, and C. H. Oh, Quantum discord of two-qubit states, Phys. Rev. A 84, 042313 (2011). que12 N. Quesada, A Al-Qasimi, and D. F. James, Quantum properties and dynamics of X states, J. Mod. Opt. 59, 1322 (2012). gis07 N. Gisin and R. Thew, Quantum communication, Nature Photon. 1, 165 (2007). chr05 M. Christandl, N. Datta, T. C. Dorlas, A. Ekert, A. Kay, and A. J. Landahl, Perfect transfer of arbitrary states in quantum spin networks, Phys. Rev. A 71, 032312 (2005). com The transfer time τ_u then scales with the system size N, in line with the Lieb-Robinson bound <cit.>. gen16 V. X. Genest, L. Vinet, A. Zhedanov, Quantum spin chains with fractional revival, Ann. Phys. 371, 348367 (2016). kni98 E. Knill and R. Laflamme, Power of One Bit of Quantum Information, Phys. Rev. Lett. 81, 5672 (1998). dat08 A. Datta, A. Shaji, and C. M. Caves, Quantum Discord and the Power of One Qubit, Phys. Rev. Lett. 100, 050502 (2008). lan08 B. P. Lanyon, M. Barbieri, M. P. Almeida, and A. G. White, Experimental Quantum Computing without Entanglement, Phys. Rev. Lett. 101, 200501 (2008). wan19 W. Wang, J. Han, B. Yadin, Y. Ma, J. Ma, W. Cai, Y. Xu, L. Hu, H. Wang, Y. P. Song, Mile Gu, and L. 
Sun, Witnessing Quantum Resource Conversion within Deterministic Quantum Computation Using One Pure Superconducting Qubit, Phys. Rev. Lett. 123, 220501 (2019). dak10 B. Dakić, V. Vedral, and C. Brukner, Necessary and sufficient condition for nonzero quantum discord, Phys. Rev. Lett. 105, 190502 (2010). gir12 D. Girolami and G. Adesso, Observable measure of bipartite quantum correlations. Phys. Rev. Lett. 108, 150403 (2012). yun06 M.-H. Yung, Quantum speed limit for perfect state transfer in one dimension, Phys. Rev. A 74, 030303(R) (2006). mic19 K. Micadei, J. Peterson, A. Souza, R. Sarthour, I. Oliveira, G. Landi, T. Batalhao, R. Serra, and E. Lutz, Reversing the direction of heat flow using quantum correlations, Nature Commun. 10, 2456 (2019). lar69 A. Larkin and Y. N. Ovchinnikov, Quasiclassical method in the theory of superconductivity, Sov. Phys. JETP 28, 1200 (1969). swi18 B. Swingle, Unscrambling the physics of out-of-time-order correlators, Nature Phys. 14, 988 (2018). lew19 R. J. Lewis-Swan, A. Safavi-Naini, A. M. Kaufman, and A. M. Rey, Dynamics of quantum information, Nature Rev. Phys. 1, 627634 (2019). gar23 I. García-Mata, R. A. Jalabert and D. Wisniacki, Out-of-time-order correlations and quantum chaos, Scholarpedia 18,55237 (2023). lui17 D. J. Luitz and Y. Bar Lev, Information propagation in isolated quantum systems, Phys. Rev. B 96, 020406(R) (2017). lin18 C.-J. Lin and O. I. Motrunich, Out-of-time-ordered correlators in a quantum Ising chain, Phys. Rev. B 97, 144304 (2018). bao20 J.-H. Bao and C.-Y. Zhang, Out-of-time-order correlators in the one-dimensional XY model, Commun. Theor. Phys. 72, 085103 (2020). lie72 E. H. Lieb and D. W. Robinson, The finite group velocity of quantum spin systems, Commun. Math. Phys. 28, 251 (1972). woo98 W. K. Wootters, Entanglement of Formation of an Arbitrary State of Two Qubits, Phys. Rev. Lett. 80, 2245 (1998). sch07 M. A. Schlosshauer, Decoherence and the Quantum-To-Classical Transition, (Springer, Berlin, 2007). Supplemental Material: Boosting information transfer in a quantum correlated medium In this Supplemental Material, we will (I) present the analytical evaluation of the time evolution of the local populations using the Jordan-Wigner transformation, (II) derive the condition for faster state transfer, (III) analyze the out-of-time-order correlator of the perfect state transfer and the uniformly coupled quantum spin chain and (IV) show that there is no information boost for Bell states. § ANALYTICAL RESULTS §.§ Jordan-Wigner transformation In this section we compute the local populations of the spin chain and from them derive the expressions for the state transfer fidelity and the von Neumann entropy. Our starting point is the perfect state transfer Hamiltonian for a chain of N qubits with nearest neighbor interaction, Eq. (2) of the main text, H = ω∑_j=1^N σ^z_j + ∑_k=1^N-1J_k (σ^x_kσ^x_k+1+σ^y_kσ^y_k+1) with J_k = λ/4√(k·(N-k)), where ω is the level spacing of the qubits and the J_k are the interaction constants between adjacent qubits. Let ρ(0) be the composite initial state of the full qubit chain. The interactions J_k are engineered in such a manner that after a constant time τ_u = π/λ (which we consider the final time of the process), the state of the chain corresponds to the mirror image of its initial configuration (the initial local states of the individual qubits are swapped with their mirror image states) <cit.>. 
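This mirror-inversion property is easy to verify numerically for a small chain. The sketch below (not part of the supplement; N = 5 and ω = J = 1 are arbitrary choices, and the normalization λ = 4J/N of the main text is used) builds the full 2^N-dimensional Hamiltonian and checks that a single excitation initially on the first site is found on the last site with probability ≈ 1 at τ_u = π/λ.

```python
import numpy as np
from scipy.linalg import expm

def op_at(single, site, N):
    # embed a single-qubit operator at `site` (0-based) in an N-qubit chain
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, single if j == site else np.eye(2))
    return out

N, J, omega = 5, 1.0, 1.0
lam = 4 * J / N
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = omega * sum(op_at(sz, j, N) for j in range(N))
for k in range(1, N):                                  # J_k = (lam/4) sqrt(k (N-k)), k = 1..N-1
    Jk = lam / 4 * np.sqrt(k * (N - k))
    H += Jk * (op_at(sx, k - 1, N) @ op_at(sx, k, N) + op_at(sy, k - 1, N) @ op_at(sy, k, N))

tau_u = np.pi / lam
U = expm(-1j * H * tau_u)

ground = np.array([0.0, 1.0], dtype=complex)           # local ground state (sigma_z = -1)
excited = np.array([1.0, 0.0], dtype=complex)          # local excited state (sigma_z = +1)

def product_state(bits):                               # bits[j] = 1 means site j is excited
    v = np.array([1.0 + 0j])
    for b in bits:
        v = np.kron(v, excited if b else ground)
    return v

psi0 = product_state([1] + [0] * (N - 1))              # excitation on the first site
target = product_state([0] * (N - 1) + [1])            # excitation on the last site
print("transfer probability at tau_u:", abs(target.conj() @ (U @ psi0)) ** 2)   # ~1.0
```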
To avoid unphysical Hamiltonians and violation of causality, we normalize the interaction constants such that their maximum value is bounded. Specifically, we set λ = 4J/N, so that max{J_k} = J, independent of N <cit.>. The perfect transfer time then scales linearly with the system size N, and we obtain τ_u = Nπ/(4J). The initial states considered in the main text can be written in the form ρ(0) = ρ^0_12⊗0^⊗ (N-2), where ρ^0_12 = ρ_X = [ a 0 0 w; 0 b z 0; 0 z^∗ c 0; w^∗ 0 0 d ], is an X state <cit.>. Concretely, we consider initial states that can be parametrized as ρ^0_12 = ρ_1 ⊗ρ_2 + χ, with ρ_j = [ p_j 0; 0 1-p_j ] and χ = -iα(σ^+_1 σ^-_2 - σ^-_1 σ^+_2), with real parameter α∈ [0,1/4]. For any α≠ 0, these states have non-zero geometric quantum discord <cit.> D(ρ_X) = 1/2min{4(w^2+z^2),(a-c)^2+(b-d)^2+2(w+z)^2}, but are not necessarily entangled, since their concurrence is given by <cit.> C(ρ_X) = 2 max{0,z-√(ad),w-√(bc)}. For initial states of the form <ref>, the evolution of the local populations can be conveniently calculated by performing a Jordan-Wigner transformation <cit.>. The transformed Hamiltonian reads H_JW = ω∑_k=1^N (2 c^†_k c_k - 1) + 2∑_k=1^N-1 J_k(c^†_k c_k+1 + c^†_k+1 c_k), where c^†_k and c_k are the respective fermionic ladder operators. They are related to the usual Pauli operators by c_k = exp(iπ∑^k-1_n=1 c^†_nc_n) σ^-_k, c_j^† = exp(iπ∑^k-1_n=1 c^†_nc_n) σ^+_k. If we restrict ourselves only to the evolution of the local populations and nearest-neighbor correlations, we obtain an effective von Neumann equation Ż(t) = i[Ω,Z(t)], for the correlations Z_jk(t) = ⟨σ^+_jσ^-_k⟩(t), representing the populations on the diagonal Z_jj(t) = p_j(t), and nearest-neighbor correlations Z_j,j±1 =⟨σ^+_jσ^-_j±1⟩(t) on the off-diagonals. The matrix Ω = diag(2*J,ω,2*J), where *J=(J_1,J_2…,J_N-1)^T, acts as an effective tridiagonal Hamiltonian with the interaction strengths on the off-diagonals and the qubit level spacing ω on the diagonal. §.§ One-excitation subspace The class of initial states can be further extended beyond <ref> by considering initial states that lie in the single excitation sector, i.e. in the subspace that contains only one spin excitation. Since the total magnetization in the z-direction is conserved, [H,∑_k σ^z_k] = 0, the Hamiltonian can be decomposed into mutually orthogonal subspaces corresponding to a fixed number of excitations and the evolution inside each subspace is consequently independent from the remaining Hilbert space. If we denote by P_1 = ∑_k 1_k the projector on the single excitation sector with |1_k⟩∈{|10⋯ 0⟩, |010⋯ 0⟩, |0010⋯ 0⟩, …, |0⋯ 01⟩}, then the projection of the initial X state <ref> becomes P_1 ρ_X P_1 = [ b z; z^∗ c. ] We choose z = iα, where α∈ [0,1/2] can now be twice as large as before. The one excitation sector thus supports only X states for which a = d = w = 0. The Hamiltonian inside this subspace becomes P_1 H P_1 = Ω, which is equal to the adjacency matrix of the physical graph corresponding to the spin chain and it coincides with the effective Hamiltonian of the Jordan-Wigner transform, <ref>. The local populations on the full Hilbert space can be written as p_j(t) = [ρ(t)Π_j], with projectors Π_j = 1^⊗ j-1⊗[ 1 0; 0 0 ]⊗1^⊗ N-j. In the single excitation sector, the projector Π_j reduces to P_1 Π_j P_1 = j, such that the populations coincide with the ones obtained from the Jordan-Wigner transform p_j(t) = Z_jj(t), following from the same effective von Neumann equation (<ref>). 
Both approaches, the Jordan-Wigner transformation and projecting onto the single excitation sector, thus lead to the same evolution equation for the local populations differing only in the permitted values for α. §.§ Local populations For initial states that either lie in the single excitation subspace, <ref>, or can be parametrized as <ref>, we accordingly only need to consider the effective von Neumann equation <ref>. The solution reads Z(t) = e^iΩ tZ(0)e^-iΩ t, where the time evolution operator <cit.> U(t) = e^iΩ t = exp(iλ S_x t) represents the rotation of a spin (N-1)/2 particle around the x-axis. In this description, the initial states take the form Z(0) = [ p_1(0) iα ⋯ 0; -iα p_2(0) ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ 0 ] corresponding to the initial X state in Eq. (14) of the main text. The populations then follow from the diagonal elements of Z(t), which can be straightforwardly computed in terms of the matrix elements of the unitary (<ref>) p_k(t) = Z_kk(t) = p_1 |U_k1|^2 + p_2 |U_k2|^2 - 2 αIm[U_k1^∗ U_k2]. We are in particular interested in the population of the last spin p_N(t) = Z_NN(t) = p_1 |U_N1|^2 + p_2 |U_N2|^2 - 2 αIm[U_N1^∗ U_N2]. The matrix elements U_kl follow from the Wigner d-matrix <cit.> d^j_m^' m(θ) = [(j+m^' )!(j-m^' )!(j+m)!(j-m)!]^1/2∑_s=s_min^s_max[(-1)^si^m-m^'cos(θ/2)^2j+m-m^' -2ssin(θ/2)^m^' -m+2s/(j+m-s)!s!(m^' -m+s)!(j-m^' -s)!], where s_min = max{0,m-m^'} and s_max = min{j+m,j-m^'} and θ = λ t. <Ref> represents the elements of the rotation matrix of a spin j = (N-1)/2 particle with magnetizations running over m = -j,-j+1,…,j-1,j. We therefore have U_kl = d^(N-1)/2_-(N-1)/2+k-1,-(N-1)/2+l-1(θ), = [(k-1)!(N-k)!(l-1)!(N-l)!]^1/2∑_s=s_min^s_max[(-1)^si^l-kcos(θ/2)^(N-1)+l-k-2ssin(θ/2)^k-l+2s/(l-1-s)!s!(k-l+s)!(N-k-s)!], with s_min = max{0,l-k} and s_max = min{l-1,N-k}. Plugging in (k,l) = (1,N) and (k,l) = (2,N), we get for the individual contributions [(k-1)!(N-k)!(l-1)!(N-l)!]^1/2 = (N-1)!, (k,l) = (1,N) √((N-1)!(N-2)!), (k,l) = (2,N) s_min = max{0,l-k} = N-1, (k,l) = (1,N) N-2, (k,l) = (2,N) s_max = min{l-1,N-k} = N-1, (k,l) = (1,N) N-2, (k,l) = (2,N) (-1)^si^l-k = (-1)^N-1i^N-1, (k,l) = (1,N) (-1)^N-2i^N-2, (k,l) = (2,N) (l-1-s)!s!(k-l+s)!(N-k-s)! = (N-1)!, (k,l) = (1,N) (N-2)!, (k,l) = (2,N) (N-1)+l-k-2s = 0, (k,l) = (1,N) 1, (k,l) = (2,N) , k-l+2s = N-1, (k,l) = (1,N) N-2, (k,l) = (2,N) . Finally, using <ref> together with <ref>, we obtain the time evolution of the local population of the Nth qubit of the chain as p_N(t) = p_1(0) [sin(θ/2)]^2(N-1) + p_2(0) (N-1) [sin(θ/2)]^2(N-2)cos(θ/2)^2 + 2√(N-1)α[sin(θ/2)]^2N-3cos(θ/2). The last term in the above equation is always greater or equal to zero for θ∈ [0,π] which is equivalent to t ∈ [0,τ_u], where τ_u = π/λ = Nπ/(4J) is the transfer time of the uncorrelated chain. §.§ Fidelity and von Neumann entropy The Hamiltonian (<ref>) does not create coherences. Therefore, the reduced density matrices of the local qubits are always diagonal. Knowledge of the local populations is hence sufficient to compute the fidelity and the von Neumann entropy as will be illustrated below. For two single-qubit states ρ and σ, the fidelity can be computed according to <cit.> F(ρ,σ) = O(ρ,σ) + √([1-O(ρ,ρ)][1-O(σ,σ)]), where O(ρ,σ) = [ρσ]. To detect the arrival of a state at the end of the chain, we compute the fidelity of the initial state at the first site with the local state at the last site. In the absence of coherences, the fidelity reduces to F(ρ_1(0),ρ_N(t)) = [√((1-p_1(0))(1-p_N(t)))+√(p_1(0)p_N(t))]^2. 
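These closed-form expressions can be evaluated directly. The following sketch (an illustration only; p_1 = p_2 = 0.3 and α = 0.2 are arbitrary values compatible with positivity, since p_1(1-p_1) = 0.21) locates, on a fine time grid, the first instant at which the population of the last qubit returns the sender's initial population; with α = 0 this happens only at τ_u, while a nonzero α moves it to an earlier time, in line with the boosted transfer discussed in the main text.

```python
import numpy as np

N, J = 10, 1.0
lam = 4 * J / N
tau_u = np.pi / lam
p1 = p2 = 0.3                     # illustrative initial populations of the first two qubits

def p_last(t, alpha):
    # closed-form population of the last qubit, with theta = lam * t
    th = lam * t
    s, c = np.sin(th / 2), np.cos(th / 2)
    return (p1 * s ** (2 * (N - 1))
            + p2 * (N - 1) * s ** (2 * (N - 2)) * c ** 2
            + 2 * np.sqrt(N - 1) * alpha * s ** (2 * N - 3) * c)

def fidelity(pN):
    # fidelity between the diagonal qubit states diag(p1, 1-p1) and diag(pN, 1-pN)
    return (np.sqrt((1 - p1) * (1 - pN)) + np.sqrt(p1 * pN)) ** 2

t = np.linspace(0.0, tau_u, 20001)
for alpha in (0.0, 0.2):
    pN = p_last(t, alpha)
    crossing = np.flatnonzero(pN >= p1 - 1e-12)        # first time p_N reaches p_1(0)
    t_star = t[crossing[0]]
    print(f"alpha = {alpha}: p_N first reaches p_1 at t/tau_u = {t_star / tau_u:.3f},",
          f"fidelity there = {fidelity(p_last(t_star, alpha)):.6f}")
```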
The initial local state is hence transferred with unit fidelity when p_N(t) = p_1(0). Initially, we have p_N(0) = 0 (the last spin starts in the ground state). In the absence of quantum correlations, for t>0, there will then be a monotonic increase in the population until the state is transferred (cf. <ref>). For every positive value of α the last term in <ref> is always positive during the transfer time, any amount of nonzero quantum discord can lead to faster state transfer. Note that even for α < 0, the Hamiltonian <ref> still guarantees a transfer time of at most τ_u=π/λ where the correlation term in <ref> vanishes. In order to evaluate the information received at the end of the chain, we use the von Neumann entropy. Because local states always remain diagonal, the von Neumann entropy is simply given by I = -[ρ_N(t)log(ρ_N(t))] = -p_N(t)log(p_N(t)) - (1-p_N(t))log(1-p_N(t)). The information flow can accordingly be conveniently expressed as İ = - ṗ_N(t)log(p_N(t)/1-p_N(t)). where p_N(t) is given in <ref>. § CONDITION FOR FASTER STATE TRANSFER In quantum systems with local interactions like the perfect state transfer model <ref>, every order of the power series of the time evolution operator U=exp(-iHt) can only connect adjacent sites. For instance, the evolution of a state |ψ_1⟩ = |1⟩⊗|0⟩^⊗ (N-1) is given by U |ψ_1⟩ = exp(-iHt)|ψ_1⟩ = ∑_n=0^∞(-i)^n/n! t^n H^n|1⟩⊗|0⟩^⊗ (N-1) . Every order of the Hamiltonian can shift the excitation through the chain by at most one site such that after applying the Hamiltonian n times, the state is generally in a superposition of the form H^n |1⟩⊗|0⟩^N-1 = ∑_s=1^n ∑_j=1^M(n) c_j(n,s) |s_σ(j)⟩⊗|0⟩^N-1-n where s ≤ n corresponds to a fixed number of excitations and |s_σ(j)⟩ denotes all possible permutations of ones and zeros for a given s (all the possibilities to distribute s excitations over n sites) with M(n) = [ n; s ] = n!/s!(n-s)! elements. The c_j(n,s) are arbitrary coefficients that depend on the specifics of the Hamiltonian. Every order of H is accompanied by a corresponding order of t such that, at each step in time, any excitation can at most reach its immediate neighbor. Information propagation is moreover upper bounded by the Lieb-Robinson velocity v_LR = 2J and can thus spread at most linearly in time. This limitation cannot be overcome even in the presence of correlations. However, transmission of a qubit end-state along the spin chain is still significantly improved in the presence of nonclassical correlations. To show this, we consider a quantum spin chain of the general form H = ∑_j ω_jσ^z_j + H_int with possibly different level spacings ω_j and general nearest-neighbor coupling Hamiltonian H_int. We assume that the initial correlated state ρ^0_12,c = ρ_u + χ can be decomposed into a diagonal part ρ_u with [ρ_u] = 1 and correlations χ with [χ] = 0. We further assume that the local initial states are diagonal and that the Hamiltonian does not generate coherences during the evolution, at least not in the subspace the evolution is restricted to. We separate the unitary time evolution into uncorrelated and correlated parts ρ_c(t) = ρ_u(t) + U χ U^†. In order to boost state transfer, only the local populations matter and we demand that, at short times, more probability is shifted towards the state |ψ_2⟩ = |01⟩⊗|0⟩^⊗ (N-2) than in the uncorrelated case, that is, ⟨ψ_2|ρ̇_c(t)|ψ_2⟩ > ⟨ψ_2|ρ̇_u(t)|ψ_2⟩, where ⟨ψ_2|ρ̇_c(t)|ψ_2⟩ = ⟨ψ_2|ρ_c(0)|ψ_2⟩ -i ⟨ψ_2| [H,ρ_u(0)]|ψ_2⟩ -i ⟨ψ_2| [H,χ]|ψ_2⟩. 
Since the local initial states are equal and the Hamiltonian appears only in first order (thus connecting only the first and second site), condition (<ref>) implies that -i⟨01|[H_int,χ]|01⟩ > 0. Note that only the interaction Hamiltonian matters because the diagonal part does not connect different sites of the chain. A minimal requirement for enhanced transmission is therefore that the commutator [H_int, χ] ≠ 0 does not vanish. In the perfect state transfer model (<ref>) condition <ref> ensures that local initial correlations on the sender's side, simultaneously reduce the arrival time and increase the maximum information flow İ_max (see Figs.  1 and 2 of the main text) for, in principle, arbitrary system size N (see <ref>). § OUT-OF-TIME-ORDER CORRELATOR In this section, we provide additional information on the out-of-time-order correlator in the spin chain (<ref>). We numerically compute the squared commutator C(x,y,t) = ⟨ [W(x,t),V(y,0)]^†[W(x,t),V(y,0)]⟩, of the time evolved Heisenberg operator W(x,t) localized at x and the operator V(y,0) localized at y at the initial time. For operators that are both unitary and Hermitian, the correlator conveniently reduces to C(x,y,t) = 2-2Re[F(x,y,t)], where F(x,y,t) = ⟨σ^z(x,t)σ^z(y,0)σ^z(x,t)σ^z(y,0)⟩ is the out-of-time-order correlator. The action of F(x,y,t) can be decomposed into consecutive steps: the local Pauli operator at site y acts on the initial state |ψ(y,0)⟩ = σ^z(y,0) |ψ⟩. Then, the time evolved operator located at another site x acts on the resulting state yielding |ψ(y,0;x,t)⟩ = σ^z(x,t)|ψ(y,0)⟩. The out-of-time-order correlator evaluates the overlap, F(x,y,t) = ⟨ψ(x,t;y,0)|ψ(y,0;x,t)⟩, between the perturbed state |ψ(y,0;x,t)⟩ and a state |ψ(x,t;y,0)⟩, where both actions are performed in reverse order. Initially, all Heisenberg operators commute and the overlap is one, yielding a zero correlator C(x,y,t) = 0. As time increases, the time evolution will gradually connect Heisenberg operators at increasingly distant sites. As soon as two sites j and k have overlapping support, the out-of-time-order correlator will no longer be one, and the correlator C(x,y,t)≠ 0 will be different from zero. The latter therefore provides a measure of operator spreading, and is related to information propagation <cit.>. For locally interacting quantum systems (where interactions decay at least exponentially as a function of the distance between sites), the above statement is true in general. The operator growth rate is bounded from above by the Lieb-Robinson velocity v_LR <cit.>. Generically, the correlator C(x,y,t) spreads through space with a wave front, whose tail has a universal scaling behavior <cit.> C(r,t) ∼exp(Λ (t-r(t)/v)^1+p/t^p), where r = |x-y| is the distance between sites and Λ may in some cases be seen as a quantum extension of the classical Lyapunov exponent <cit.>. In this regard, the speed of the wave front v effectively defines a causal light cone, and confines the connected region. In chaotic systems v is usually referred to as the butterfly velocity. The parameter p is model-dependent and controls the broadening of the wave front. For a given time t, the correlator thus decays exponentially in space. For sites located beyond the light cone r > vt, there is also an exponential decay of C(r,t). Note that the reverse statement is also true: Lieb-Robinson type bounds imply locality of interactions <cit.>. §.§ Symmetries We now proceed by investigating the symmetry properties of the out-of-time-order correlator in time and space. 
For an arbitrary uncorrelated initial condition where Z(0) = diag(p_1(0),p_2(0),…,p_N(0)), (cf. <ref>), the evolution of the populations will be of the form p_k(t) = ∑_m=1^N |U_km|^2. From the symmetry relations of the Wigner d-matrix elements (<ref>), it follows that (-1)^N-kU_km(t) = U_N-k+1,l(T-t) and thus |U_km|^2(t) = |U_N-k+1,m|^2(T-t). This implies that spins opposite to each other undergo time reversed dynamics, i.e. ⟨σ^z_k⟩(t) = ⟨σ^z_N+1-k⟩(T-t). In order to analyze the properties of the out-of-time-order correlator, we need to investigate the Heisenberg operators σ^z_k(t). The Pauli operators in the single excitation sector, or the Jordan-Wigner transformation respectively, are given by σ^z_k(0) ∼ 2k-1, where k projects onto the kth element and ∼ indicates the representation in single excitation sector or the Jordan-Wigner transformation respectively. The corresponding Heisenberg operators evolve according to σ^z_k(t) ∼ 2U^†(t) k U(t) - 1 = 2∑_m,n=1^N U^∗_km(t)U_kn(t) mn-1, (<ref>)= 2∑_m,n=1^N U^∗_N-k+1,m(t-T)U_N-k+1,n(t-T) mn-1, ∼σ^z_N+1-k(T-t). This immediately implies that the out-of-time-order correlator has the same time-reversal mirror symmetry F(x,y,t) = F(N-x+1,N-y+1,t-T). This is true irrespective of the initial condition. Again invoking symmetry relations of the Wigner d-matrix, we further obtain U_kl(T-t) = U_k,N-l+1(t) (-1)^l-1 and therefore, proceeding as before, we obtain σ^z_k(T-t) ∼ 2∑_m,n=1^N U^∗_km(t)U_kn(t) mn-1, = 2∑_m,n=1^N U^∗_k,N-m+1(t-T)U_k,N-n+1(t-T) mn-1, ∼σ^z_k(t). For symmetric initial states this implies that the out-of-time-order correlator has the additional symmetry F(x,y,t) = F(x,y,T-t). Depending on the (initial) state ρ, over which the quantum mechanical average is performed, some of those symmetries might be absent. <Ref> shows the out-of-time-order correlator for the perfect state transfer Hamiltonian (<ref>) for a chain of length N = 250 with the reference site in the middle of the chain, y = N/2 = 125. Here, the quantum mechanical average is taken over the completely mixed state ρ = 1/d, so both <ref> are valid, causing the high symmetry of the out-of-time-order correlator. For small times (compared to the length of the chain), the out-of-time-order correlator spreads linearly (cf. <ref>) with the butterfly velocity of a uniformly coupled chain, v_LR = 2J, <ref>b. Considering the full perfect state transfer time up to τ_u≈ 200 reveals how the engineered couplings conspire to perfectly invert the dynamics. After the initial linear spread, the out-of-time-order correlator bends and reaches the boundaries at τ_u/2 ≈ 98 where the causal region starts to shrink to a point again at the final time τ_u. The system size is fundamentally encoded into the Hamiltonian and therefore the system knows its size beforehand, explaining the curving of the out-of-time-order correlator near the boundaries. These features are specific to the particular form of the interactions and the resulting symmetries and are thus not expected generically. To highlight these special properties, we directly compare the behavior to a system with uniform interactions while otherwise keeping the same parameters. <Ref> displays snapshots of the out-of-time-order correlator at the same times as in <ref>. Since now the system is agnostic to its size, the out-of-time-order correlator exhibits a linear growth at the butterfly velocity throughout until hitting the boundaries. 
The operators hence spread to the boundaries faster than with engineered couplings but information is transmitted only incompletely. Once a region is causally connected it remains connected and after reflecting at the boundaries information gets scrambled all across the chain and becomes practically inaccessible at the receiving end. In <ref> the quantum mechanical average is taken over the local initial X state which is not symmetric. Therefore only <ref> applies and the out-of-time-order correlator is invariant only upon inverting space and reversing time. We compare the uncorrelated (blue) and correlated (red) states and find that the out-of-time-order correlators coincide for y=1,2. The effect of the correlations first becomes apparent for reference spin y=3, displayed in <ref>e,f (corresponding to Fig. 1d of the main text). Quantum correlations break the time-reversal symmetry, <ref>, and shift the weight towards the wavefront such that more information is transmitted at the same time thereby increasing the energy flow and boosting state transfer. Because of the small amplitude of the out-of-time-order correlator and the large system size, the effect is less pronounced in the illustration in <ref>f. As before, we compare the results to the uniformly coupled chain in <ref> which shows linear growth and scrambling in accordance with <ref>. §.§ Lieb-Robinson velocity In this section, we show why the perfect state transfer Hamiltonian <ref> leads to information propagation in a local region at maximum speed given by the Lieb-Robinson velocity, as observed in <ref>. The off-diagonal elements of the Hamiltonian in the single excitation sector are given by J_k = λ/4√(k· (N-k)). For elements of J_k close to the middle of the chain with k = N/2 + m where |m| ≪ N, we obtain (with λ = 4J/N) J_k = 2J[1 - 2m^2/N^2 + 𝒪(m^4)]. Thus, in the vicinity of the chain's center, the interaction strengths are constant up to second order in the distance m. Therefore, in a region where the correction is still small, i.e. 2m^2/N^2 ≪ 1, the out-of-time-order correlator exhibits the same linear growth as the uniformly coupled chain. In <ref>, the system size is N=250 and hence, to a good approximation, the operators spread linearly over the first m ≈ 25 sites (where 2m^2/N^2 = 0.02). § NO BOOST FOR BELL STATES In this section we show that using the canonical Bell states as initial states for the perfect state transfer protocol does not lead to a boost. The initial two-qubit state ρ^0_12 = ρ_X (<ref>) is separable with zero concurrence for all α∈ [0,1/4]. In particular, entanglement is thus not necessary for faster state transfer. We observe, in fact, neither a boost in the state transfer nor in the information flow when a maximally entangled Bell state is used. The canonical Bell states belong to the class of X states ρ^0_12 = ρ_X = [ a 0 0 w; 0 b z 0; 0 z^∗ c 0; w^∗ 0 0 d ], where every parameter is a real number equal to either 0 or ± 1/2. In this case the correlations take the form χ^B = [ 0 0 0 w; 0 0 z 0; 0 z 0 0; w 0 0 0 ], where either w = ± 1/2 and z=0 or vice versa. Most importantly, Bell states do not satisfy the necessary condition (<ref>), i.e. [H_int,χ^B] = 0, and they leave the out-of-time-order correlator invariant under time-reversal <ref>. Therefore Bell states do not affect the dynamics of the system and entanglement is thus neither necessary nor sufficient to boost information transfer. As demonstrated in Fig.
2 of the main text, maximally entangled states indeed provide the highest enhancement possible but only if the basis of the quantum correlations matches the interaction Hamiltonian in the sense of <ref>. 99 Schr04 M. Christandl, N. Datta, A. Ekert, and A. J. Landahl, Perfect State Transfer in Quantum Spin Networks, Phys. Rev. Lett. 92, 187902 (2004). Schr05 M. Christandl, N. Datta, T. C. Dorlas, A. Ekert, A. Kay, and A. J. Landahl, Perfect transfer of arbitrary states in quantum spin networks, Phys. Rev. A 71, 032312 (2005). Syu07 T. Yu and J. H. Eberly, Evolution from entanglement to decoherence of bipartite mixed "X" states, Quantum Info. Comput. 7, 459 (2007). Sali10 M. Ali, A. R. P. Rau, and G. Alber, Quantum discord for two-qubit X states, Phys. Rev. A 81, 042105 (2010). Sche11 Q. Chen, C. Zhang, S. Yu, X. X. Yi, and C. H. Oh, Quantum discord of two-qubit states, Phys. Rev. A 84, 042313 (2011). Sque12 N. Quesada, A Al-Qasimi, and D. F. James, Quantum properties and dynamics of X states, J. Mod. Opt. 59, 1322 (2012). Stak99 M. Takahashi, Thermodynamics of One-Dimensional Solvable Models, (Cambridge University Press, Cambridge, 1999). Swig12 E. P. Wigner, Group Theory, (Academic Press, New York, 1959). Sbar13 K. Bartkiewicz, K. Lemr, and A. Miranowicz, Direct method for measuring of purity, superfidelity, and subfidelity of photonic two-qubit mixed states, Phys. Rev. A 88, 052104 (2013). Slin18 C.-J. Lin and O. I. Motrunich, Out-of-time-ordered correlators in a quantum Ising chain, Phys. Rev. B 97, 144304 (2018). Sxu24 S. Xu and B. Swingle, Scrambling Dynamics and Out-of-Time-Ordered Correlators in Quantum Many-Body Systems, PRX Quantum 5, 010201 (2024). Slie72 E. H. Lieb and D. W. Robinson, The finite group velocity of quantum spin systems, Commun. Math. Phys. 28, 251 (1972). Skhe18 V. Khemani, D. A. Huse, and A. Nahum, Phys. Rev. B 98, 144304 (2018). Sxu20 T. Xu, T. Scaffidi, and X. Cao, Does scrambling equal chaos?, Phys. Rev. Lett. 124, 140602 (2020). Sbao20 J.-H. Bao and C.-Y. Zhang, Out-of-time-order correlators in the one-dimensional XY model, Commun. Theor. Phys. 72, 085103 (2020). Smal16 J. Maldacena, S. H. Shenker, and D. Stanford, A bound on chaos, J. High Energ. Phys. 106 (2016). Shas17 K. Hashimoto, K. Murata, and R. Yoshii, Out-of-time-order correlators in quantum mechanics, J. High Energ. Phys. 138 (2017). Sroz17 E. B. Rozenbaum, S. Ganeshan, and V. Galitski, Lyapunov exponent and out-of-time-ordered correlator's growth rate in a chaotic system, Phys. Rev. Lett. 118, 086801 (2017). Sgar18 I. García-Mata, M. Saraceno, R. A. Jalabert, A. J. Roncaglia, and D. A. Wisniacki, Chaos signatures in the short and long time behavior of the out-of-time ordered correlator, Phys. Rev. Lett. 121, 210601 (2018). Swil22 H. Wilming and A. H. Werner, Lieb-Robinson bounds imply locality of interactions, Phys. Rev. B 105, 125101 (2022).
http://arxiv.org/abs/2406.08193v1
20240612132226
Minimal Communication-Cost Statistical Learning
[ "Milad Sefidgaran", "Abdellatif Zaidi", "Piotr Krasnowski" ]
stat.ML
[ "stat.ML", "cs.IT", "cs.LG", "math.IT" ]
OT11011.4 Minimal Communication-Cost Statistical Learning Milad Sefidgaran^ ∤ Abdellatif Zaidi^†^∤ Piotr Krasnowski^ ∤ ^ ∤ Paris Research Center, Huawei Technologies France ^† Université Gustave Eiffel, France June 17, 2024 ============================================================================================================================================================================================= § ABSTRACT A client device which has access to n training data samples needs to obtain a statistical hypothesis or model W and then to send it to a remote server. The client and the server devices share some common randomness sequence as well as a prior on the hypothesis space. In this problem a suitable hypothesis or model W should meet two distinct design criteria simultaneously: (i) small (population) risk during the inference phase and (ii) small `complexity' for it to be conveyed to the server with minimum communication cost. In this paper, we propose a joint training and source coding scheme with provable in-expectation guarantees, where the expectation is over the encoder's output message. Specifically, we show that by imposing a constraint on a suitable Kullback-Leibler divergence between the conditional distribution induced by a compressed learning model W given W and the prior, one guarantees simultaneously small average empirical risk (aka training loss), small average generalization error and small average communication cost. We also consider a one-shot scenario in which the guarantees on the empirical risk and generalization error are obtained for every encoder's output message. § INTRODUCTION The development of distributed or decentralized machine learning solutions has witnessed a rapid increase over the last years, in particular, due to abundant applications in various areas. Examples include the federated learning (FL) of <cit.>, the split learning (SL) of <cit.> and the in-network learning (INL) of <cit.>. Often, the client devices in these architectures (also simply referred to as “clients") process their available training data samples locally and then send the output hypotheses or models to a central node or server. The process is repeated until a given loss function is minimized over the training data set; and, typically, this induces a large communication overhead. For this reason, the search for efficient model compression techniques is of paramount importance, especially in bandwidth-constrained settings such as model transmission over a finite-capacity wireless channel. Existing approaches to model training and compression are mostly based on a “separation” principle. That is, a first processing stage during which the client learns a suitable (local) model or hypothesis on the basis of the available training dataset followed by an independent second processing stage during which the client produces a compressed version of the obtained model that it conveys to the server. Specifically, there exist two main techniques for model training and transmission: Model (update) compression: In this class of methods, the client first learns a model using its available dataset. Then, it compresses the obtained model (or its update) using techniques such as quantization <cit.>, sparsification <cit.> or combination of them <cit.>, as well as “Low-rank decomposition” <cit.>. The reader is referred to <cit.> for more details on this class of methods. 
Codebook-based compression: This class of methods was initiated by <cit.> which uses <cit.>, and it is based on the aforementioned “separation” principle. Here, the client and the server first use a shared prior Q and randomness U to agree on a common model source codebook 𝒞={W_1,W_2,...,W_N}; and, then, the client sends a locally trained model W by choosing a suitable associated index i in the source codebook, via a variant of importance sampling <cit.>. Upon receiving the index i, the server uses the codebook 𝒞 to recover the model W_i. It is shown experimentally that this method can reduce bandwidth consumption up to 50 times compared with the classical model compression methods <cit.>. In all aforementioned prior art works, the problem of model transmission is studied irrespective of how well the conveyed hypothesis or model performs during the inference phase, i.e., the population risk. For instance, the sent compressed model is only guaranteed to perform well on the training dataset. In this paper, we consider the problem of joint design of model training and compression in a manner that guarantees simultaneously good performance during the inference phase and minimal communication cost. In doing so, we constraint the source encoder not to know the learning distribution, the conditional P_W|S induced by the learning algorithm where W is the chosen model or hypothesis and S is the available training dataset. For instance, we depart from previous analyses, such as <cit.>, in two main aspects: i) They analyzed only the communication performance (rate versus distortion), while we analyze the communication rate jointly with the generalization error and the empirical risk of the recovered model, and ii) our analysis does not require knowledge of the often difficult-to-estimate conditional P_W|S, especially in high-dimension statistical learning problems. A question similar to that of <cit.> has also been studied in <cit.> for the problem of sending n “concepts” (or equivalently stochastic mappings). In both <cit.> and <cit.>, the studied question is about sending (a possibly distorted version of) the learned model. In other words, two problems of learning and source encoding are studied separately. Besides, it is assumed in <cit.> that the communication constraints can only negatively affect performance, see e.g. the discussion after <cit.>. Contributions. In this paper, we propose a joint training and source coding scheme whose analysis reveals that by imposing a constraint on a suitable Kullback-Leibler divergence between the conditional distribution induced by a compressed learning model W given W and the prior, one can provably guarantee simultaneously small average training loss, small average generalization error, and small average communication cost. In part, the proof techniques use and extend judiciously those of <cit.>, which established formal connections between the generalization error of a statistical learning algorithm and its “compressibility” in a suitable information-theoretic sense. Furthermore, we also consider a one-shot variant of the scheme, that we analyze, in which the guarantees on the empirical risk and generalization error are shown to hold for every single encoder output message. § PROBLEM SETUP We consider a point-to-point setup for joint local training and remote source coding, as depicted in Fig. <ref>. Data. Let Z be the input data taking values over the input space 𝒵, according to an unknown distribution μ. 
We assume a training dataset S≜{Z_1,…,Z_n}∼μ^⊗ n is available, whose joint distribution we denote by P_S. Learning algorithm. In this work, we consider a general stochastic learning framework. Suppose that the learning algorithm 𝒜: 𝒵^n →𝒲, by having access to S, picks a hypothesis 𝒜(S)=W∈𝒲, possibly non-deterministically. Here, 𝒲⊆ℝ^d denotes the hypothesis space. We denote the conditional distribution induced by this learning algorithm by P_W|S, and the joint distribution of (S,W) by P_S,W. Furthermore, denote the marginal distribution of W under P_S,W by P_W. An example of such a learning algorithm is the popular SGD algorithm. Loss function and risks. The quality of the prediction of a model w∈𝒲 is assessed using a loss function ℓ: 𝒲×𝒵→ℝ_+. In this work, for simplicity, we assume ℓ(z,w) ∈ [0,1]. We denote the population risk with respect to this loss function by ℒ(w)≜𝔼_Z∼μ[ℓ(Z,w)], and the empirical risk by ℒ̂(s,w)≜1/n∑_i∈[n]ℓ(z_i,w), where we used the short-hand notation [n] for the set {1,…,n}⊂ℕ^*. Finally, the generalization error is defined as gen(s,w)≜ℒ(w) - ℒ̂(s,w). Source codebook generation. Similarly to <cit.>, we assume that a common source coding codebook can be constructed at both the sending and the receiving sides using only a shared prior and common randomness. The prior can be defined over 𝒲 or, more generally, over a quantized set 𝒲̂⊆𝒲. This common source codebook can be used for sending the model. Formally, fix some set 𝒲̂⊆𝒲 and let Q∈𝒬 be a prior on 𝒲̂. In addition, let U≜{U_1,…,U_N}∈𝒰^N, N∈ℕ^*, be the common shared randomness, where the U_i are distributed i.i.d. and independent of all other variables. Next, let ℋ_U,N≜{W̃_U[1],…,W̃_U[N]}⊆𝒲̂^N be the common codebook of hypotheses, where each W̃_U[j]∼ Q, for j∈[N], is an instance drawn from the distribution Q using the randomness U_j. We assume that W̃_U[j] is a deterministic function of Q and U_j in order to enable the client and the server to agree on a common codebook. Joint learning and source coding. Suppose that, given the learning algorithm 𝒜 and the codebook ℋ_U,N, a (potentially stochastic) source encoder ℰ: 𝒵^n ×𝒲→ [N] chooses an index K=ℰ(S,𝒜(S))∈[N]. This potentially stochastic choice can depend, among others, on Q and also on P_W|S (if known). For example, in Minimum Random Coding (MRC) <cit.> (known also as importance sampling), 𝒲̂=𝒲 and the probability of choosing index j∈[N] is proportional to (dP_W|S/dQ)(W̃_U[j]). However, in practice (in SGD), the induced conditional distribution P_W|S is not known. Hence, in this work, in contrast to <cit.>, we assume that the encoder cannot explicitly depend on P_W|S. Overall, given 𝒜 and Q, the joint learning and source coding algorithm, by taking S and the generated codebook ℋ_U,N, chooses the model W̃_U[K]=W̃_U[ℰ(S,𝒜(S))]. In our framework, in addition to the chosen index K, we allow sending further precision W_ϵ∈𝒲. This further precision, together with W̃_U[K], can alternatively be seen as source coding using a more "refined codebook". In the case when W_ϵ=W- W̃_U[K], W_ϵ is called the "full precision" model, and when W_ϵ=0, it is called the "no precision" model. In practice, W_ϵ is a quantized version of the difference W- W̃_U[K]. For simplicity, we always assume that given (K,W,U), W_ϵ is chosen deterministically, (W̃_U[K]+W_ϵ)∈𝒲, and ‖W_ϵ‖≤‖W- W̃_U[K]‖. The server, by having access to the codebook ℋ_U,N and by receiving the index K∈[N] and the further precision W_ϵ∈𝒲, uses the decoder 𝒟: [N] ×𝒲→𝒲 with the following simple rule 𝒟(K,W_ϵ) = W̃_U[K]+W_ϵ.
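To make these objects concrete, the following toy sketch (not the construction of the cited works and not the scheme analyzed below) instantiates the pipeline in numpy: a shared seed plays the role of the common randomness U, the codebook ℋ_U,N is drawn i.i.d. from a Gaussian prior Q = 𝒩(0, σ_p^2 I), the encoder selects an index with importance-sampling-style weights given by the density ratio of a client-chosen Gaussian kernel 𝒩(W, σ_q^2 I) to Q (so that P_W|S is never used), and the decoder applies 𝒟(K,W_ϵ) = W̃_U[K] + W_ϵ. The dimensions, codebook size, widths, and the plain categorical sampling rule are all assumptions of this sketch; the precise sampler (MRC or ORC) matters for the guarantees discussed next.

```python
import numpy as np

d, N_cb = 8, 1024                 # model dimension and codebook size (toy values)
seed = 1234                       # shared randomness U: both sides seed the same PRNG
sigma_q, sigma_p = 0.3, 1.0       # client-chosen quantization width and prior scale

def make_codebook(seed):
    # common codebook H_{U,N}: N_cb i.i.d. draws from the shared prior Q = N(0, sigma_p^2 I)
    rng = np.random.default_rng(seed)
    return sigma_p * rng.standard_normal((N_cb, d))

def encode(W, codebook, rng):
    # importance-sampling-style index choice with weights proportional to the density ratio
    # of the client-chosen quantizer N(W, sigma_q^2 I) to the prior Q; P_{W|S} is never used
    log_p = -np.sum((codebook - W) ** 2, axis=1) / (2 * sigma_q ** 2) - d * np.log(sigma_q)
    log_q = -np.sum(codebook ** 2, axis=1) / (2 * sigma_p ** 2) - d * np.log(sigma_p)
    w = log_p - log_q
    w -= w.max()
    probs = np.exp(w)
    probs /= probs.sum()
    return rng.choice(N_cb, p=probs)

def decode(K, W_eps, codebook):
    # server-side rule D(K, W_eps) = W~_U[K] + W_eps
    return codebook[K] + W_eps

rng = np.random.default_rng(0)
W = rng.standard_normal(d)                      # stand-in for the locally trained model A(S)
codebook = make_codebook(seed)                  # identical on client and server
K = encode(W, codebook, rng)
W_eps = np.zeros(d)                             # "no precision"; W - codebook[K] would be "full precision"
W_hat = decode(K, W_eps, codebook)
print("reconstruction error ||W - D(K, W_eps)||:", np.linalg.norm(W - W_hat))
```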
Note that for ease of notation, the dependencies of ℰ and 𝒟 on the codebook ℋ_U,N are dropped in the notations. We further assume that for all (W,U), Δ_U(W,K) ≜ ‖W- W̃_U[K]‖-‖W- 𝒟(K,W_ϵ)‖ , is non-negative and increases with ‖W_ϵ‖. The rationale behind this assumption is that in a proper source coding scheme, the larger ‖W_ϵ‖, the finer the decoded codeword 𝒟(K,W_ϵ) should be, or in other words, the closer 𝒟(K,W_ϵ) should be to W. Therefore, the second term decreases as ‖W_ϵ‖ increases, which makes Δ_U increase with ‖W_ϵ‖. The recovered model 𝒟(K,W_ϵ) depends on both the learning algorithm 𝒜(S) and the encoder ℰ(S,W), as well as the sent further precision W_ϵ. In this work, we propose two P_W|S-agnostic encoding schemes and, for each of them, we analyze both the performance of the model 𝒟(K,W_ϵ) and the required communication cost. More precisely, we analyze jointly i. the empirical risk of the model 𝒟(K,W_ϵ), ii. the generalization error of the model 𝒟(K,W_ϵ), iii. the communication rate needed to send the index picked by the encoder ℰ(S,W), iv. and the effect of the further precision W_ϵ on the communication cost, generalization error, and empirical risk. Note that by studying the generalization error and the empirical risk, we indirectly provide an analysis of the population risk of the model 𝒟(K,W_ϵ). § IN-EXPECTATION PERFORMANCE In this section, we propose a source encoder based on the Ordered Random Coding (ORC) method and provide an in-expectation guarantee for the performance of the joint training and source coding scheme. ORC is a variant of MRC, with “near-optimal” in-expectation communication cost, that is proposed in <cit.> to overcome the communication cost of MRC, which grows with N. However, applying the ORC scheme directly to our problem setup would require the knowledge of P_W|S. Moreover, for deterministic learning algorithms 𝒜(S), the induced distribution P_W|S is degenerate. As a result, both “vanilla” MRC and ORC fail in our case. To overcome this shortcoming, here, we consider an ORC-based encoder that uses an arbitrary quantization rule P_Ŵ|W that can be chosen by the client, instead of P_W|S which is induced by 𝒜(S). The performance of this modified ORC-based source encoder jointly with the learning algorithm 𝒜(S) is investigated in the following result. Suppose that the learning algorithm 𝒜(S) induces P_W|S. Suppose the loss function is 𝔏-Lipschitz, |ℓ(z,w)-ℓ(z,w')|≤𝔏‖w-w'‖ for all w,w'∈𝒲 and z ∈𝒵. Consider a quantization set 𝒲̂⊆𝒲. Then, for any prior Q defined over 𝒲̂, there exists a proper source encoder ℰ(S,W)=K, agnostic to P_W|S, such that for every t>0 the following conditions hold simultaneously. i. [Empirical risk] For every (S,W), with probability at least 1-2√(b_W), 𝔼_K [ℒ̂(S,𝒟(K,W_ϵ))] ≤𝔼_K[ℒ̂(S,W)]+ 2𝔏√(𝔼_Ŵ∼ P_Ŵ|W[‖W-Ŵ‖^2] b_W)-𝔏𝔼_K[Δ_U(W,K)]/1-√(b_W), where 𝔼_K[·] denotes the expectation with respect to the stochasticity of the encoder and b_W ≜ e^-t/4+2√(ℙ_Ŵ(log dP_Ŵ|W/dQ>D_KL(P_Ŵ|W‖Q)+t/2)), where dP_Ŵ|W/dQ_Ŵ is the Radon–Nikodym derivative of P_Ŵ|W with respect to Q_Ŵ and depends on W. ii. [Generalization error] With probability at least 1-δ,[Note that D_KL(P_Ŵ|W‖Q) = 𝔼_Ŵ∼ P_Ŵ|W[log ( dP_Ŵ|W/dQ_Ŵ)] is a random variable that depends on W.] 𝔼_K[ gen(S,𝒟(K,W_ϵ))] ≤ √((D_KL(P_Ŵ|W‖Q)+t+log(√(2n)/δ))/(2n-1)+ε), where ε≤ inf_α_Wsup_ν_W ∈𝒢_δ(W){𝔼_W∼ν_W[ℙ(𝔼_K[‖W_ϵ‖] > α_W)+4𝔏α_W ]} ≤ ε_δ. Here, 𝒢_δ(W) is the set containing all distributions ν_W over 𝒲 such that D_KL(ν_W‖P_W) ≤log(1/δ) and ε_δ ≜ sup_ν_W ∈𝒢_δ(W){ 2𝔼_W∼ν_W[b_W] +8√(𝔏ε_ν_W𝔼_W∼ν_W[b_W])}, ε_ν_W ≜ √(𝔼_W,Ŵ∼ν_W P_Ŵ|W[‖W-Ŵ‖^2]). iii. 
[Communication rate] The expected communication cost (over S and W) of sending W̃_U[K] is no larger than C+log(C+1)+4, where C𝔼_W∼ P_W[D_KL(P_Ŵ|WQ)]. Moreover, such source encoder can be constructed by ORC using the distributions P_Ŵ|W and Q. The theorem is proved in Section <ref>. Here, we make a few remarks about this result. First, the bound on empirical risk in part i is composed of two terms. The first term is 𝔼_K[ℒ̂(S,W)] and it can be made small at the transmitter side. The second term can be made small (note that D_KL(P_Ŵ|WQ)=𝔼[log( P_Ŵ|W/ Q)]) either by sending “more precision” (as discussed below) or by increasing t which results in more computational complexity due to the increase of N_W in (<ref>) (see <cit.>). However, increasing t results in increasing the generalization bound and, as can be verified, in slightly increasing the optimal communication rate. Second, the bounds on the generalization error and the communication cost can be made small by minimizing the quantity D_KL(P_Ŵ|WQ). Hence, by minimizing this term one can guarantee both good generalization performance and a low communication rate. This shows that there exists an alignment between these two criteria. This finding suggests that using D_KL(P_Ŵ|WQ) as a regularizer jointly improves the generalizability and the needed communication rate. This is why in Fig <ref> we assumed that the learning algorithm 𝒜 may have access to Q which is used primarily for the source codebook generation. We note that a similar finding could be achieved for the recovered model using the ORC encoder with respect to P_W|S and some Q_W. Third, in part i, it can be shown that as Δ_U increases (equivalently as W_ϵ increases), the bound on the empirical risk decreases. Hence, sending further precision will improve (reduce) the empirical risk bound. In contrast, from the bound on ε in part ii., it can be verified that ε=0 for no-precision case, and in general sending more “precision” affects negatively the generalization bound. Moreover, increasing W_ϵ naturally results in more communication overhead. Hence, overall, more precision will benefit the empirical risk guarantee, while having a negative effect on the generalization bound and the communication cost. Fourth, it is easy to show that similar to the proof of part ii, the following bound on 𝔼_W∼ P_W|S𝔼_K[(S,W)] holds with probability at least 1-δ, √(C_S+t_S+log(√(2n)/δ)/2n-1+ε_S), where C_S 𝔼_W∼ P_W|S[D_KL(P_Ŵ|WQ)], t_S min(t,log(C_S+1)+4), ε_S 2𝔼_W∼ P_W|S[b_W] +8√(𝔏ε_P_W|S𝔼_W∼ P_W|S[b_W]). A similar bound can be established when the expectation with respect to (S,W)∼ P_S,W is considered. Furthermore, it can be verified that similarly to part iii, for a given S, the expected communication rate of C_S+log(C_S+1)+4 can be achieved. Finally, in our scheme, ORC is performed using the densities P_Ŵ|W and Q in a way that is agnostic to P_W|S. This is a more practical result than the result of, <cit.>, in which probability distribution P_W|S induced by the learning algorithm should be known. Furthermore, using ORC allowed us to establish part iii. on the communication rate. A similar result cannot be established using the scheme of <cit.>, as they use MRC <cit.> in which the communication complexity grows linearly with t. The reader is referred to <cit.> for more details. § ONE-SHOT PERFORMANCE The scheme proposed in the previous section provides an “in average” guarantee for the empirical risk and the generalization error of the recovered model at the receiving side. 
This is a relevant criterion mainly for multi-round schemes like federated learning. However, this may be insufficient for the one-shot scenario which is investigated in this section. More precisely, we study here the performance of a “vector quantizer encoder” defined as follows. Let N be a fixed integer and ℰ_VQ(S,W) = argmin_n ∈ [N]‖W-W̃_U[n]‖. For this encoder, the following result holds. Consider the setup of Theorem 1. Fix some ϵ>0. Let K=ℰ_VQ(S,W). i. [Empirical risk] With probability at least 1-τ_ϵ, ℒ̂(S,𝒟( K,W_ϵ)) ≤ℒ̂(S,W)+ 2 𝔏 (ϵ-Δ_U(W,K)), where τ_ϵ is defined as inf{𝔼_W[ℙ((W,Ŵ) ∉ℱ_ϵ| W)^N_2]+N_2 e^-exp(γ) + N_2ℙ_W,Ŵ((W,Ŵ)∉ℐ_N_1,γ)}. In this definition, ℐ_N_1,γ ≜ {(w,ŵ): dP_Ŵ|W=w/dQ_Ŵ (ŵ) ≤log(N_1)-γ}, ℱ_ϵ ≜ {(w,ŵ): ‖w-ŵ‖≤ϵ}, and the infimum is over all γ >0, all Markov kernels P_Ŵ|W, and all N_1,N_2 that satisfy the following conditions: a) N_1 × N_2 ≤ N, b) for all W, 1< λ^N_2+N_2 (1-λ) holds for λ =ℙ((W,Ŵ)∉ℱ_ϵ|W), c) ℙ((W,Ŵ)∉ℱ_ϵ∩ℐ_N_1,γ|W) +e^-exp(-γ)≤ 1. ii. [Generalization error] With probability at least 1-δ-τ_ϵ, gen(S,𝒟(K,W_ϵ)) ≤√((log(N)+log(1/δ))/(2n)) +2𝔏 ϵ. The theorem is proved in Section <ref>. The significance of this theorem is that it does not consider the expectation with respect to the encoder, in contrast to Theorem <ref>. It should be further noted that the scheme is agnostic to the distribution P_Ŵ|W which appears only in the “failure probability” analysis. § PROOF OF THEOREM <REF> Denote ρ_W(Ŵ) ≜ dP_Ŵ|W/dQ_Ŵ, where, for better clarity, Q is denoted by Q_Ŵ. Furthermore, fix some t>0 and for any w∈𝒲, define N_w ≜ e^L_w+t, L_w ≜ D_KL(P_Ŵ|W=w ‖ Q_Ŵ). We start by defining the stochastic source encoder ℰ(S,W). Fix some (W,U). Let G_n, n∈[N_W] be i.i.d. instances from the Gumbel distribution <cit.> with scale 1. Denote their ordered sequence by G̃_1,…,G̃_N_W, G̃_1 ≥⋯≥G̃_N_W. We define the encoder using the ORC method introduced in <cit.>. That is, ℰ(S,W)=K is chosen according to the following rule: K = argmax_n≤ N_W{logρ_W(W̃_U[n])+G̃_n}. Now, we analyze the performance of this encoder. Part i. Similar to the related proofs in <cit.>, this proof is also inspired by the ideas introduced in <cit.>. Using the definition of Δ_U(W,K) and the Lipschitz continuity assumption, to prove this part, it is sufficient to show that for every (S,W), with probability at least 1-2√(b_W), ‖W-W̃_U[K]‖≤2√(𝔼_Ŵ∼ P_Ŵ|W[‖W-Ŵ‖^2] b_W)/(1-√(b_W)). Due to <cit.>, the distribution of W̃_U[K] is the same as the one picked using MRC introduced in <cit.>. Hence, for this section, we consider the following MRC encoder: Given (W,U), let ℰ(S,W) pick the index i∈[N_W], with probability ρ_W(W̃_U[i])/∑_j∈[N_W]ρ_W(W̃_U[j]). Define ℑ(W) ≜ 𝔼_Ŵ∼ P_Ŵ|W[‖W-Ŵ‖] = 𝔼_U[1/N∑_i∈[N]‖W-W̃_U[i]‖ρ_W(W̃_U[i])], where the equality comes from the fact that for every i∈[N], W̃_U[i] ∼ Q_Ŵ. Furthermore, let ℑ_N(U,W) ≜ 1/N∑_i∈[N]‖W-W̃_U[i]‖ρ_W(W̃_U[i]). The dependence of ℑ_N(U,W) on U shows the dependence on a given draw of the codebook, using the common randomness U. We first claim that for each w, 𝔼_U [| ℑ_N(U,w)-ℑ(w) |] ≤σ_0,w b_w , where σ_0,w^2 ≜ 𝔼_Ŵ∼ P_Ŵ|W=w[‖w-Ŵ‖^2]. This claim can be proved using <cit.>, as shown in Appendix <ref>. Now, due to the choice of the encoder, we have 𝔼_K[‖W-W̃_U[K]‖]= ∑_i∈[N]‖W-W̃_U[i]‖ρ_W(W̃_U[i]) /∑_i∈[N]ρ_W(W̃_U[i]) = ℑ_N(U,W)/(1/N∑_i∈[N]ρ_W(W̃_U[i])). Then, on the one hand, for any ϵ_w>0, (<ref>) implies that ℙ(|ℑ_N(U,w)-ℑ(w)|> τ_w) ≤σ_0,w b_w/τ_w, and on the other hand, due to <cit.>, ℙ(|1/N∑_i∈[N]ρ_w(W̃_U[i]) -1 |> ϵ_w)≤ b_w/ϵ_w. 
Moreover, whenever |ℑ_N(U,w)-ℑ(w)|≤τ_w and 1/N∑_i∈[N]ρ_w(W̃_U[i]) -1 ≤ϵ_w, we have 𝔼_K[w-W̃_U[K]] ≤τ_w+ℑ(w)ϵ_w/1-ϵ_w. Let τ_w=σ_0,wϵ_w. Then, for every w, with probability at least 1-2b_w/ϵ_w, we have 𝔼_K[w-W̃_U[K]] ≤2σ_0,wϵ_w/1-ϵ_w. Letting ϵ_w=√(b_w) completes the proof of this part. Part ii. The generalization bound can be established using <cit.>. To show this, it is sufficient to show that the learning algorithm Ã𝒵^n →𝒲̂, defined as Ã(S) = 𝒟(K,W_ϵ)=W̃_U[ℰ(S,𝒜(S))]+W_ϵ, is (log(N_W),ε_,δ;d_m)-compressible in the sense defined in <cit.>. The condition on the rate is trivial due to the way the codebook ℋ_U,N is constructed and the encoder is defined. It remains to bound the distortion constraint ε=sup_ν_W∈𝒢_δ(W)𝔼_W,K[(S,𝒟(K,W_ϵ))^2-(S,W̃_U[K])^2]. Now, for any α_W and any distribution ν_W ∈𝒢_δ(W), we have ε_α_W,ν_W𝔼_W,K[(S,𝒟(K,W_ϵ))^2-(S,W̃_U[K])^2] (a)≤ 𝔼_W[ℙ(𝔼_KW_ϵ > α_W )]+2× 𝔼_W,K[|(S,𝒟(K,W_ϵ))-(S,W̃_U[K])|| 𝔼W_ϵ≤α_W ] (b)≤ 𝔼_W[ℙ(𝔼_KW_ϵ > α_W )]+4𝔏𝔼_W[α_W], where W∼ν_W, (a) is concluded since ℓ(z,w) ∈ [0,1], and (b) is derived by Lipschitzness and since W_ϵ=𝒟(K,W_ϵ)-W̃_U[K]. Now, combining the above inequality with ε≤inf_α_Wsup_ν_W ∈𝒢_δ(W)ε_α_W,ν_W, proves the first bound on ε. Next, note that by assumption W_ϵ≤W-W̃_U[K]. Hence, ε_α_W,ν_W is upper bounded further by 𝔼_W[ℙ(𝔼_K[W-W̃_U[K](U)] >α_W )]+4𝔏𝔼_W[α_W]. Now, using (<ref>) by letting α_W ↦2σ_0,Wϵ_W/1-ϵ_W and ϵ_W = √(b_W)/√(b_W)+√(4𝔏σ_0,W), we have ε_α_W,ν_W≤ 2𝔼_W[b_W/ϵ_W] +8𝔏 𝔼_W[σ_0,Wϵ_W/1-ϵ_W] = 2𝔼_W[b_W] +8√(𝔏)𝔼_W[√(b_W σ_0,W)] ≤ 2𝔼_W[b_W] +8√(𝔏)√(𝔼_W[b_W] 𝔼_W[σ_0,W]) ≤ 2𝔼_W[b_W] +8√(𝔏ε_ν_W𝔼_W[b_W]) . This concludes that ε≤ε_δ, which completes the proof. Part iii. This part can be concluded similar to <cit.>. The only difference with <cit.> is that here, we allow different sizes of the codebook N_W. The proof however remains the same using <cit.>. § PROOF OF THEOREM <REF> Part i. Using the Lipschitz continuity assumption and the definition of Δ_U[K,W], it suffices to show that ℙ( W-W̃_U[K] > ϵ) ≤τ_ϵ. Note that ℙ ( W-W̃_U[K] > ϵ) =ℙ(∀ i∈[N] (W,W̃_U[i])∉ℱ_ϵ). For simplicity denote ℐℐ_N_1,γ. Now, first using the proof of <cit.>, for any j∈[N_2] and a given W, we have ℙ(∀ i∈ 𝒩_j (W,W̃_U[i])∉ℱ_ϵ|W ) ≤ ℙ((W,Ŵ)∉ℱ_ϵℐ|W) +e^-exp(-γ), where 𝒩_j={(j-1)N_2+1,…,(j-1)N_2+N_1}. Hence, ℙ(∀ i∈ [N] (W,W̃_U[i])∉ℱ_ϵ|W ) ≤ ( ℙ((W,Ŵ)∉ℱ_ϵℐ|W) +e^-exp(-γ))^N_2 = ( ℙ((W,Ŵ)∉ℱ_ϵ|W)+ℙ((W,Ŵ)∈ℱ_ϵ∖ℐ|W) +e^-exp(-γ))^N_2 (a)≤ ℙ((W,Ŵ)∉ℱ_ϵ|W)^N_2 +N_2 ( ℙ((W,Ŵ)∉ℐ|W) +e^-exp(-γ)), where the last step holds using the union bound, the conditions of the theorem, and due to the inequality (x+y)^n ≤ x^n+ny, that holds whenever x,y ≥ 0, x+y ≤ 1, and 1 ≤ x^n+n(1-x). Finally averaging with respect to W yields the result. Part ii. The proof of this part follows from the Lipschitz assumption, the fact that ℓ(z,w)∈[0,1], and part i, similar to part ii of Theorem <ref>. IEEEtran § PROOF OF THE CLAIM (12) In this appendix, we prove the claim (<ref>). Fix some w∈𝒲 and for ease of notations, let a e^L_w+t/2. Define w-x_-a as follows: w-x_-a =0, whenever ρ_w(x)> a, and equals w-x otherwise. Now, we can bound 𝔼_U[| ℑ_N(U,w)-ℑ(w) |] as following: 𝔼_U[| ℑ_N(U,w)-ℑ(w) |] ≤ 𝔼_U[| ℑ_N(U,w)-1/N∑_i∈[N]w-W̃_U[i]_-aρ_w(W̃_U[i])|] + | 𝔼_Ŵ∼ P_Ŵ|W=w[w-Ŵ_-a]-ℑ(w) | +𝔼_U[| B |], where in (<ref>), B 1/N∑_i∈[N]w-W̃_U[i]_-aρ_w(W̃_U[i]) -𝔼_Ŵ∼ P_Ŵ|W=w[w-Ŵ_-a]. 
The term (<ref>) is bounded by 𝔼_U [| ℑ_N(U,w)-1/N∑_i∈[N]w-W̃_U[i]_-aρ_w(W̃_U[i]) |] (a)≤ 𝔼_U[1_{ρ_w(W̃_U[1])>a}w-W̃_U[1]ρ_w(W̃_U[1]) ] = 𝔼_Ŵ∼ P_Ŵ|W=w[1_{ρ_w(Ŵ)>a}w-Ŵ] (b)≤ √(𝔼_Ŵ∼ P_Ŵ|W=w[w-Ŵ^2] ℙ_Ŵ(ρ_w(Ŵ)>a)) = σ_0,w√(ℙ_Ŵ(ρ_w(Ŵ)>a)), where (a) is derived using the definition of I_n(U,w) and since W̃_U[i] are generated in an i.i.d. manner and (b) is derived using the Cauchy–Schwarz inequality. Similarly (<ref>) can be upper bounded as 𝔼_U[| 𝔼_Ŵ∼ P_Ŵ|W=w [w-Ŵ_-a]-I |] ≤σ_0,w√(ℙ_Ŵ(ρ_w(Ŵ)>a)). Finally the last term (<ref>) squared can be upper bounded as 𝔼_U[| B |]^2 ≤ Var(1/N∑_i∈[N]w-W̃_U[i]_-aρ_w(W̃_U[i]) ) ≤ Var(w-W̃_U[1]_-aρ_w(W̃_U[1])])/N ≤ 𝔼_U[ w-W̃_U[1]_-a^2 ρ_w(W̃_U[1])^2]] /N ≤ a 𝔼_U[ w-W̃_U[1]^2 ρ_w(W̃_U[1])]/N = a 𝔼_Ŵ[ w-Ŵ^2 ]/N = a σ_0^2 /N. This completes the proof of (<ref>).
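As a numerical illustration of the claim just proved, the short sketch below estimates 𝔼_U[|ℑ_N(U,w)−ℑ(w)|] by Monte Carlo in a one-dimensional toy setting, assuming a Gaussian prior Q and a Gaussian quantization kernel P_Ŵ|W=w; all parameter values are arbitrary choices of the sketch. The observed gap shrinks as the codebook size N grows, consistent with the bound σ_0,w b_w, since b_w decreases with t and N_w = e^L_w+t.

```python
import numpy as np

rng = np.random.default_rng(0)
w = 1.0                                   # a fixed model w (one-dimensional toy)
q_std, p_std = 1.0, 0.3                   # prior Q = N(0, 1), kernel P_{W_hat|W=w} = N(w, 0.3^2)

def density_ratio(x):
    # dP_{W_hat|W=w}/dQ evaluated at x, for the two Gaussians above
    log_p = -0.5 * ((x - w) / p_std) ** 2 - np.log(p_std)
    log_q = -0.5 * (x / q_std) ** 2 - np.log(q_std)
    return np.exp(log_p - log_q)

# ground truth I(w) = E_{W_hat ~ P}[|w - W_hat|] for a normal centred at w
I_true = p_std * np.sqrt(2.0 / np.pi)

for N in [10, 100, 1000, 10000]:
    gaps = []
    for _ in range(200):                  # average over draws of the common randomness U
        cb = rng.normal(0.0, q_std, size=N)            # codebook drawn i.i.d. from Q
        I_N = np.mean(np.abs(w - cb) * density_ratio(cb))
        gaps.append(abs(I_N - I_true))
    print(N, np.mean(gaps))               # E_U |I_N - I| shrinks as N grows
```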
http://arxiv.org/abs/2406.08016v1
20240612091149
Studying $π^+π^-$ photoproduction beyond Pomeron exchange
[ "Łukasz Bibrzycki", "Nadine Hammoud", "Vincent Mathieu", "Robert J. Perry", "Alex Akridge", "César Fernández-Ramírez", "Gloria Montaña", "Alessandro Pilloni", "Arkaitz Rodas", "Vanamali Shastry", "Wyatt A. Smith", "Daniel Winney", "Adam P. Szczepaniak" ]
hep-ph
[ "hep-ph", "hep-ex" ]
JLAB-THY-24-4078 feynDiags § ABSTRACT Forward photoproduction of π^+π^- pairs with invariant mass of the order of m_ρ∼ 770 MeV is traditionally understood to be produced via Pomeron exchange. Based on a detailed analysis of the CLAS photoproduction data, it is shown that the dynamics of two-pion photoproduction for |t|≳ 0.5 GeV^2 cannot be explained by Pomeron exchange alone. This motivates the development of a new theoretical model of two-pion photoproduction which incorporates both two-pion and pion-nucleon resonant contributions. After fitting free parameters, the model provides an excellent description of the low moments of the angular distribution measured at CLAS, and enables an assessment of the relative contributions of particular production mechanisms and an interpretation of the various features of the data in terms of these mechanisms. lukasz.bibrzycki@agh.edu.pl nadine.hammoud.28@gmail.com Institute of Nuclear Physics, Polish Academy of Sciences, PL-31-342 Kraków, Poland perryrobertjames@gmail.com Joint Physics Analysis Center Studying π^+π^- photoproduction beyond Pomeron exchange Adam P. Szczepaniak0000-0002-4156-5492 June 17, 2024 ======================================================= § INTRODUCTION Two-pion photoproduction has long been a reaction of interest for studies of hadron spectroscopy. Since free pion targets are difficult to obtain, multipion hadro- and photoproduction measurements are necessary to understand the spectrum of light meson resonances. In recent years, the field of hadron spectroscopy has experienced a revolution due to the observation of a number of resonances in the heavy sector which do not fit into the conventional quark model (for reviews, see <cit.>). The existence of such exotic states, although long heralded <cit.>, has nonetheless recently motivated further experimental studies of light-meson spectroscopy <cit.>. This modern high-precision data necessitates sophisticated amplitude analysis methods to extract the full physics content. The study of the two-pion final state has significant theoretical value because the current understanding of this reaction is primarily limited to the region of small momentum transfer between the impinging photon and the produced π^+π^- system (|t|≲ 0.4 GeV^2). In the limit where the total squared center-of-mass energy, s becomes large while the momentum transfer, t remains small, it is natural to describe the data using Regge theory <cit.>. This theory predicts that in this kinematical region the scattering amplitude is dominated by Regge exchanges, leading to asymptotic energy dependence of the form s^α(t), where α(t) is known as the Regge trajectory. These trajectories may be calculated from low-energy properties of resonances in the related t-channel process. The asymptotic behaviour of the scattering amplitude, and via the optical theorem, the total cross section, is determined by the Regge trajectory with the largest t=0 intercept. A fit to high-energy data for the total cross section for a range of hadronic reactions yields a value α(t=0)≈1 <cit.>. This trajectory is known as the Pomeron (ℙ), and provides an explanation within Regge theory of the approximately constant hadronic total cross sections at large center-of-mass energies. Several features of two-pion photoproduction measurements at small momentum transfer are easily understood in terms of Pomeron exchange. Firstly, in this kinematic region (|t|≲ 0.4 GeV^2), the process is known to be P-wave dominated with a prominent ρ(770) peak. 
The line-shape of the ρ(770) in the two-pion spectrum appears deformed with respect to pion-initiated ρ(770) production <cit.> or to the electromagnetic pion form factor <cit.>. This feature is effectively described by the or Deck models <cit.>. Secondly, from a study of the spin-density matrix elements (SDMEs) for and , it has been shown that the produced ρ(770) resonance inherits the helicity of the incoming photon in the s-channel center-of-mass frame <cit.>. This phenomenon is known as s-channel helicity conservation (SCHC). Regge theory provides a natural explanation for SCHC because it predicts the factorization of the Regge pole residue into two vertices, which are proportional to (-t)^n/2 near the forward limit, where in this case n is the net helicity flip between the photon and the ρ(770) <cit.>. Thus, Regge theory provides a natural explanation of the dominance of n=0 helicity-conserving processes in the forward (t→0) limit. Recently, high-precision measurements at higher photon energies (E_γ≈8.5 GeV) show that SCHC approximately holds in the forward limit <cit.>, with increasing violations at larger momentum transfer. Finally, from this reaction, it is possible to extract the cross section for the process γ p →ρ^0 p. Regge models in which ρ(770) photoproduction is assumed to be saturated by Pomeron exchange alone reproduce the slope of the differential cross section for |t|≲ 0.4 GeV^2, while data for |t|≳ 0.4 GeV^2 suggest that additional Reggeon exchanges are also required. This is shown in <ref>, where the Regge model employed in Ref. <cit.> which incorporates both Pomeron- and Reggeon-exchange contributions is compared with experimental data. Thus at small momentum transfer (|t|≲ 0.4 GeV^2), a relatively simple picture of the process emerges, in which at least three of the prominent features of the data (the ρ(770) lineshape, SCHC and the t-dependence of the ρ(770) photoproduction differential cross section) can be explained by a model which incorporates both a Pomeron-induced ρ(770) resonant amplitude and a nonresonant Deck background. Less is known about the reaction at larger momentum transfers (|t|≳0.4 GeV^2). Theoretically speaking, the validity of the Pomeron exchange picture at larger t is questionable, since this is no longer in the kinematical limit where Regge theory is applicable. In addition, while a good description of the ρ(770) is essential to reproduce the two-pion lineshape, the Particle Data Group (PDG) <cit.> also lists several other light meson resonances with quantum numbers and masses which can be expected to contribute to the experimental cross section. While their influence on the cross section is subleading with respect to the ρ(770), precise data in this region offer the prospect of further study of these resonances and their associated production mechanisms. Rather than working directly with the multidimensional differential cross sections, it is possible to use the set of moments of the angular distribution, Y^L_M. These moments are rigorously defined observables and are bilinear in the partial wave amplitudes. Results for the experimentally measured low angular moments for two-pion photoproduction were presented in Ref. <cit.> at photon energies of E_γ =2.8 and 4.7 GeV and a momentum transfer range of 0.02 < |t| < 0.4 GeV^2. The moments ⟨Y_0^0|$⟩,⟨Y^2_0|$⟩, and ⟨Y_2^2|$⟩ revealed a prominent peak in the two-pion invariant mass distribution corresponding to theP-wave contribution, primarily associated with theρ(770)meson. 
However, the angular moment⟨Y_0^1|$⟩ was relatively small, indicating that other resonance contributions are small in this kinematic region. A reasonable description of these low angular moments was obtained by employing a model <cit.>. The more recent CLAS dataset <cit.> analyzed in this work covers a range of larger momentum transfers, of 0.4 < |t| < 1.0 GeV^2, for similar photon energies and two-pion invariant masses. A characteristic subset of the data is shown in Fig. <ref>. The presence of the ρ(770) resonance is again clear in ⟨Y_0^0|$⟩,⟨Y^2_0|$⟩, and ⟨Y_2^2|$⟩; however, the more precise data suggests some evidence that a broad enhancement is present at higher two-pion invariant masses (√(s_12)∼1.2 GeV). There are at least two relevant known resonances in this mass region: thef_2(1270)andf_0(1370). In contrast to the older data of Ref. <cit.>, a detailed analysis of the angular moments⟨Y^1_0|$⟩ and ⟨Y^1_1|$⟩ around1 GeVhas been interpreted as evidence for the presence of thef_0(980)<cit.> resonance. The complex interference patterns exhibited by this precise modern data present challenges for models of two-pion photoproduction, including the Regge models of the type mentioned above. In particular, it will be shown that a model which incorporates only a Pomeron-induced resonant amplitude and a non-resonantP-wave amplitude cannot reproduce the higher precision angular moments measured at larger momentum transfer (|t|≳0.4 GeV^2). The failure of this simple model to describe the angular moments motivates the development of a more detailed description of two (charged) pion photoproduction, which is valid at intermediate momentum transfers (0.4 GeV^2<|t|<1.0 GeV^2). The model parameters are determined from a global fit of the available experimental data for angular moments up toL=2andM=2. The angular moments fulfill⟨Y^L_M|=⟩(-1)^M⟨Y^L_-M|$⟩, thus only moments for M≥0 are considered. The resulting model provides a good description of the angular moments for all t-bins studied. By examining the fitted model in more detail, it is observed that the three features mentioned above — the deformation of the ρ(770) lineshape, SCHC and the t-dependence of the ρ(770) photoproduction differential cross section — cannot be explained by simple models which incorporate only Pomeron exchange. Having fit the model to the low angular moments, the model is validated by comparison to the higher angular moments in the two-pion channel. This paper is organized as follows: in <ref>, kinematical variables and key observables are defined, in <ref> the theoretical model is developed, and in <ref> the fits to the data and the model predictions are presented. Finally, in <ref> the conclusions are discussed, and possible further studies are suggested. § KINEMATICS AND ANGULAR MOMENTS The following process is considered: γ(q,λ_γ)+p(p_1,λ_1)→π^+(k_1)+π^-(k_2)+p(p_2,λ_2). The helicities of the particles are defined in the π^+π^- helicity frame, where the two-pion three-momenta are related via and the recoiling proton (𝐩_2^H) defines the negative z-axis. The reaction (x-z) plane is defined by the three-vectors of the photon, target and recoiling proton. Then the unit vector normal to this plane defines the y-axis, that is, . With this choice of axes, define the angles of the π^+. The orientation of the helicity frame with respect to relevant kinematic variables is shown in <ref>. The helicity amplitudes for a 2→3 process are maximally described by five independent kinematic variables. 
Thus in addition to the two angles for the π^+, the following kinematic invariants shall be used: s =(p_1+q)^2=(p_2+k_1+k_2)^2, t =(p_1-p_2)^2=(k_1+k_2-q)^2, s_12 =(k_1+k_2)^2=(p_1-p_2+q)^2. The set of kinematic variables, (s,t,s_12,Ω^H) is complete, in the sense that all other kinematic invariants may be computed by knowing these five. Further details on the kinematics and the definitions of additional kinematic invariants are given in <ref>. The differential cross section is given by dσ/dt d√(s_12) dΩ^H= κ∑_λ_1λ_γλ_2|ℳ_λ_γλ_1 λ_2(s,t,s_12,Ω^H)|^2, where κ contains the kinematical factors and is given by[The normalization of the κ defined here is related to the version defined in Ref. <cit.>, κ^prev by κ=(2π)κ^prev/2. This choice is made to keep the notation for the unpolarized moments simple.] κ=1/81/(2π)^4λ^1/2(s_12,m_π^2,m_π^2)/16√(s_12)(s-m_N^2)^2, where λ(a,b,c)=a^2+b^2+c^2-2(ab+ac+bc) is the Källén function. The angular moments of this distribution are defined as ⟨Y^L_M|=⟩√(4π)∫ dΩ^Hdσ/dt d√(s_12) dΩ^HY^L_M(Ω^H), where the normalization is chosen to ensure that the moment ⟨Y_0^0|$⟩ is equal to the integrated cross section, i.e.,[These moments may be related to the H_LM^0 defined in Ref. <cit.> via Y_M^L = 2π√(2L+1)H_LM^0. Only the real part of the spherical harmonics is retained in the definition, as the imaginary part identically vanishes for parity-conserving interactions.] Y_0^0=dσ/dt d√(s_12). By decomposing the amplitude into partial-waves, ℳ_λ_γλ_1λ_2(s,t,s_12,Ω^H)=∑_lmℳ_λ_γλ_1λ_2 m^l(s,t,s_12)Y^l_m(Ω^H), it is possible to see that the angular moments are bilinear in the partial waves: ⟨Y^L_M| ⟩=√(4π)κ∑_lml^' m^'A^Lll^'_Mmm^' ×∑_λ_γλ_1λ_2ℳ_λ_γλ_1 m^'λ_2 ^l^' *(s,t,s_12)ℳ_λ_γλ_1 m λ_2 ^l(s,t,s_12), where A^Lll^'_Mmm^' =∫ dΩ^H Y^l_m(Ω^H)Y^l^'*_m^'(Ω^H) Y^L_M(Ω^H). Parity imposes relations on the helicity amplitudes. In particular, the amplitudes should obey ℳ_λ_γλ_1 Mλ_2 ^l(s,t,s_12) =(-1)^λ_2-λ_1+λ_γ-M ℳ_-λ_γ-λ_1-M -λ_2 ^l(s,t,s_12). This relation can be used to reduce by half the number of helicity amplitudes one must consider. In this work, the above symmetry will be used to relate the positive and negative photon polarizations. It will sometimes be advantageous to use spectroscopic notation for the partial-waves. To this end, the notation [l]_m(s,t,s_12)≡ℳ_+1λ_1 mλ_2 ^l(s,t,s_12), where[l]=S,P,D,…forl=0,1,2,…will be used. Note that the photon helicity is fixed to beλ_γ=+1and the dependence on the nucleon helicities in the[l]_m's is left implicit. The negative photon helicityλ_γ=-1can be obtained using Eq. (<ref>). The advantage of theπ^+π^-helicity frame lies in the fact that the helicity of the two-pion system coincides with themquantum number of the partial waves. Full expressions for these angular moments in terms of the partial waves are given in Appendix <ref>. § THE MODEL Since the focus of this work is theπ^+ π^-angular moments, particular emphasis is placed on the known low-energy resonances which decay toπ^+π^-. These contributions are modeled via a two-step process, whereby a resonance is produced fromt-channel scattering between the target nucleon and the photon beam, which then decays to form the two-pion final state. This generic process is depicted in <ref>. The most obvious of these resonances is the large enhancement at a two-pion invariant mass of√(s_12)∼0.77 GeVwhich may be attributed to the presence of theρ(770). However, a closer inspection of the two-pion angular moments suggests the presence of several other resonance-like structures. 
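Before turning to the individual resonant and nonresonant contributions, the moment definitions above can be made concrete with a small numerical sketch. The partial-wave values below are hypothetical placeholders for a single (s,t,s_12) point and a single helicity combination, and the flux factor κ is omitted; the purpose is only to show that each ⟨Y^L_M⟩ is a bilinear combination of the waves, so that ⟨Y^0_0⟩ returns the angle-integrated intensity, ⟨Y^2_0⟩ and ⟨Y^2_2⟩ are fed by the P-wave, and ⟨Y^1_0⟩, ⟨Y^1_1⟩ by S–P interference.

```python
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, l, azimuthal, polar)

# Hypothetical partial waves [l]_m at one (s, t, s12) point, single helicity set.
waves = {(0, 0): 1.0 + 0.0j,        # S-wave
         (1, -1): 0.3 - 0.1j,       # P-wave, m = -1
         (1, 0): 0.2 + 0.4j,        # P-wave, m =  0
         (1, 1): 0.8 + 0.2j}        # P-wave, m = +1

def intensity(polar, azim):
    amp = sum(c * sph_harm(m, l, azim, polar) for (l, m), c in waves.items())
    return np.abs(amp) ** 2

def moment(L, M, n=400):
    """<Y^L_M> = sqrt(4 pi) * Integral dOmega I(Omega) Re[Y^L_M(Omega)], kappa omitted."""
    polar = np.linspace(0.0, np.pi, n)
    azim = np.linspace(0.0, 2.0 * np.pi, n)
    th, ph = np.meshgrid(polar, azim, indexing="ij")
    f = intensity(th, ph) * np.real(sph_harm(M, L, ph, th)) * np.sin(th)
    return np.sqrt(4.0 * np.pi) * f.sum() * (polar[1] - polar[0]) * (azim[1] - azim[0])

for (L, M) in [(0, 0), (1, 0), (1, 1), (2, 0), (2, 2)]:
    print(L, M, round(moment(L, M), 3))
```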
In this work, the resonancesf_0(500),f_0(980),f_0(1370)andf_2(1270)are considered.[Some evidence exists for the presence of a radial excitation of the ρ in the kinematic region studied here <cit.>, although at present, this state is not included in the PDG, and is not included in this analysis.] In addition to these direct resonance contributions, the leading background <cit.> arises from the so-called Deck or mechanism <cit.>. Here a photon diffractively dissociates into a pion pair, with one of the pions being off-shell and brought on-shell by a further elastic scattering off the target proton. As a result, one anticipates the dominance of one-pion exchange. The resulting amplitude may be factorized into an electromagneticγππvertex and a2→2subprocess related to elasticπNscattering, which is well-known. This process is represented diagrammatically in <ref>. The full amplitude for this model may be written ℳ_λ_γλ_1 λ_2(s,t,s_12,Ω^H) =ℳ_λ_γλ_1 λ_2^R(s,t,s_12,Ω^H) + ℳ_λ_γλ_1λ_2^NR(s,t,s_12,Ω^H), where the first term describes the resonant component of the model, while the second term describes the nonresonant (Deck) component. In the following sections, these two components are described in detail. §.§ Resonant Amplitude The resonant contribution to the full amplitude may be written as a sum of individual lineshapes: ℳ_λ_γλ_1 λ_2^R(s,t,s_12,Ω^H)=∑_ℛℳ_λ_γλ_1 λ_2 ^ℛ(s,t,s_12,Ω^H). where the sum overℛruns over all resonances considered and listed in Table <ref>. Each of these amplitudes is assumed to be decomposable into the product of a production amplitude, which describes the formation of an approximately stable resonanceℛwith spinJ, and a decay amplitude, which describes the decay of the resonance to two pions: ℳ_λ_γλ_1 λ_2 ^ℛ (s,t,s_12,Ω^H) =∑_M=-J^J ℳ_λ_γλ_1 Mλ_2 ^γ p →ℛ p(s,t) A^ℛ(s_12)Y^J_M(Ω^H). In principle, it is possible for the production amplitude,ℳ_λ_1λ_γλ_2 M^γp →ℛ p(s,t), to depend on the two-pion invariant mass. However, in this work, all two-pion mass dependence is assumed to originate fromA^ℛ(s_12), which is an unconstrained dynamical function describing the two-pion invariant mass spectrum and is taken to be a distribution. More sophisticated approaches to the modelling of these resonances have been discussed extensively in the literature. Generally, modern spectroscopy analyses of the two-pion spectrum eschew the use of Breit-Wigner distributions in favor of a more theoretically sound approach. This point has been emphasized in particular in the study of the broadf_0(500)resonance <cit.>. In this work, the focus is on a global description of the angular moments, rather than a precise study of two-pion resonances. It is therefore not necessary to employ these more sophisticated modelling techniques. To complete the definition of the model, one must propose a form for the production amplitudes,ℳ_λ_1λ_γλ_2 M^γp→ℛ p. In the kinematic region considered here, it is necessary to go beyond effective field theoretic techniques which are appropriate near threshold <cit.>. Thus, a combination of Regge theory and effective Lagrangians are employed to inspire the forms of these amplitudes. In particular, a version of the formalism employed in Ref. <cit.> is used here. In order to fix the notation, the essential results are restated. A generict-channel exchange may be written ℳ_λ_γλ_1 M λ_2 ^E(s,t)=∑_j 𝒯_λ_γ M^α_1⋯α_j𝒫_α_1⋯α_j;β_1·β_j^Eℬ_λ_1λ_2^β_1⋯β_j, where𝒯,ℬ, and𝒫are the top vertex, bottom vertex, and propagator, respectively. 
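Before the exchange amplitudes are specified below, the structure of a single resonant term can be sketched numerically. The fixed-width Breit–Wigner form and the constant couplings used here are illustrative assumptions of the sketch (the paper leaves A^ℛ(s_12) as an unconstrained lineshape and gives the production amplitudes an explicit (s,t) and helicity dependence); the masses and widths are indicative PDG-like values.

```python
import numpy as np

def breit_wigner(s12, mass, width):
    # Fixed-width Breit-Wigner lineshape A_R(s12); only an illustrative choice.
    return 1.0 / (mass ** 2 - s12 - 1j * mass * width)

# Indicative resonance parameters (GeV); the couplings g are hypothetical placeholders
# standing in for the (s, t)-dependent production amplitudes.
resonances = [
    dict(name="f0(500)",  J=0, mass=0.50,   width=0.45,  g=0.3),
    dict(name="rho(770)", J=1, mass=0.775,  width=0.149, g=1.0),
    dict(name="f2(1270)", J=2, mass=1.2755, width=0.187, g=0.4),
]

def resonant_wave(l, s12):
    """Resonant contribution to the spin-l partial wave at two-pion mass squared s12."""
    return sum(r["g"] * breit_wigner(s12, r["mass"], r["width"])
               for r in resonances if r["J"] == l)

for m in np.linspace(0.4, 1.6, 7):                 # two-pion invariant mass (GeV)
    print(round(m, 2), round(abs(resonant_wave(1, m ** 2)), 2))   # peaks near the rho(770)
```

A nonresonant Deck-type contribution would be added coherently to these partial waves before the angular moments are formed.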
At high energies, and low momentum transfer,tthese amplitudes may be matched to Born-termt-channel diagrams where a hadronEwith spinjis exchanged (not to be confused with the spinJof the produced resonance). This fact is used to fix the form of the vertices𝒯andℬ. Summation of all allowed exchanged spins in <ref> results in a Regge pole amplitude. In this study, the dominance of leading Regge pole amplitudes is assumed, corresponding to the exchange ofρandωtrajectories in production of the two-pion system withJ^P=0^+,2^+, andℙanda_2,f_2trajectories in production ofJ^P=1^-. Explicitly, theγp→ℛ pproduction amplitude is then written ℳ_λ_γλ_1 Mλ_2 ^γ p→ f_0,2 p(s,t) = ℳ_λ_γλ_1 Mλ_2 ^ρ/ω(s,t), ℳ_λ_γλ_1 Mλ_2 ^γ p→ρ p(s,t) = ℳ_λ_γλ_1 Mλ_2 ^ℙ(s,t)+ℳ_λ_γλ_1 Mλ_2 ^a_2/f_2(s,t). A Regge pole dominated amplitude has a generic factorized form, which is analogous to <ref> with 𝒫^E→ R^E(s,t) =1/s_0α^E(t)/α^E(0)1+τ^E e^-iπα^E(t)/sinπα^E(t)(s/s_0)^α^E(t)-1, being the Regge pole propagator. Hereτ^Eis the signature factor,α^E(t)=α_0^E+t α_1^E, is the Regge trajectory ands_0is a scale parameter conventionally chosen to bes_0=1 GeV^2(the factor of1/s_0has been included to preserve the dimensions of the amplitude). Numerical values for these parameters and the Regge trajectories for the Reggeons employed in this work are given in Table <ref>. For the product of the helicity-dependent couplings of the Reggeon to the photon (top vertex) and to the nucleon (bottom vertex) the form 𝒯×ℬ→𝒯^α_λ_γ,Mu_λ_2(p_2)γ_α u_λ_1(p_1), is chosen,[This bottom vertex is sometimes referred to as the `Vector Pomeron Model'. Despite the name, it does not assume that the Pomeron carries vector quantum numbers, but is rather a model for the helicity structure that fulfills SCHC at high energies.] where the top vertex, consistent with gauge invariance is given by 𝒯_λ_γ^α =a^E,ℛ(t) [q^αϵ_λ_γ^σ(q)-q^σϵ_λ_γ^α(q)]k_σ, 𝒯_λ_γ M^α =a_λ_γ M^E,ℛ(t) [q^αϵ_λ_γ^σ(q)-q^σϵ_λ_γ^α(q)]ϵ_Mσ^*(k), 𝒯_λ_γ M^α =a_λ_γ M^E,ℛ(t) {[q^μϵ_λ_γ^ρ(q) - q^ρϵ_λ_γ^μ(q)] (k-q)_ρϵ_Mμ^α*(k) -[q^μϵ_λ_γ^α(q)-q^αϵ_λ_γ^μ(q)](k-q)^νϵ_Mμν^*(k)}, for the production of a two-pion system in theJ=0,1,2partial waves respectively. Hereϵ_λ_γ^α(q)is the polarization vector for the incoming photon,ϵ_Mσ^*(k)is the polarization for the outgoing spin-1 particle with momentumk=k_1+k_2, whileϵ_M σ_1σ_2^*(k)is the polarization vector for the outgoing spin-2 particle. It may be written as ϵ_Mμν(k) = ∑_m_1,m_2 C^JM_1m_1,1m_2ϵ_m_1μ(k)ϵ_m_2ν(k), where
http://arxiv.org/abs/2406.09292v1
20240613162918
Neural Assets: 3D-Aware Multi-Object Scene Synthesis with Image Diffusion Models
[ "Ziyi Wu", "Yulia Rubanova", "Rishabh Kabra", "Drew A. Hudson", "Igor Gilitschenski", "Yusuf Aytar", "Sjoerd van Steenkiste", "Kelsey R. Allen", "Thomas Kipf" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
§ ABSTRACT We address the problem of multi-object 3D pose control in image diffusion models. Instead of conditioning on a sequence of text tokens, we propose to use a set of per-object representations, Neural Assets, to control the 3D pose of individual objects in a scene. Neural Assets are obtained by pooling visual representations of objects from a reference image, such as a frame in a video, and are trained to reconstruct the respective objects in a different image, a later frame in the video. Importantly, we encode object visuals from the reference image while conditioning on object poses from the target frame. This enables learning disentangled appearance and pose features. Combining visual and 3D pose representations in a sequence-of-tokens format allows us to keep the text-to-image architecture of existing models, with Neural Assets in place of text tokens. By fine-tuning a pre-trained text-to-image diffusion model with this information, our approach enables fine-grained 3D pose and placement control of individual objects in a scene. We further demonstrate that Neural Assets can be transferred and recomposed across different scenes. Our model achieves state-of-the-art multi-object editing results on both synthetic 3D scene datasets, as well as two real-world video datasets (Objectron, Waymo Open). Additional details and video results are available at our https://neural-assets-paper.github.io/project page. ^†Work done while interning at Google. Contact: mailto:tkipf@google.com § INTRODUCTION From animation movies to video games, the field of computer graphics has long relied on a traditional workflow for creating and manipulating visual content. This approach involves the creation of 3D assets, which are then placed in a scene and animated to achieve the desired visual effects. With the recent advance of deep generative models <cit.>, a new paradigm is emerging. Diffusion models have achieved promising results in content creation <cit.> by training on web-scale text-image data <cit.>. Users can now expect realistic image generation, depicting almost everything describable in text. 
However, text alone is often insufficient for precise control over the output image. To address this challenge, an emerging body of work has investigated alternative ways to control the image generation process. One line of work studies different forms of conditioning inputs, such as depth maps, surface normals, and semantic layouts <cit.>. Another direction is personalized image generation <cit.>, which aims to synthesize a new image while preserving particular aspects of a reference image (placing an object of interest on a desired background). However, these approaches are still fundamentally limited in their 3D understanding of objects. As a result, they cannot achieve intuitive object control in the 3D space, rotation. While some recent works introduce 3D geometry to the generation process <cit.>, they cannot handle multi-object real-world scenes as it is hard to obtain scalable training data (paired images and 3D annotations). We address these limitations by taking inspiration from cognitive science to propose a scalable solution to 3D-aware multi-object control. When humans move through the world, their motor systems keep track of their movements through an efference copy and proprioceptive feedback <cit.>. This allows the human perceptual system to track objects accurately across time even when the object’s relative pose to the observer changes <cit.>. We use this observation to propose the use of videos of multiple objects as a scalable source of training data for 3D multi-object control. Specifically, for any two frames sampled from a video, the naturally occurring changes in the 3D pose (3D bounding boxes) of objects can be treated as training labels for multi-object editing. With this source of training data, we propose Neural Assets – per object latent representations with consistent 3D appearance but variable 3D pose. Neural Assets are trained by extracting their visual appearances from one frame in a video and reconstructing their appearances in a different frame in the video conditioned on the corresponding 3D bounding boxes. This supports learning consistent 3D appearance disentangled from 3D pose. We can then tokenize any number of Neural Assets and feed this sequence to a fine-tuned conditional image generator for precise, multi-object, 3D control. Our main contributions are threefold: (i) A Neural Asset formulation that represent objects with disentangled appearance and pose features. By training on paired video frames, it enables fine-grained 3D control of individual objects. (ii) Our framework is applicable to both synthetic and real-world scenes, achieving state-of-the-art results on 3D-aware object editing. (iii) We extend Neural Assets to further support compositional scene generation, such as swapping the background of two scenes and transferring objects across scenes. We show the versatile control ability of our model in Fig. <ref>. § RELATED WORK 2D spatial control in diffusion models (DMs). With the rapid growth of diffusion-based visual generation <cit.>, there have been many works aiming to inject spatial control to pre-trained DMs via 2D bounding boxes or segmentation masks. One line of research achieves this by manipulating text prompts <cit.>, intermediate attention maps <cit.> or noisy latents <cit.> in the diffusion process, without the need to change model weights. Closer to ours are methods that fine-tune pre-trained DMs to support additional spatial conditioning inputs <cit.>. 
GLIGEN <cit.> introduces new attention layers to condition on bounding boxes. InstanceDiffusion <cit.> further supports object masks, points, and scribbles with a unified feature fusion block. To incorporate dense control signals such as depth maps and surface normals, ControlNet <cit.> adds zero-initialized convolution layers around the original network blocks. Recently, Boximator <cit.> demonstrates that such 2D control can be extended to video models with a similar technique. In our work, we build upon pre-trained DMs and leverage 3D bounding boxes as spatial conditioning, which enables 3D-aware control such as object rotation and occlusion handling. 3D-aware image generation. Earlier works leverage differentiable rendering to learn 3D Generative Adversarial Networks (GANs) <cit.> from monocular images, with explicit 3D representations such as radiance fields <cit.> and meshes <cit.>. Inspired by the great success of DMs in image generation, several works try to lift the 2D knowledge to 3D <cit.>. The pioneering work 3DiM <cit.> and follow-up work Zero-1-to-3 <cit.> directly train diffusion models on multi-view renderings of 3D assets. However, this line of research only considers single objects without background, which cannot handle in-the-wild data with complex backgrounds. Closest to ours are methods that process multi-object real-world scenes <cit.>. OBJect-3DIT <cit.> studies language-guided 3D-aware object editing by training on paired synthetic data, limiting its performance on real-world images <cit.>. LooseControl <cit.> converts 3D bounding boxes to depth maps to guide the object pose. Yet, it cannot be directly applied to edit existing images. In contrast, our Neural Asset representation captures both object appearance and 3D pose. It can be easily trained on real-world videos to achieve multi-object 3D edits. Personalized image generation. Since the seminal works DreamBooth <cit.> and Textual Inversion <cit.> which perform personalized generation via test-time fine-tuning, huge efforts have been made to achieve this in a zero-shot manner <cit.>. Most of them are only able to synthesize one subject, and cannot control the spatial location of the generated instance. A notable exception is Subject-Diffusion <cit.>, which leverages frozen CLIP embeddings for object appearance and 2D bounding boxes for object position. Still, it cannot explicitly control the 3D pose of objects. Object-centric representation learning. Our Neural Asset representation is also related to recent object-centric slot representations <cit.> that decompose scenes into a set of object entities. Object slots provide a useful interface for editing such as object attributes <cit.>, motions <cit.>, 3D poses <cit.>, and global camera poses <cit.>. Nevertheless, these models show significantly degraded results on real-world data. Neural Assets also consist of disentangled appearance and pose features of objects. Different from existing slot-based models, we fine-tune self-supervised visual encoders and connect them with large-scale pre-trained DMs, which scales up to complex real-world data. § METHOD: NEURAL ASSETS Inspired by 3D assets in computer graphics software, we propose Neural Assets as learnable object-centric representations. A Neural Asset comprises an appearance and an object pose representation, which is trained to reconstruct the object via conditioning a diffusion model (Sec. <ref>). 
Trained on paired images, our method learns disentangled representations, enabling 3D-aware object editing and compositional generation at inference time (Sec. <ref>). Our framework is summarized in Fig. <ref>. §.§ Background: 3D Assets in Computer Graphics 3D object models, or 3D assets, are basic components of any 3D scene in computer graphics software, such as Blender <cit.>. A typical workflow includes selecting N 3D assets {â_1, ..., â_N} from an asset library and placing them into a scene. Formally, one can define a 3D asset as a tuple â_i ≜ (𝒜_i,𝒫_i), where 𝒜_i is a set of descriptors defining the asset's appearance (canonical 3D shape and surface textures) and 𝒫_i describes its pose (rigid transformation and scaling from its canonical pose). §.§ Neural Assets Inspired by 3D assets in computer graphics, our goal is to enable such capabilities (3D control and compositional generation) in recent generative models. To achieve this, we define a Neural Asset as a tuple a_i ≜ (A_i, P_i), where A_i ∈ℝ^(K × D) is a flattened sequence of K D-dimensional vectors describing the appearance of an asset, and P_i ∈ℝ^D' is a D'-dimensional embedding of the asset's pose in a scene. In other words, a Neural Asset is fully described by learnable embedding vectors, factorized into appearance and pose. This factorization enables independent control over appearance and pose of an asset, similar to how 3D object models can be controlled in traditional computer graphics software. Importantly, besides the 3D pose of assets, our approach does not require any explicit mapping of objects into 3D, such as depth maps or the NeRF representation <cit.>. §.§.§ Asset Encoding In the following, we describe how both the appearance A_i and the pose P_i of a Neural Asset a_i are obtained from visual observations (such as an image or a frame in a video). Importantly, the appearance and pose representations are not necessarily encoded from the same observation, they can be encoded from two separate frames sampled from a video. We find this strategy critical to learn disentangled and controllable representations, which we will discuss in detail in Sec. <ref>. Appearance encoding. At a high level, we wish to obtain a set of N Neural Asset appearance tokens A_i from a visual observation x_src, where x_src can be an image or a frame in a video. While one could approach this problem in a fully-unsupervised fashion, using a method such as Slot Attention <cit.> to decompose an image into a set of object representations, we choose to use readily-available annotations to allow fine-grained specification of objects of interest. In particular, we assume that a 2D bounding box b_i is provided for each Neural Asset a_i, specifying which object should be extracted from x_src. Therefore, we obtain the appearance representation A_i as follows: A_i = Flatten(RoIAlign(H_i, b_i)) , H_i = Enc(x_src) , where H_i is the output feature map of a visual encoder Enc. RoIAlign <cit.> extracts a fixed size feature map using the provided bounding box b_i which is flattened to form the appearance token A_i. This factorization allows us to extract N object appearances from an image with just one encoder forward pass. In contrast, previous methods <cit.> crop each object out to extract features separately, and thus requires N encoder passes. This becomes unaffordable if we jointly fine-tune the visual encoder, which is key to learning generalizable features as we will show in the ablation study. Pose encoding. 
The pose token P_i of a Neural Asset a_i is the primary interface for controlling the presence and 3D pose of an object in the rendered scene. In this work, we assume that the object pose is provided in terms of a 3D bounding box, which fully specifies its location, orientation, and size in the scene. Formally, we take four corners spanning the 3D bounding box[Only three corners are needed to fully define a 3D bounding box, but we found a 4-corner representation beneficial to work with. Previous research <cit.> also shows that over-parametrization can benefit model learning.] and project them to the image plane to get {c_i^j=(h_i^j, w_i^j, d_i^j)}_j=1^4, with the projected 2D coordinate (h_i^j, w_i^j), and the 3D depth d_i,j. We obtain the pose representation P_i for a Neural Asset as follows: P_i = MLP(C_i) , C_i = Concat[c_i^1, c_i^2, c_i^3, c_i^4] , where we first concatenate the four corners c_i^j to form C_i ∈ℝ^12, and then project it to P_i ∈ℝ^D' via an MLP. We tried the Fourier coordinate encoding in prior works <cit.> but did not find it helpful. There are alternative ways to represent 3D bounding boxes (concatenation of center, size, and rotation commonly used in 3D object detection <cit.>), which we compare in Appendix <ref>. In this work, we assume the availability of training data with 3D annotations – obtaining high-quality 3D object boxes for videos at scale is still an open research problem, but may soon be within reach given recent progress in monocular 3D detection <cit.>, depth estimation <cit.>, and pose tracking <cit.>. Serialization of multiple Neural Assets. We encode a set of N Neural Assets into a sequence of tokens that can be appended to or used in place of text embeddings for conditioning a generative model. In particular, we first concatenate the appearance token A_i and the pose token P_i channel-wise, and then linearly project it to obtain a Neural Asset representation a_i as follows: a_i = Linear(ã_i) , ã_i = Concat[A_i, P_i] ∈ℝ^K × D + D' . Channel-wise concatenation uniquely binds one pose token with one appearance representation in the presence of multiple Neural Assets. An alternative solution is to learn such association with positional encoding. Yet, it breaks the permutation-invariance of the generator against the order of input objects and leads to poor results in our preliminary experiments. Finally, we simply concatenate multiple Neural Assets along the token axis to arrive at our token sequence, which can be used as a drop-in replacement for a sequence of text tokens in a text-to-image generation model. Background modeling. Similar to prior works <cit.>, we found it helpful to encode the scene background separately, which enables independent control thereof (swapping out the scene, or controlling global properties such as lighting). We choose the following heuristic strategy to encode the background: to avoid leakage of foreground object information, we mask all pixels within asset bounding boxes b_i. We then pass this masked image through the image encoder Enc (shared weights with the foreground asset encoder) and apply a global RoIAlign, using the entire image as region of interest, to obtain a background appearance token A_bg∈ℝ^(K × D). Similar to a Neural Asset, we also attach a pose token P_bg to A_bg. This can either be a timestep embedding of the video frame (relative to the source frame) or a relative camera pose embedding, if available. 
In the serialized representations, the background token is treated the same as Neural Assets, we concatenate A_bg and P_bg channel-wise and linearly project it. Finally, the foreground assets a_i and the background token are concatenated along the token dimension and used to condition the generator. §.§.§ Generative Decoder To generate images from Neural Assets, we make minimal assumptions about the architecture or training setup of the generative image model to ensure compatibility with future large-scale pre-trained image generators. In particular, we assume that the generative image model accepts a sequence of tokens as conditioning signal: for most base models this would be a sequence of tokens derived from text prompts, which we can easily replace with a sequence of Neural Asset tokens. As a representative for this class of models, we adopt Stable Diffusion v2.1 <cit.> for the generative decoder. See Appendix <ref> for details on this model. Starting from the pre-trained text-to-image checkpoint, we fine-tune the entire model end-to-end to accept Neural Assets tokens instead of text tokens as conditioning signal. The training and inference setup is explained in the following section. §.§ Learning and Inference Learning from frame pairs. As outlined in the introduction, we require a scalable data source of object-level "edits" in 3D space to effectively learn multi-object 3D control capabilities. Video data offers a natural solution to this problem: as the camera and the content of the scene moves or changes over time, objects are observed from various view points and thus in various poses and lighting conditions. We exploit this signal by randomly sampling pairs of frames from video clips, where we take one frame as the "source" image x_src and the other frame as the "target" image x_tgt. As described earlier, we obtain the appearance token A_i of Neural Assets from the source frame x_src by extracting object features using 2D box annotations. Next, we obtain the pose token P_i for each extracted asset from the target frame x_tgt, for which we need to identify the correspondences between objects in both frames. In practice, such correspondences can be obtained, for example, by applying an object tracking model on the underlying video. Finally, with the associated appearance and pose representations, we condition the image generator on them and train it to reconstruct the target frame x_tgt, using the denoising loss of Stable Diffusion v2.1 in our case. Such a paired frame training strategy forces the model to learn an appearance token that is invariant to object pose and leverage the pose token to synthesize the new object, avoiding the trivial solution of simple pixel-copying. Test-time controllability. The learned disentangled representations naturally enable multi-object scene-level editing as we will show in Sec. <ref>. Since we encode 3D bounding boxes to pose tokens P_i, we can move, rotate, and rescale objects by changing the box coordinates. We can also compose Neural Assets a_i across scenes to generate new scenes. In addition, our background modeling design supports swapping the environment map of the scene. Importantly, as we will see in the experiments, our image generator learns to naturally blend the objects into their new environment at new positions, with realistic lighting effects such as rendering and adapting shadows correctly. 
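As a compact summary of the asset encoding and serialization described in this section, the following PyTorch sketch assembles Neural Asset tokens from a source-frame feature map and target-frame 3D box corners. It is a simplified sketch under stated assumptions rather than the exact implementation: the backbone interface, feature resolution, hidden sizes, and all module names are placeholders (the full model fine-tunes a DINO ViT-B/8 encoder and a Stable Diffusion 2.1 generator).

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class NeuralAssetEncoder(nn.Module):
    """Sketch of the Neural Asset tokenizer: appearance via RoIAlign on a source-frame
    feature map, pose via an MLP over the 12 projected 3D-box corner coordinates."""

    def __init__(self, backbone, feat_dim=768, roi_size=2, pose_dim=128, token_dim=1024):
        super().__init__()
        self.backbone = backbone                      # assumed: image -> (B, feat_dim, H', W')
        self.pose_mlp = nn.Sequential(
            nn.Linear(12, pose_dim), nn.GELU(), nn.Linear(pose_dim, pose_dim))
        self.to_token = nn.Linear(roi_size * roi_size * feat_dim + pose_dim, token_dim)
        self.roi_size = roi_size

    def forward(self, src_image, boxes_2d, corners_3d, spatial_scale):
        # boxes_2d: list (length B) of (N, 4) tensors in source-frame pixel coordinates
        # corners_3d: (B, N, 4, 3) projected target-frame corners (h, w, depth)
        feats = self.backbone(src_image)                               # (B, C, H', W')
        app = roi_align(feats, boxes_2d, output_size=self.roi_size,
                        spatial_scale=spatial_scale, aligned=True)     # (B*N, C, r, r)
        app = app.flatten(start_dim=1)                                 # A_i: (B*N, K*C)
        pose = self.pose_mlp(corners_3d.flatten(2).flatten(0, 1))      # P_i: (B*N, pose_dim)
        tokens = self.to_token(torch.cat([app, pose], dim=-1))         # a_i
        return tokens.view(src_image.shape[0], -1, tokens.shape[-1])   # (B, N, token_dim)
```

During training, src_image and the 2D boxes come from one video frame while corners_3d comes from the other frame of the pair; the returned (B, N, token_dim) sequence, with a background token appended, replaces the text-token sequence when computing the denoising loss on the target frame.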
§ EXPERIMENTS In this section, we conduct extensive experiments to answer the following questions: (i) Can Neural Assets enable accurate 3D object editing? (ii) What practical applications does our method support on real-world scenes? (iii) What is the impact of each design choice in our framework? §.§ Experimental Setup Datasets. We select four datasets with object or camera motion, which span different levels of complexity. OBJect <cit.> is introduced in 3DIT <cit.>, which is one of our baselines. It contains 400k synthetic scenes rendered by Blender <cit.> with a static camera. Up to four Objaverse <cit.> assets are placed on a textured ground and only one object is randomly moved on the ground. For a fair comparison with 3DIT, we use 2D bounding boxes plus rotation angles as object poses, and follow them to base our model on Stable Diffusion v1.5 <cit.>. MOVi-E <cit.> consists of Blender simulated videos with up to 23 objects. It is more challenging than OBJect as it has linear camera motion and there can be multiple objects moving simultaneously. Objectron <cit.> is a big step up in complexity as it captures real-world objects with complex backgrounds. 15k object-centric videos covering objects from nine categories are recorded with 360^∘ camera movement. Waymo Open <cit.> is a real-world self-driving dataset captured by car mounted cameras. We follow prior work <cit.> to use only the front view and filter out cars that are too small. See Appendix <ref> for more details on datasets. Baselines. We compare to methods that can perform 3D-aware editing on existing images and have released their code. 3DIT <cit.> fine-tunes Zero-1-to-3 <cit.> on the OBJect dataset to support translation and rotation of objects. However, it cannot render big viewpoint changes as it does not encode camera poses. Following <cit.>, we create another baseline (dubbed Chained) by using SAM <cit.> to segment the object of interest, removing it using Stable Diffusion inpainting model <cit.>, running Zero-1-to-3 to rotate and scale the object, and stitching it to the target position. Since none of these baselines can control multiple objects simultaneously, we apply them to edit all objects sequentially. Evaluation settings. We report common metrics to measure the quality of the edited image – PSNR, SSIM <cit.>, LPIPS <cit.>, and FID <cit.>. Following prior works <cit.>, we also compute object-level metrics on cropped out image patches of edited objects. To evaluate the fidelity of edited objects, we take the DINO <cit.> feature similarity metric proposed in <cit.>. On video datasets, we randomly sample source and target images in each testing video and fix them across runs for consistent results. Implementation Details. For all experiments, we resize images to 256 × 256. DINO self-supervised pre-trained ViT-B/8 <cit.> is adopted as the visual encoder Enc, and jointly fine-tuned with the generator. All our models are trained using the Adam optimizer <cit.> with a batch size of 1536 on 256 TPUv4 chips. For inference, we generate images by running the DDIM <cit.> sampler for 50 steps. For more training and inference details, please refer to Appendix <ref>. §.§ Main Results Single-object editing. We first compare the ability to control the 3D pose of a single object on the OBJect dataset. Fig. <ref> presents the results on the unseen object subset. We do not show FID here as it mainly measures the visual quality of generated examples, which does not reflect the editing accuracy. 
For results on the seen object subset and FID, please refer to Appendix <ref>, where we observe similar trends. Compared to baselines, our model does not condition on text (the category name of the object to edit) as in 3DIT and is not pre-trained on multi-view rendering of 3D assets as in Zero-1-to-3. Still, we achieve state-of-the-art performance on all three tasks. This is because our Neural Assets representation learns disentangled appearance and pose features, which is able to preserve object identity while changing its placement smoothly. Also, the fine-tuned DINO encoder generalizes better to unseen objects compared to the frozen CLIP visual encoder used by baselines. Multi-object editing. Fig. <ref> shows the results on MOVi-E, Objectron, and Waymo Open, where multiple objects are manipulated in each sample. Similar to the single-object case, we compute metrics inside the object bounding boxes, and leave the image-level results to Appendix <ref>. Our model outperforms baselines by a sizeable margin across datasets. Fig. <ref> presents the qualitative results. When there are multiple objects of the same class in the scene (boxes in the MOVi-E example and cars on Waymo Open), 3DIT is unable to edit the correct instance. In addition, it generalizes poorly to real-world scenes. Thanks to the object cropping step, Chained baseline can identify the correct object of interest. However, the edited object is simply pasted to the target location, leading to unrealistic appearance due to missing lighting effects such as shadows. In contrast, our model is able to control all objects precisely, preserve their fidelity, and blend them into the background naturally. Since we encode the camera pose, we can also model global viewpoint change as shown in the third row. See Appendix <ref> for additional qualitative results. §.§ Controllable Scene Generation In this section, we show versatile control of scene objects on Waymo Open. For results on Objectron, please refer to Appendix <ref>. As shown in Fig. <ref>, we can translate and rotate cars in driving scenes. The model understands the 3D world as objects zoom in and out when moving, and show consistent novel views when rotating. Fig. <ref> presents our ability of compositional generation, where objects are removed, segmented out, and transferred across scenes. Notice how the model handles occlusion and inpaints the scene properly. Finally, Fig. <ref> demonstrates background swapping between scenes. The generator is able to harmonize objects with the new environment. For example, the car lights are turned on and rendered with specular highlight when using a background image from a night scene. §.§ Ablation Study We study the effect of each component in the model. All ablations are run on Objectron since it is a real-world dataset with complex background, and has higher object diversity than Waymo Open. Visual encoder. Previous image-conditioned diffusion models <cit.> usually use the frozen image encoder of CLIP <cit.> to extract visual features. Instead, as shown in Fig. <ref>, we found that both MAE <cit.> and DINO <cit.> pre-trained ViTs give better results. This is because CLIP's image encoder only captures high-level semantics of images, which suffices in single-object tasks, but fails in our multi-object setting. In contrast, MAE and DINO pre-training enable the model to extract more fine-grained features. Besides, DINO outperforms MAE as its features contain richer 3D information, which aligns with recent research <cit.>. 
Finally, jointly fine-tuning the image encoder learns more generalizable appearance tokens in Neural Assets, leading to the best performance. Background modeling. We compare our full model with two variants: (i) not conditioning on any background tokens (dubbed No-BG), and (ii) conditioning on background appearance tokens but not using relative camera pose as pose tokens (dubbed No-Pose). As shown in Fig. <ref>, our background modeling strategy performs the best in image-level metrics as backgrounds usually occupy a large part of real-world images. Interestingly, our method also achieves significantly better object-level metrics. This is because given background appearance and pose, the model does not need to infer them from object tokens, leading to more disentangled Neural Assets representations. Training strategy. As described in Sec. <ref>, we train on videos and extract appearance and pose tokens from different frames. We compare such design with training on a single frame in Fig. <ref>. Our paired frame training strategy clearly outperforms single frame training. Since the appearance token is extracted by a ViT with positional encoding, it already contains object position information, which acts as a shortcut for image reconstruction. Therefore, the model ignores the input object pose token, resulting in poor controllability. One way to alleviate this is removing the positional encoding in the image encoder (dubbed NO-PE), which still underperforms paired frame training. This is because to reconstruct objects with visual features extracted from a different frame, the model is forced to infer their underlying 3D structure instead of simply copying pixels. In addition, the generator needs to render realistic lighting effects such as shadows under the new scene configuration. § CONCLUSION In this paper, we present Neural Assets, vector-based representations of objects and scene elements with disentangled appearance and pose features. By connecting with pre-trained image generators, we enable controllable 3D scene generation. Our method is capable of controlling multiple objects in the 3D space as well as transferring assets across scenes, both on synthetic and real-world datasets. We view our work as an important step towards general-purpose neural-based simulators. Limitations. An ideal Neural Asset should enable control over all potential configurations of an object such as deformation (a walking cat), rigid articulation (opening of a scissor), and structural decomposition (tomatoes being cut). In this work, we first tackle the foremost important aspect, controlling 3D rigid object pose and background composition which applies to almost all the objects. However, it can be adapted when suitable datasets are developed that capture other changes in objects. Another limitation is that our approach is currently limited to existing datasets that have 3D bounding box annotations. Yet, with recent advances in vision foundation models <cit.>, we may soon have scalable 3D annotation pipelines similar to their 2D counterparts. However, this is out of scope for this work. § ACKNOWLEDGEMENTS We would like to thank Etienne Pot, Klaus Greff, Shlomi Fruchter, and Amir Hertz for their advise regarding infrastructure. We would further like to thank Mehdi S. M. Sajjadi, João Carreira, Sean Kirmani, Yi Yang, Daniel Zoran, David Fleet, Kevin Murphy, and Mike Mozer for helpful discussions. 
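As a rough, executable summary of the paired-frame strategy ablated above, the sketch below shows only the sampling logic: appearance features are read from a randomly chosen source frame, pose tokens from a different target frame of the same clip, and the (omitted) diffusion loss would then be computed on the target frame. The helper functions and the toy "video" structure are hypothetical stand-ins for the DINO encoder, the pose MLP, and real tracked annotations.

import numpy as np

rng = np.random.default_rng(0)

def sample_frame_pair(video):
    # Pick two distinct random frames of a clip: one source, one target.
    i, j = rng.choice(len(video), size=2, replace=False)
    return video[i], video[j]

def extract_appearance(frame, obj_ids):
    # Stand-in for the DINO encoder + RoIAlign: one feature vector per tracked object.
    return np.stack([frame["feat"][k] for k in obj_ids])

def embed_pose(boxes_3d):
    # Stand-in for the two-layer MLP applied to the 3D box coordinates.
    return np.asarray(boxes_3d, dtype=float)

# Toy "video": each frame stores per-object features and 3D boxes; shared indices play
# the role of object correspondences obtained from a tracker.
video = [{"feat": rng.normal(size=(2, 8)), "boxes_3d": rng.normal(size=(2, 12))}
         for _ in range(10)]

src, tgt = sample_frame_pair(video)
A = extract_appearance(src, obj_ids=[0, 1])   # appearance tokens come from the SOURCE frame
P = embed_pose(tgt["boxes_3d"])               # pose tokens come from the TARGET frame
# The denoising loss (omitted) is then evaluated on the target frame, conditioned on [A, P],
# which is what prevents the trivial pixel-copying solution.
print(A.shape, P.shape)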
plainnat § DETAILED EXPERIMENTAL SETUP In this section, we provide full details on the datasets, baselines, evaluation settings, and the training and inference implementation of our model. §.§ Datasets OBJect <cit.> consists of Blender <cit.> rendered scenes where multiple (up to four) objects are placed on a flat textured ground. The objects come from a 59k subset of Objaverse dataset <cit.>. A total of 18 background maps are used to provide environmental lighting. Four types of object-level editing are provided – translation, rotation, removal, and insertion, each with 100k simulated data. Notably, only one object is edited in each data, and the translation and rotation is always on the ground (perpendicular to the gravity vector). For a fair comparison with the 3DIT baseline <cit.>, we use the 2D rotated bounding box to represent object pose, which is composed of two corners of the 2D bounding box and the rotation angle over the gravity axis. This dataset is under the Open Data Commons Attribution License (ODC-By)[<https://huggingface.co/datasets/allenai/object-edit/blob/main/README.md>]. MOVi-E <cit.> contains 10k videos simulated using Kubric <cit.>. Each scene contains 11 to 23 real-world objects from the Google Scanned Objects (GSO) repository <cit.>. At the start of each video, several objects are thrown to the ground to collide with other objects. Similar to OBJect, environmental lighting is provided by a randomly sampled environment map image. The camera follows a small linear motion. The full data generation pipeline is under the Apache 2.0 license[<https://github.com/google-research/kubric/blob/main/LICENSE>]. Objectron <cit.> contains 15k object-centric video clips of common daily objects covering nine categories. Each video comes with object pose tracking throughout the video, and we process it to obtain 3D bounding boxes. Since this dataset does not provide 2D bounding box labels, we project the eight corners of 3D boxes to the image, and take the tight bounding box of projected points as 2D boxes. Objectron is licensed under the Computational Use of Data Agreement 1.0 (C-UDA-1.0)[<https://github.com/google-research-datasets/Objectron#license>]. Waymo Open <cit.>. The Waymo Open Dataset consists of 1k videos of self-driving scenes recorded by car mounted cameras. Following prior works <cit.>, we take the front view camera and bounding box annotations of cars. Notably, the 3D bounding boxes only have a heading angle (rotation along the yaw-axis) annotation, and thus we treat the other two rotation angles as 0. Besides, the provided 2D boxes and 3D boxes are not aligned, preventing us from doing paired frame training. We instead project 3D boxes to get associated 2D boxes similar to on Objectron. Waymo Open is licensed under the Waymo Dataset License Agreement for Non-Commercial Use (August 2019)[<https://waymo.com/open/terms>]. Data Pre-processing. For all datasets, we resize the images to 256 × 256 regardless of the original aspect ratio. On Objectron, we discard all videos from the bike class as it contains many blurry frames and inaccurate 3D bounding box annotations. On Waymo, we remove all cars whose 2D bounding box is smaller than 1% of the image area. We do not apply data augmentation except on Waymo Open, where we apply random horizontal flip and random resize crop following <cit.>. §.§ Baselines 3DIT <cit.> fine-tunes Zero-1-to-3 <cit.> to support scene-level 3D object edits. 
We generate the editing instruction from the target object pose, such as the translation coordinate and the rotation angle. However, this method does not support large viewpoint changes as it does not encode camera poses. We take their official code and pre-trained weights of the Multitask variant. 3DIT is under the CreativeML Open RAIL-M license[<https://github.com/allenai/object-edit/blob/main/LICENSE>]. Chained. This baseline is inspired by <cit.>, where we chain multiple models together to achieve 3D-aware object editing. An editing step usually contains three steps: (i) crop out the object of interest and inpaint its region with backgrounds, (ii) synthesize the object under the new pose, and (iii) place the object to the new location. For (i), we apply SAM <cit.> to segment the object using 2D bounding box prompt, and inpaint the original object region with Stable Diffusion v2 inpainting model <cit.>. For (ii), we run Zero-1-to-3 <cit.> to re-pose the object according to the target 3D bounding box. For (iii), following <cit.>, we first get the alpha mask of the re-posed object using an online tool[<https://github.com/OPHoperHPO/image-background-remove-tool>], and insert it to the new position via alpha blending. It is worth noting that Zero-1-to-3 does not support camera rotation over the roll axis. For all models, we take their official code and pre-trained weights. SAM is under the Apache 2.0 license[<https://github.com/facebookresearch/segment-anything#license>]. Stable Diffusion v2 inpainting model is under the CreativeML Open RAIL++-M License[<https://huggingface.co/stabilityai/stable-diffusion-2-inpainting>]. Zero-1-to-3 is under the MIT license[<https://github.com/cvlab-columbia/zero123/blob/main/LICENSE>]. The online alpha mask extraction tool is under the Apache 2.0 license[<https://github.com/OPHoperHPO/image-background-remove-tool/blob/master/LICENSE>]. §.§ Evaluation Settings We report PSNR, SSIM <cit.>, LPIPS <cit.>, and FID <cit.> to measure the accuracy of the edited image. We compute metrics both on the entire image, and within the 2D bounding box of edited objects. For box-level metrics, we follow <cit.> to crop out each object and directly run the metric without resizing. We also evaluate the identity preservation of objects using the DINO <cit.> feature similarity proposed in <cit.>, which runs a DINO self-supervised pre-trained ViT on cropped object patches to extract features and compute the cosine similarity between predicted and ground-truth image. §.§ Our Implementation Details Model architecture. We take Stable Diffusion (SD) v2.1 <cit.> as our image generator except for experiments on the OBJect dataset, where we use SD v1.5 for a fair comparison with baselines. Similar to prior works <cit.>, we also observe clearly better performance using SD v2.1 compared to v1.5. However, we note that our Neural Assets framework generalizes to any image generator that conditions on a sequence of tokens. We implement the visual encoder Enc with a DINO self-supervised pre-trained ViT-B/8 <cit.>, which outputs a feature map of shape 28 × 28 given a 256 × 256 image. For each object, we apply RoIAlign <cit.> to extract a 2 × 2 small feature map and flatten it, the appearance token A_i has a sequence length of K=4. Since the conditioning token dimension of pre-trained SD v2.1 is 1024, we use a two-layer MLP to transform the 3D bounding boxes input to D'=1024, and linearly project the concatenated appearance and pose token back to 1024. 
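As a rough illustration of how the K = 4 appearance tokens per object could be pooled from the 28 × 28 DINO feature map described above, the following sketch uses a simple 2 × 2 average pooling over the box region as a stand-in for RoIAlign; the feature map and box coordinates are made up.

import numpy as np

rng = np.random.default_rng(0)
feat = rng.normal(size=(28, 28, 768))       # stand-in for the DINO ViT-B/8 feature map of a 256 x 256 image

def pool_box(fmap, box, out=2):
    # Crude stand-in for RoIAlign: average-pool the box region into an out x out grid of cells.
    x0, y0, x1, y1 = box                    # box in feature-map coordinates
    ys = np.linspace(y0, y1, out + 1).astype(int)
    xs = np.linspace(x0, x1, out + 1).astype(int)
    cells = [fmap[ys[i]:max(ys[i + 1], ys[i] + 1), xs[j]:max(xs[j + 1], xs[j] + 1)].mean(axis=(0, 1))
             for i in range(out) for j in range(out)]
    return np.stack(cells)                  # (out*out, 768): K = 4 appearance tokens for this object

tokens = pool_box(feat, box=(5, 8, 17, 20))
print(tokens.shape)                         # (4, 768); these are flattened into the token sequence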
For background modeling, we mask all pixels within object boxes by setting them to a fixed value of 0.5, and extract features with the same DINO encoder. Instead, the pose token is obtained by applying a different two-layer MLP on the relative camera pose between the source and the target image. Training. We implement the entire Neural Assets framework in JAX <cit.> using the Flax <cit.> neural network library. We train all model components jointly using the Adam optimizer <cit.> with a batch size of 1536 on 256 TPUv5 chips (16GB memory each). We use a peak learning rate of 5×10^-5 for the image generator and the visual encoder, and a larger learning rate of 1×10^-3 for remaining layers (MLPs and linear projection layers). Both learning rates are linearly warmed up in the first 1,000 steps and stay constant. A gradient clipping of 1.0 is applied to stabilize training. We found that the model overfits more severely on real-world data with complex backgrounds compared to synthetic datasets. Therefore, we train the model for 200k steps on OBJect and MOVi-E which takes 24 hours, and 50k steps on Objectron and Waymo Open which takes 6 hours. In order to apply classifier-free guidance (CFG) <cit.>, we randomly drop the appearance and pose token (i.e., setting them as zeros) with a probability of 10%. CFG improves the performance and also alleviates overfitting in training. Inference. We run the DDIM sampler <cit.> for 50 steps to generate images. We found the model works well with CFG scale between 1.5 and 4, and thus choose to use 2.0 in all the experiments. § ADDITIONAL EXPERIMENTAL RESULTS §.§ Full Benchmark Results We present full quantitative results on OBJect in Tab. <ref>, and on MOVi-E, Objectron, and Waymo Open in Tab. <ref>. Compared to the main paper, we report additional FID metrics and results on the unseen object subset for OBJect, while for the other three datasets, we report additional FID and DINO feature similarity metrics, plus results computed over the entire image (Image-Level). Overall, we observe similar trends as in the main paper, where our Neural Assets model significantly outperforms baselines across all datasets. In Fig. <ref>, we show additional qualitative comparisons. §.§ Full Ablation Results We present all quantitative results of our ablation studies on Objectron (Sec. <ref>) in Tab. <ref>, Tab. <ref>, and Tab. <ref>. We observe similar trends on all metrics at both image- and object-level. §.§ Controllable Scene Generation In Fig. <ref> and Fig. <ref>, we show controllable scene generation results on Objectron. Objectron videos only have global camera movement, while the objects are static. Still, our Neural Assets model learns disentangled foreground and background representations. As can be seen from the results, we can rotate the foreground objects while keeping the background fixed, or swap background between scenes. Importantly, our model inpaints the masked background regions not occupied by the novel object, and renders realistic shadows around the object, which is far beyond simple pixel copying. §.§ Ablation on 3D Pose Representations In Fig. <ref>, we visualize the object pose representation we use. Given a 3D bounding box of an object, we project its four corners to the image space, and concatenate their 2D coordinates and depth values to obtain a 12-D pose vector. The 2D projected points resemble a local coordinate frame for the object, specifying its position, rotation, and scale. 
On the other hand, the depth is useful for determining the occlusion of objects. There are alternative ways to represent the object pose, such as the coordinates of the 3D box center C together with its size and rotation, which is commonly used in 3D object detection <cit.>. These representations achieve similar results on MOVi-E and Objectron. However, their learned rotation controllability is significantly worse than our representation on Waymo. This is because most of the cars on Waymo are not rotated (i.e., turning left or right), leaving very little training data on object rotation. If we directly input the rotation angle to the model, it tends to ignore it. In contrast, due to perspective projection, the projected local coordinate frame of unrotated cars still looks "rotated" when they are not strictly in front of the ego vehicle. This provides much more training signal to learn the rotation of objects. § BACKGROUND ON STABLE DIFFUSION Diffusion models <cit.> are a class of generative models that learn to generate samples by iteratively denoising from a standard Gaussian distribution. They consist of a denoiser ϵ_θ, usually implemented as a U-Net <cit.>, which predicts the noise ϵ added to the data x. Instead of denoising raw pixels, Stable Diffusion introduces a VAE <cit.> tokenizer to map images to a low-dimensional latent code z and applies the denoiser on it. In addition, the denoiser is conditioned on text and thus supports text-to-image generation. In this work, we simply replace the text embeddings with Neural Assets a_i and fine-tune the model to support appearance and pose control of 3D objects. § BROADER IMPACTS Controllable visual generation is an important task in computer vision. Neural Assets equip generative models with an intuitive interface to control their behaviors, which enables more interpretable AI algorithms and may potentially benefit other fields such as computer graphics and robotics. We believe this work will benefit the whole research community and society. Potential negative societal impacts. Since we fine-tune large-scale pre-trained generative models in our pipeline, we inherit limitations of these base models, such as dataset selection bias. Such bias might be problematic when human subjects are involved, though our current approach is only capable of rigid object control and does not consider humans as an "asset" yet. Further study on how such bias affects model performance is required for mitigating negative societal impacts that could arise from this work. Safeguards. For a potential future code release, we will be careful to apply automated content filters (NSFW classifiers) to the generation results of the model. There are no immediate plans to release model weights.
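For completeness, the sketch below constructs the 12-D pose vector discussed in the ablation above (four projected 3D box corners plus their depths) under a pinhole camera model; the intrinsics and corner coordinates are illustrative values, not taken from any of the datasets.

import numpy as np

K_cam = np.array([[500., 0., 128.],     # made-up pinhole intrinsics for a 256 x 256 image
                  [0., 500., 128.],
                  [0., 0., 1.]])

# Four corners of an object's 3D bounding box in camera coordinates (metres); arbitrary values
# spanning the object's local axes.
corners = np.array([[-0.2, -0.1, 2.0],
                    [ 0.2, -0.1, 2.0],
                    [-0.2,  0.1, 2.0],
                    [-0.2, -0.1, 2.4]])

uv_hom = (K_cam @ corners.T).T          # project the corners onto the image plane
uv = uv_hom[:, :2] / uv_hom[:, 2:3]     # 2D pixel coordinates of the projected corners
depth = corners[:, 2:3]                 # per-corner depth, useful for occlusion reasoning

pose_vec = np.concatenate([uv, depth], axis=1).reshape(-1)   # 4 x (2 + 1) = 12-D pose vector
print(pose_vec)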
http://arxiv.org/abs/2406.07996v1
20240612084223
Semantic-Aware Resource Allocation Based on Deep Reinforcement Learning for 5G-V2X HetNets
[ "Zhiyu Shao", "Qiong Wu", "Pingyi Fan", "Nan Cheng", "Qiang Fan", "Jiangzhou Wang" ]
cs.NI
[ "cs.NI", "eess.SP" ]
Semantic-Aware Resource Allocation Based on Deep Reinforcement Learning for 5G-V2X HetNets Zhiyu Shao, Qiong Wu, Senior Member, IEEE, Pingyi Fan, Senior Member, IEEE, Nan Cheng, Senior Member, IEEE, Qiang Fan, Jiangzhou Wang, Fellow, IEEE This work was supported in part by the National Natural Science Foundation of China under Grant No. 61701197, in part by the National Key Research and Development Program of China under Grant No.2021YFA1000500(4), in part by the 111 Project under Grant No. B12018. Zhiyu Shao, Qiong Wu are with the School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China (e-mail: zhiyushao@stu.jiangnan.edu.cn, qiongwu@jiangnan.edu.cn) Pingyi Fan is with the Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China (e-mail: fpy@tsinghua.edu.cn). Nan Cheng is with the State Key Lab. of ISN and School of Telecommunications Engineering, Xidian University, Xi’an 710071, China (e-mail: dr.nan.cheng@ieee.org). Qiang Fan is with Qualcomm, San Jose, CA 95110, USA (e-mail: qf9898@gmail.com). Jiangzhou Wang is with the School of Engineering, University of Kent, CT2 7NT Canterbury, U.K. (email: j.z.wang@kent.ac.uk). June 17, 2024 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT This letter proposes a semantic-aware resource allocation (SARA) framework with flexible duty cycle (DC) coexistence mechanism (SARADC) for 5G-V2X Heterogeneous Network (HetNets) based on deep reinforcement learning (DRL) proximal policy optimization (PPO). Specifically, we investigate V2X networks within a two-tiered HetNets structure. In response to the needs of high-speed vehicular networking in urban environments, we design a semantic communication system and introduce two resource allocation metrics: high-speed semantic transmission rate (HSR) and semantic spectrum efficiency (HSSE). Our main goal is to maximize HSSE. Additionally, we address the coexistence of vehicular users and WiFi users in 5G New Radio Unlicensed (NR-U) networks. To tackle this complex challenge, we propose a novel approach that jointly optimizes flexible DC coexistence mechanism and the allocation of resources and base stations (BSs). Unlike traditional bit transmission methods, our approach integrates the semantic communication paradigm into the communication system. 
Experimental results demonstrate that our proposed solution outperforms traditional bit transmission methods with traditional DC coexistence mechanism in terms of HSSE and semantic throughput (ST) for both vehicular and WiFi users. Semantic communication, vehicular networks, resource allocation, unlicensed spectrum bands, proximal policy optimization (PPO). § INTRODUCTION The fifth-generation (5G) networks aim to provide high-speed, low-latency, and reliable communication services for a wide range of applications, such as vehicle-to-everything (V2X) communications <cit.>. However, challenges arise in capacity and spectrum efficiency due to the increasing number of users, data demands and proliferation of connected devices <cit.>. In dense urban areas, where user densities and traffic volumes are high, traditional networking mechanisms may not suffice to address capacity challenges <cit.>. Deploying small cells and heterogeneous networks (HetNets) <cit.> becomes a promising solution to enhance network capacity by increasing the number of antennas on smaller base stations <cit.>. Spectrum scarcity presents another challenge, as traditional communication methods fail to efficiently utilize the limited spectrum. This inefficiency prompts a shift to semantic communication <cit.>. Semantic communication, focusing on transmitting the meaning of information rather than raw data, has a potential in reducing network traffic and alleviating spectrum scarcity. Recent studies have begun exploring semantic communication in various domains such as images <cit.>, text <cit.>, audio <cit.> and so on. 5G New Radio Unlicensed (NR-U) is a new radio technology that expands 5G HetNets' capacity by operating in unlicensed bands <cit.>. In this case, coexistence mechanisms with other wireless networks such as WiFi has to be taken into account due to interference. There are two existing coexistence mechanisms between NR-U and WiFi: one is listen before talk (LBT) <cit.>; the other mechanism is carrier sensing adaptive transmission (CSAT) <cit.>, where base stations (BSs) reserve specific time slots for WiFi access points (APs) using duty cycle (DC) transmission. However, owing to the uncertainty of channel access, NR-U has a worse performance than continuous operation in licensed bands, and thus traditional NR-U and WiFi coexistence mechanisms pose a new challenge for 5G networks. Furthermore, Traditional bit-based resource allocation which is derived from statistical knowledge of source symbols neglects semantic information, complicating spectrum utilization. This approach focuses on the quantity of bits transmitted rather than meaning, leading to inefficient spectrum use. To improve spectrum efficiency, resource allocation must be reconsidered from a semantic perspective, optimizing transmission and addressing the limitations of existing NR-U and WiFi coexistence mechanisms. To handle the above issue, this letter introduces semantic communication into a two-tier HetNet vehicular communication system in urban environments for the first time. We propose a flexible DC approach to address coexistence issues between vehicular and WiFi users in NR-U networks, while introducing HRS and HSSE metrics in the resource allocation optimization. 
We design a semantic-aware resource allocation framework with flexible DC coexistence mechanism (SARADC) algorithm, which applies PPO DRL to optimize flexible DC and resource allocation based on semantic awareness to maximize semantic throughput (ST) [The source code has been released at: https://github.com/qiongwu86/Semantic-Aware-Resource-Allocation-Based-on-Deep-Reinforcement-Learning-for-5G-V2X-HetNets]. Experimental results demonstrate that our proposed algorithm outperforms other baselines in terms of HSSE and ST for both vehicular and WiFi users. § SYSTEM MODEL As shown in Fig.<ref>, we focus on V2X communication in a high-speed urban environment where N vehicles are initially distributed on the roads with different mobility directions. Each vehicle moves along its selected direction randomly in the environment at a constant speed V, resulting in N V2I links. In this work, we consider a two-tier HetNets deployment: the first tier comprises B_1 macro base stations (MaBs), providing broad coverage and high-capacity communication services operating in licensed frequency bands; the second tier comprises B_2 micro base stations (MiBs) operate in unlicensed frequency bands and provide dense coverage, high-speed and low-latency communication services primarily for areas with high vehicular density. These two tiers of base stations, denoted as B = {B_1, B_2}, accompanied by W APs. The MaBs and MiBs each have R_1 and R_2 RBs, respectively. The N vehicles and W WiFi APs are equipped with DeepSC models <cit.> to encode textual data "How is the road condition?" into semantic information and decode semantic information back to the original textual data "Road ahead is congested, please proceed with caution." Note that the model is pre-trained on BS, and then the trained semantic DeepSC transmitter models are broadcasted and employed directly by all vehicles due to the complex semantic information extraction during training process. §.§ DeepSC Transceivers and Novel Metrics The n-th transmitter vehicle generates a sentence S_n with l words, where S_n = [ s_n,1,s_n,2, … ,s_n,l, … ,s_n,l_q]. The sentence is then input into the encoding part of the DeepSC in the vehicle in order to extract semantic information X_n from S_n, given by: X_n = ch_β( se_α( S_n)), where se_α( ·) and ch_β( ·) are the semantic and channel encoder networks with parameters α and β , respectively. The semantic symbol vector is X_n = [ x_n,1,x_n,2, … ,x_n,ul], where u represents the average number of semantic symbols used for each word. Then the encoded semantic information is transmitted via the wireless channel. The received semantic signal is represented as: Y_n = H_n,iX_n + N, where H_n,i is the channel gain for the i-th V2I link, N is the noise. So the signal-to-interference-noise Ratio (SINR) for the n-th vehicle is SINR_n,b,r = η _n,b,rP_n,b,rH_n,b,r/I_n,b,r + δ ^2, where η _n,b,r is a binary indicators, and η _n,b,r = 1 means that n-th vehicle transmits semantic data to b-th BS using r-th RB. P_n,b,r and H_n,b,r are the transmit power and channel gain from n-th vehicle to b-th BS on the r-th RB, I_n,b,r is the interference on the r-th RB of b-th BS incurred by other BS including both MaBs and MiBs, which can be represented as: I_n,b,r = {[ ∑_n̂ = 1n̂ n^N ∑_b̂ = 1^B_2η _n̂,b̂,rP_n̂,b̂,rH_n̂,b,r ifb ∈B_1; ∑_n̂ = 1n̂ n^N ∑_b̂ = 1^B_1η _n̂,b̂,rP_n̂,b̂,rH_n̂,b,r ifb ∈B_2 , ]. 
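As a toy numerical illustration of the SINR and cross-tier interference defined above, the following sketch evaluates the SINR for a random allocation; the channel model, transmit powers, and allocation indicators η are placeholders chosen only to make the code run, not the simulation parameters of the letter.

import numpy as np

rng = np.random.default_rng(0)
N, B1, B2, R = 5, 2, 3, 4                 # vehicles, MaBs, MiBs and RBs (toy sizes)
B = B1 + B2
noise = 1e-13                             # noise power (placeholder)

H = rng.rayleigh(scale=1e-6, size=(N, B, R)) ** 2   # channel gains (placeholder fading model)
P = np.full((N, B, R), 0.1)                          # transmit powers in W (placeholder)
eta = np.zeros((N, B, R), dtype=int)
for n in range(N):                                   # each vehicle gets one (BS, RB) pair
    eta[n, rng.integers(B), rng.integers(R)] = 1

def sinr(n, b, r):
    # Cross-tier interference: vehicles served by the other tier on the same RB.
    other_tier = range(B1, B) if b < B1 else range(B1)
    I = sum(eta[m, bb, r] * P[m, bb, r] * H[m, b, r]
            for m in range(N) if m != n for bb in other_tier)
    return eta[n, b, r] * P[n, b, r] * H[n, b, r] / (I + noise)

n, b, r = np.argwhere(eta == 1)[0]
print(sinr(n, b, r))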
So the decoded signal can be represented as Ŝ_n = se_μ ^ - 1( ch_ν ^ - 1( Y_n)), where se_μ ^ - 1( ·) is the semantic decoder network with the parameter μ, and ch_ν ^ - 1( ·) is the channel decoder network with the parameter ν. Cross-entropy (CE) is used as the loss function to quantify the difference between Ŝ_n and the original sentence S_n when training the DeepSC model. To evaluate the semantic communication performance, we employ semantic similarity as a performance metric ξ <cit.>, given by: ξ = B( S_n) B( Ŝ_n)^T / ( ‖ B( S_n) ‖‖ B( Ŝ_n) ‖ ). Here, B( ·) represents the bidirectional encoder representation from Transformers (BERT) model for sentences, and ξ measures the similarity between the two sentences, where 0 ≤ξ≤ 1 and ξ = 1 corresponds to the highest similarity. Assuming the text dataset is Υ = ∑_n = 1^N S_n, the average semantic information per sentence is I = ∑_n = 1^N I_n p( S_n), where I_n is the semantic information of sentence S_n and p( S_n) represents the probability of sentence S_n appearing in the text dataset. Similarly, the average length per sentence is L = ∑_n = 1^N l_n p( S_n). The HSR can be expressed as HSR = W I ξ / ( u L), where the unit of u is sut <cit.>, thus the unit of HSR is suts/s. Therefore, we can further derive the metric HSSE: HSSE = HSR/W = I ξ / ( u L), which stands for the efficiency of transmitting semantic information in symbols per unit of available bandwidth. Note that the unit of HSSE is suts/s/Hz. In Eq. (<ref>), the value of I/L depends on the type of source, and thus, it is a constant, which can be omitted during optimization. ξ is modeled as ξ = Ψ( u,SINR), as described in <cit.>. §.§ Flexible DC Mechanism Since vehicles use NR-U for semantic transmission and WiFi employs unlicensed spectrum, collisions may occur due to overlapping frequency bands. Fig. <ref> shows that a fixed time slot T is divided into two segments: in T_1, vehicles exclusively transmit semantic data packets over NR-U, while WiFi users access the unlicensed spectrum in T_2. The time durations of T_1 and T_2 are adaptable based on network requirements. According to (<ref>), the ST of vehicle users is ST_n = HSR_n×T_1. As the transmission rate of WiFi users within proximity of the n-th vehicle is R_w, the ST of WiFi users according to IEEE 802.11ax (WiFi 6) <cit.> is ST_w = ( R_w/μ) ×T_2, where R_w/μ is HSR_w and μ is the average number of semantic symbols per word. §.§ Optimization Problem The objective of jointly optimizing the SARADC mechanism is to maximize the average HSSE, that is, to find the optimal channel allocation β, power allocation p, the time period T_1 during which vehicles exclusively transmit semantic data packets over NR-U, and the average number of semantic symbols per word μ. The optimization problem can be expressed as follows: P_0: max_β ,p,T_1,μ (1/N)∑_n = 1^N HSSE_n s.t. ∑_n = 1^N ST_n≥ST_n^min , ∑_n = 1^N ST_w≥ST_w^min , ∑_n = 1^N η _n,b,r≤ 1 ∀ b ∈ B,∀ r ∈ R, ξ≥ξ _th, u ∈{0,1, ⋯ ,u_max}. Constraints (<ref>) and (<ref>) ensure that both vehicle users and WiFi users meet their minimum throughput thresholds ST_n^min and ST_w^min, respectively. Constraint (<ref>) ensures that each resource block (RB) is allocated to at most one vehicle, following OFDMA principles. Constraint (<ref>) imposes a minimum requirement ξ _th on semantic similarity. Constraint (<ref>) limits the average number of semantic symbols per word to at most u_max.
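To make the metrics and the duty-cycle split concrete, the short sketch below evaluates HSR, HSSE and the semantic throughputs of the vehicle and WiFi users for one choice of T_1; all numerical values, including the semantic similarity ξ (which in the letter comes from the DeepSC mapping Ψ(u, SINR)), are assumed placeholders.

W_bw  = 1e6          # RB bandwidth in Hz (assumed)
I_avg = 12.0         # average semantic information per sentence, in suts (assumed)
L_avg = 8.0          # average sentence length in words (assumed)
u     = 4            # semantic symbols per word
xi    = 0.9          # semantic similarity; in the letter xi = Psi(u, SINR) from DeepSC
T, T1 = 1e-3, 0.6e-3 # slot length and the NR-U share T_1 of the duty cycle
R_w   = 2e6          # WiFi bit rate near the vehicle, in bit/s (assumed)

HSR  = W_bw * I_avg * xi / (u * L_avg)   # suts/s
HSSE = HSR / W_bw                        # suts/s/Hz
ST_v = HSR * T1                          # vehicle semantic throughput within the slot
ST_w = (R_w / u) * (T - T1)              # WiFi users transmit during T_2 = T - T1

print(HSSE, ST_v, ST_w)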
§ PROPOSED SARADC ALGORITHM APPROACH To address the above problem, we propose a SARADC algorithm utilizes Deep Reinforcement Learning (DRL), specifically Proximal Policy Optimization (PPO), to address the challenges posed by rapidly changing channel conditions in high-speed vehicular networks. By leveraging DRL, the algorithm can adapt to dynamic environments and complex state spaces, thus optimizing resource allocation, including semantic data transmission, thereby enhancing communication efficiency. The algorithm is expressed as follows: At each time step t, the state of each vehicle agent within the network includes various parameters such as instant channel gain h_n,b,r^t , SINR SINR_n,b,r^t when connected to b-th BS on r-th RB, HSSE of the vehicle HSSE_n_n,b,r^t, the HSSE of the WiFi at the previous time HSSE_w_n,b,r^t - 1 and the previous interference I_n,b,r^t - 1 from other vehicles to b-th BS on r-th RB. Thus, the state of each agent can be represented as s_n^t = [ h_n,b,r^t,SINR_n,b,r^t,HSSE_n_n,b,r^t,HSSE_w_n,b,r^t - 1,I_n,b,r^t - 1] . After observing the environment state s_n^t, each agent takes an action a_n^t according to a specific policy π. The actions include BS and RB allocation β _n^t, allocated transmission power p_n^t, the proportion of DC for vehicle connected to MiBs θ _1_n^t and the number of semantic symbols represented by each word μ _n^t. Thus, the action of each agent can be represented as a_n^t = [ β _n^t,p_n^t,T_1_n^t,μ _n^t] . After taking action a_n^t, each agent receives a reward r_n^t to evaluate its behavior, and the environment transits to the next state s_n^t + 1. The system further penalizes the agent by including a penalty factor C if a RB is shared by multiple vehicles, and rewards the agent if the ST of WiFi exceeds a minimum threshold. The reward for each agent can be expressed as r_n^t = ∑_n = 1^N ST_n/NST_nω( ST_w,ST_w) - Cψ( β _n^t) . Here, the function ω satisfies constraint (<ref>), and function ψ satisfies (<ref>), which are shown as below: ω( x,y) = {[ 1,x ≥ y; 0,x < y ]. ψ( β _v^t) = {[ 1,∑_r = 1^R β _v^t > 1; 0,otherwise . ]. Meanwhile, the cumulative discounted reward is R_n^t = ∑_τ = 0^∞γ ^τr_n^t + τ, where γ is the discount factor, which lies in the range ( 0,1). PPO algorithm consists of two neural networks: actor network and critic network. The actor network ,represented by parameters θ, determines the probability distribution of actions a_n^t given a state s_n^t. The critic network, represented by parameters θ ^v, estimates the expected return given a state. Meanwhile, the old parameters of the actor and critic neural networks, denoted as θ _old and θ _old^v, respectively, are used to constrain the variation of the current policy. The agent interacts with the environment using the old policy network and collects a batch of experiences ( s_n^t,a_n^t,r_n^t,s_n^t + 1). Subsequently, these experiences are utilized to update the parameters θ and θ ^v iteratively. During this process, the advantage function is computed and normalized based on the cumulative discounted reward while the value function estimation can be represented as A_t = ( R_n^t - V( s_n^t))/A_tmax Once a sufficient number of experiences is collected, a batch of experiences is randomly sampled from the replay experience buffer R. The log probability of actions π( a_n^t| s_n^t;θ.) and state values V( s_n^t;θ ^v) under the current policy are computed using θ and θ ^v. Similarly, π( a_n^t| s_n^t;θ _old.) 
and V( s_n^t;θ _old^v) under the old policy are computed based on θ _old and θ _old^v. The loss of the actor network is L^actor(θ ) = - min( r_t(θ )A_t, clip ( r_t(θ ),1 - ε ,1 + ε)A_t) , where r_t(θ ) = exp( π( a_n^t| s_n^t;θ.) - π( a_n^t| s_n^t;θ _old.)) is the ratio between the probabilities of the action under the new and old policies, computed from their log probabilities, and clip( ·) limits the range of r_t(θ )A_t by clipping. The loss of the critic network is L^critic(θ ^v) = 0.5( R_n^t - V( s_n^t;θ ^v))^2. The total loss of the actor-critic network is L^total(θ ,θ ^v) = L^actor(θ ) + L^critic(θ ^v) - c ×( entropy), where entropy = - π( a_n^t| s_n^t;θ.) ×log( π( a_n^t| s_n^t;θ.)) is the entropy bonus, and c is the entropy regularization coefficient. The parameters θ and θ ^v of the actor and critic networks are updated using gradient descent: θ = θ - α·∇ _θL^total(θ ,θ ^v), θ ^v = θ ^v - α·∇ _θ ^vL^total(θ ,θ ^v). The pseudo code of the training process for the proposed SARADC scheme is summarized in Algorithm 1. § SIMULATION RESULTS In this section, we evaluate the performance of the proposed SARADC algorithm using Python 3.7 as the simulation tool. We consider a scenario with 5 vehicles in a square area spanning 1000×1000 m^2, each moving at a fixed speed of 36 km/h. The channel condition is updated every 100 ms with the path loss 128.1 + 37.6 log( d ), where d represents the distance between vehicles and BSs. The parameters are shown in Table <ref>. To evaluate the performance of the proposed SARADC algorithm with flexible DC, we compare our algorithm against four baselines as below while selecting HSSE and ST as performance metrics: * DDPG: SARA based on DDPG (deep deterministic policy gradient) RL with flexible DC. * TD3: SARA based on TD3 (twin delayed deep deterministic policy gradient) RL with flexible DC. * Fixed and random: SARA based on PPO RL with fixed DC and random DC. * DDPG_NO_SC: A bit-based resource allocation algorithm based on DDPG RL with flexible DC, which does not consider semantic symbols. Fig. <ref> illustrates the reward of the training process. Our SARADC algorithm achieves faster convergence and higher stability, yielding superior cumulative rewards with equivalent training iterations. Compared to the TD3 and DDPG algorithms, SARADC with PPO DRL ensures training stability by constraining policy changes. Additionally, PPO effectively balances exploration and exploitation, expediting convergence to optimal solutions for complex tasks like SARA and DC allocation. Fig. <ref>(a) illustrates the HSSE of vehicles and WiFi users under different algorithms. Overall, WiFi users demonstrate higher HSSE than vehicles due to their higher bandwidth demands. Our SARADC algorithm excels in HSSE, surpassing the TD3 and DDPG algorithms. The stability of PPO DRL enables SARADC to explore and exploit the environment effectively, resulting in a higher HSSE than TD3 and DDPG. Additionally, all algorithms incorporating semantic information outperform DDPG_NO_SC, which is based on traditional communication. This is attributed to the fact that the meaningful and user-oriented data conveyed by semantic information ensures more efficient resource utilization. Fig. <ref>(b) illustrates the relationship between HSSE and μ. As μ increases, the HSSE of our proposed algorithm and the other semantic-information-based algorithms remains constant. This constancy arises from the fact that a system transmitting semantic information is independent of the transforming factor μ. The HSSE of the DDPG_NO_SC algorithm, in contrast, gradually decreases as μ increases.
Specifically, when μ is less than 8 bits/word (i.e., a word is encoded by fewer than 8 bits), our proposed algorithm outperforms traditional algorithms. This suggests that the choice of semantic source coding scheme is crucial. Fig. <ref>(c) illustrates the total ST of vehicles and WiFi users under different algorithms and DC strategies. It can be seen that fixed or random DC strategies lead to lower total ST due to their lower flexibility and adaptability in meeting network demands. In contrast, our proposed algorithm consistently outperforms the other two algorithms, TD3 and DDPG, irrespective of the DC strategy employed. This is because our adoption of PPO DRL enables faster identification of optimal strategies. § CONCLUSION This letter proposed a SARADC framework tailored for 5G-V2X HetNets using DRL with PPO. It incorporated semantic communication in high-speed vehicular networking, offering a more efficient and user-oriented approach by maximizing the HSSE of all vehicles to obtain the optimal resource allocation strategy. We compared our proposed algorithm with four other baseline methods. Our proposed SARADC achieves excellent performance in terms of HSSE and ST, proving the effectiveness of the proposed scheme and of semantic communication. The conclusions are summarized as follows: 1) Our proposed SARADC algorithm achieves higher HSSE with meaningful, user-oriented semantic data, ensuring efficient resource utilization. 2) Fixed or random DC strategies result in lower total system throughput due to their lack of adaptability. In contrast, our flexible DC adapts to demand, boosting performance. 3) When data were mapped to fewer than 8 bits per word using traditional encoding, semantic information transmission showed noticeable advantages.
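For completeness, a compact numerical sketch of the clipped PPO update used in Section III is given below; log-probabilities, advantages, returns and value estimates are random stand-ins for quantities that would be collected from the V2X environment, and the hyperparameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
eps, c_ent, batch = 0.2, 0.01, 128        # clip range and entropy coefficient (illustrative)

logp_new = rng.normal(-1.0, 0.1, size=batch)              # log pi_theta(a|s), current policy
logp_old = logp_new + rng.normal(0.0, 0.05, size=batch)   # log pi_theta_old(a|s) from the rollout
adv      = rng.normal(size=batch)
adv      = (adv - adv.mean()) / (np.abs(adv).max() + 1e-8)   # normalised advantages
returns  = rng.normal(size=batch)                          # discounted returns
values   = returns + rng.normal(0.0, 0.1, size=batch)      # critic estimates V(s)

ratio   = np.exp(logp_new - logp_old)                      # probability ratio r_t(theta)
actor   = -np.mean(np.minimum(ratio * adv, np.clip(ratio, 1 - eps, 1 + eps) * adv))
critic  = 0.5 * np.mean((returns - values) ** 2)
entropy = -np.mean(np.exp(logp_new) * logp_new)            # entropy bonus, as in the total loss
total   = actor + critic - c_ent * entropy                 # minimised by gradient descent on theta
print(total)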
http://arxiv.org/abs/2406.09224v1
20240613152619
Cascaded injection locking of optomechanical crystal oscillators
[ "David Alonso-Tomás", "Guillermo Arregui", "Laura Mercadé", "Alejandro Martínez", "Amadeu Griol", "Néstor E. Capuj", "Daniel Navarro-Urrios" ]
physics.optics
[ "physics.optics", "cond-mat.mes-hall", "nlin.AO" ]
APS/123-QED MIND-IN2UB, Departament d'Enginyeria Electrónica i Biomédica, Facultat de Física, Universitat de Barcelona, Martí i Franquès 1, Barcelona 08028, Spain DTU Electro, Department of Electrical and Photonics Engineering, Technical University of Denmark, Ørsteds Plads 343, Kgs. Lyngby, DK-2800, Denmark Nanophotonics Technology Center, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain Nanophotonics Technology Center, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain Nanophotonics Technology Center, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain Depto. Física, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Spain Instituto Universitario de Materiales y Nanotecnología, Universidad de La Laguna, 38071 Santa Cruz de Tenerife, Spain dnavarro@ub.edu MIND-IN2UB, Departament d'Enginyeria Electrónica i Biomédica, Facultat de Física, Universitat de Barcelona, Martí i Franquès 1, Barcelona 08028, Spain § ABSTRACT Optomechanical oscillators stand out as high-performance and versatile candidates for serving as reference clocks in sequential photonic integrated circuits. Indeed, they have the unique capability of simultaneously generating mechanical tones and optical signal modulations at frequencies determined by their geometrical design. In this context, the concept of synchronization introduces a powerful means to precisely coordinate the dynamics of multiple oscillators in a controlled manner, thus increasing efficiency and preventing errors in signal processing photonic systems or communication interfaces. In this work, we demonstrate the cascaded injection locking of a pair of silicon-based optomechanical crystal cavities to an external reference signal that subtly modulates the laser driving one of the oscillators. Both cavities interact solely through a weak mechanical link, making the extension of this synchronization mechanism to an increased number of optomechanical oscillators within a common chip more feasible than relying solely on optical interactions. Thus, the combination of the obtained results, supported by a numerical model, with remote optical injection locking schemes discussed in the literature, lays the groundwork for the distribution of reference signals within large networks of processing elements in future phonon-photon hybrid circuits. Cascaded injection locking of optomechanical crystal oscillators Daniel Navarro-Urrios June 17, 2024 ================================================================ Among the diverse array of optomechanical (OM) devices <cit.>, optomechanical oscillators (OMOs) have attracted significant attention due to their potential in addressing various scientific and technological challenges. These oscillators are exceptional platforms, characterized by high-amplitude, self-sustained and coherent mechanical motion driven and controlled by optical fields <cit.>. In this context, although they have already found applications in precision sensing <cit.> as well as in frequency generation and conversion <cit.> among others, their properties make them excellent candidates for serving as clock signals in photonic integrated processing circuits. In fact, with the recent advances in this field and the possibility of combining optomechanical crystal cavities (OMCs) with routed microwave phonons <cit.>, the approach could be naturally extended to chip-integrated phonon-photon platforms where multiple of these oscillators interact with each other. 
However, the extension of this system to a large network of OMOs generates the need for a mechanism that prevents errors related to frequency dispersion and improves the efficiency of individual OMOs in terms of phase noise and frequency jitter. In this context, injection locking, an established process in electronic and radio-frequency (RF) systems pioneered by the seminal paper of Adler et al. <cit.>, holds the key to synchronizing actions and further improving the overall response of the system. In an injection locking mechanism, the frequency and phase of an oscillator become synchronized with those of an external signal. This external signal is injected into the oscillator and originates from another local oscillator (LO) that acts as a reference clock. The first observation of injection locking in an OMO was documented in a silicon microtoroid subjected to a modulation of the same optical drive that leads to the optomechanical oscillations <cit.>. Subsequently, additional works have demonstrated injection locking in various OMOs using a similar optical injection scheme where the LO comes from a signal generator <cit.> or even from another OMO <cit.>. Injection locking has also been achieved through electrical capacitive actuation <cit.> and by mechanical actuation with propagating acoustic waves <cit.>. In order to distribute the reference signal within a chip, a natural strategy is that of synchronizing several OMOs, physically placed at different locations of the chip, to the external signal in a cascaded injection locking (CIL) configuration. To achieve this, supplying a modulated optical pump concurrently or sequentially to multiple OMOs, either in a parallel or series arrangement, appears to be a viable approach. However, the practical execution of this strategy on a reasonably large scale presents a significant challenge due to the requirement that all these OMOs must share an identical resonant optical wavelength. In this context, a small number of reports have claimed the spontaneous synchronization of OMOs sharing a common optical mode <cit.>. In fact, to date there is only one work reporting the cascaded locking of individual OMOs <cit.>. In that study, the OMOs are optically interconnected by means of a common waveguide in a sequential arrangement and no external reference was fed to the first OMO. Even though this configuration allows scalability, its implementation with silicon microdisks complicates the direct extraction of coherent mechanical signals. Moreover, OMCs suffer from stronger optical dispersion, making an all-optically mediated synchronization complicated. Here, we demonstrate an approach for achieving CIL of the dynamics of a pair of chip-integrated OMCs acting as OMOs to an external reference signal. In our platform, the resonators are optically isolated from each other and interact through a weak mechanical link. Exploiting a mechanical interaction instead of an optical one allows a more straightforward means of achieving spontaneous synchronization of OMCs <cit.>, given that the mechanical resonant frequency dispersion among different OMCs is usually much smaller than its optical counterpart.
The interaction is considered unidirectional since one of them (referred to as the "main" or "leader") oscillates with significantly larger amplitude and injection-locks to the reference tone, while the other one (designated as the "secondary" or "follower") spontaneously synchronizes to the main one and/or the external reference only through the weak mechanical interaction. In this context, we also report that it is possible to injection-lock the secondary OMO to the external reference, even though the main OMO is the one receiving the reference tone and oscillating independently. § RESULTS Optical-Mechanical-Optical Configuration. The experiment conducted in this work closely parallels the scenario depicted in Figure <ref>a. Two OM cavities, represented as Fabry-Perot cavities with a mirror attached to a spring, weakly interact through another spring of lower elastic constant. Both cavities support different optical resonances, with red representing the main OMO (λ_M) and blue representing the secondary one (λ_S), which are resonantly pumped by evanescently coupled laser light propagating through a tapered fiber. Each oscillator has a resonant mechanical angular frequency of Ω_i, where i = M, S denotes the main and secondary oscillators, respectively. The oscillation in the cavity length leads to a spectral shift of the optical resonances. The strength with which these resonant frequencies (ω = 2π c/λ) are pulled (linearly) in terms of mechanical displacement can be defined as G = ∂ω/∂ x, which is related to the vacuum optomechanical coupling rate g_0 = G x_ZPF. Here, x_ZPF = √(ħ/2mΩ) represents the mechanical zero-point fluctuation of the mechanical mode with effective mass m and frequency Ω. It is worth noting that if the incident laser wavelength is in resonance with an optical mode, the transmitted light will undergo a modulation enveloped by the characteristic Lorentzian shape of a Fabry-Perot cavity mode. Hence, the input light acts as a probe to observe the mechanical oscillation of the OMOs, but at the same time can play an active role in the optomechanical system through radiation pressure forces. Both OM cavities are optically driven into a state of high-amplitude, coherent and self-sustained mechanical motion, referred to as mechanical lasing, in such a way that one oscillates with significantly larger amplitude than the other. This configuration effectively emulates a leader-follower arrangement of oscillators <cit.>. Subsequently, a periodic external force is applied solely to the main oscillator, resulting from an external modulation in the amplitude of the input light in resonance with its supported optical mode. Thus, the secondary one perceives this actuation only through the weak mechanical interaction between the systems. If the frequency of the external modulation is close enough to the resonant frequency of one of the OMOs, its dynamics can be locked to the external modulation <cit.>. CIL occurs when the two resonators synchronize to the external perturbation.
Consider the presented scheme, where the interaction can be understood as a restoring force emerging from the mechanical link, the equations governing the dynamics of the leader-follower arrangement of oscillators can be described as two reactively coupled linear forced harmonic oscillators: ẍ_i + Γ_iẋ_i + Ω_i^2 x_i = F_0,i(t)/m_i - 2JΩ_ix_Mδ_i,S where δ is the dirac delta, J corresponds to the reactive coupling strength, Γ_i is the mechanical decay rate of each resonator and F_0,i(t) = g_0,iħ n_0,i(t)/x_ZPF, the radiation pressure force exerted by the temporally dependent intra-cavity photon number (n_0,i). The temporal behaviour of this magnitude is governed by the self-induced modulation associated to the driving mechanism, as well as a periodic external modulation with frequency (f_mod = Ω_mod/2π) in the case of the main OMO equation (see <cit.> and supplementary section S3 for more details). The coupling term only appears in the equation corresponding to the follower oscillator since the main cavity oscillates with much higher amplitude than the secondary one, emulating the unidirectionality of the system. Tested Device and Experimental Setup. In this work we use silicon one-dimensional OMCs, behaving as OMOs. These structures are planarly chip-integrated suspended nanobeams which behave as photonic crystals with a defect region, whose design and fabrication methodology are detailed in the Methods section. The tested device is composed of a pair of nominally identical OMOs, where the outermost five cells of each nanobeam are anchored to the frame, restricting the flexural modes of the geometry to its central area. The specific geometry of the OMO devices, allows positioning the tapered fiber between both geometries, enabling their simultaneous optical excitation. Here, the OM coupling is understood as the change in resonant optical frequency induced by the moving boundaries and photo-elastic effect contributions of the mechanical mode <cit.>. The interaction between both OMOs is provided through an engineered mechanical link (Fig. <ref>b) between them, placed on the anchored zone, thus ensuring a weak mechanical coupling. In this platform, the pumping mechanism enabling mechanical lasing of the OMOs is the anharmonic radiation pressure modulation F_0,i(t) due to the activation of a thermo-optic/free-carrier dispersion self-pulsing (SP) limit-cycle that emerges in silicon under a certain threshold of number of photons inside the cavity (see Methods and supplementary information S2). This driving mechanism is useful when the system is in the sideband unresolved regime, where the decay rate of the optical cavity modes is much higher than the frequencies of mechanical resonators, and has been reported by us and other groups previously <cit.>. In particular, we have driven to the mechanical lasing regime the three-antinodes flexural mechanical modes of the secondary and main OMOs, whose room temperature natural frequencies are (f_S,f_M) = (Ω_S, Ω_M)/2π = (99.57, 99.65) MHz. Here, fabrication disorders break the symmetry, causing the OMCs to exhibit different mechanical natural frequencies. These fabrication deviations also lead to an increase in the OM coupling of these in-plane flexural modes to hundreds of kHz <cit.>, even though the simulated OM coupling of the nominal geometry is null. The experimental setup, which is schematically represented in the Methods section, implements the configuration described in Fig. <ref>a. 
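A minimal numerical sketch of the leader-follower model of Eq. (1) is given below. It deliberately simplifies the physics: the leader is driven by a pure harmonic force standing in for the modulated radiation-pressure term, whereas in the actual system the self-pulsing mechanism provides the self-sustained drive of both oscillators; all parameter values other than the two natural frequencies are illustrative.

import numpy as np

# Illustrative parameters (not the experimental values)
f_M, f_S = 99.65e6, 99.57e6                  # natural frequencies in Hz
O_M, O_S = 2 * np.pi * f_M, 2 * np.pi * f_S
Gam   = 2 * np.pi * 0.2e6                    # mechanical decay rate
J     = 2 * np.pi * 10e3                     # weak reactive coupling
f_mod = 99.61e6                              # external modulation frequency
F_M   = 1.0                                  # leader drive amplitude (arbitrary units, mass folded in)

dt, n_steps = 1e-10, 400_000
xM = vM = xS = vS = 0.0
trace = np.empty(n_steps)
for k in range(n_steps):
    t = k * dt
    drive = F_M * np.cos(2 * np.pi * f_mod * t)        # modulated force acting on the leader only
    aM = drive - Gam * vM - O_M ** 2 * xM
    aS = -Gam * vS - O_S ** 2 * xS - 2 * J * O_S * xM  # follower feels only the weak mechanical link
    vM += aM * dt; xM += vM * dt                       # semi-implicit Euler step
    vS += aS * dt; xS += vS * dt
    trace[k] = xS

spec  = np.abs(np.fft.rfft(trace[n_steps // 2:]))      # follower spectrum, transient discarded
freqs = np.fft.rfftfreq(n_steps - n_steps // 2, dt)
print(freqs[np.argmax(spec)])                          # dominant follower frequency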
The two OMOs are excited using tunable diode lasers, respectively denoted by L1 and L2, whose polarization state is set to be transverse electric (i.e., electric field in the plane of the sample) using fiber polarization controllers (FPC). Light emitted from L1 is amplitude modulated using a Mach-Zehnder electro-optic modulator (EOM) with a half-wave voltage V_π = 3.5 V. A signal generator (SG) provides the periodic external modulation. Both lasers are then combined into a single tapered fiber that has been shaped into a microloop at its thinnest region <cit.>. The bottom part of the loop acts as a probe that allows the local excitation of the optical cavity modes of the OMOs if the cavity regions are in the near field of the fiber and the lasers are in resonance with the respective optical modes. The signal decoupled from the resonators is then split and passed through Fabry-Perot wavelength filters (WF) before impinging on two photodetectors (PD1 and PD2). The electrical output can be analyzed in the frequency domain by a spectrum analyzer (SA) and in the temporal domain with a 4-channel oscilloscope (OSC). All the electronic equipment has a response bandwidth of 11 GHz. The optical transmission spectra of the OMOs displayed in Figure <ref>c show that the first optical resonances of each of the OMOs are separated by about 7 nm (1535.4 nm for the secondary OMO and 1542.4 nm for the main one). This large separation (0.45%) emerges from fabrication imperfections and the presence of the fiber, since the optical properties are very sensitive to small deviations in the geometry or the surrounding media. In this case, the fraction of optical power coupled to each OMO is roughly 35% and the loaded optical quality factors are Q_S=7.3 · 10^3 and Q_M=11.3 · 10^3, while the intrinsic values are 9.4 · 10^3 and 14.3 · 10^3, respectively. This configuration ensures the absence of direct optical interaction between the two OMOs. Spontaneous Synchronization. First, we focus on the case where no external modulation is provided. As previously explored in <cit.>, the weak interaction between the two self-sustained OMOs in the mechanical lasing regime can induce spontaneous synchronization of the follower oscillator to the leader within a specific range of detuning between their natural frequencies. In the present study, the oscillation frequencies of both OMOs just above the self-oscillation threshold are too far apart to enable spontaneous synchronization of their dynamics. To control the frequency difference, we increased the wavelength of the laser driving the OMO with the larger mechanical frequency, which in this case is the main cavity. This leads to an increase in the time-averaged number of intracavity photons, causing more photons to be absorbed and the geometry to heat up. Figures <ref>d and <ref>e illustrate this experiment. The top panels show, in a contour plot representation, the RF spectrum of the signal detected by PD1 and PD2 as the wavelength of L1 is modified. The bottom panels illustrate the RF spectrum at the initial wavelength. The spectra corresponding to the main OMO (Figure <ref>d) show that f_M reduces gradually as the wavelength increases, in an almost linear way, due to a relaxation of the elastic constants of the material <cit.>. Here, the only signs of the mechanical feedback coming from the secondary OMO appear as RF peaks at f_S and 2f_M - f_S.
These sidebands slightly approach f_S and disappear within a range of wavelengths between 1544.8 nm and 1546.1 nm, which corresponds to an OMO frequency difference of |f_M - f_S| < 25 kHz. The secondary OMO dynamics, displayed in Figure <ref>e, reveals a similar behaviour. At the initial condition, a high-amplitude RF peak is placed at f_S, surrounded by two principal sidebands at f_M and 2f_S - f_M of much larger amplitude than in the previous case, which confirms the unidirectional character of the interaction. Here, it is clear that the range in which the sidebands disappear matches an abrupt jump of f_S, which now becomes equal to f_M, providing evidence of spontaneous synchronization of the secondary OMO dynamics to that of the main OMO. The magnitude of the main RF peak associated with the secondary OMO oscillation keeps the same value throughout the whole spectral range analyzed in the experiment, which rules out the possibility of resonant forcing. The wide range of tunability of f_M, expanded when compared to the case reported in Ref. <cit.>, allows displaying the whole synchronization range, i.e., the whole Arnold tongue. Under these conditions, the Arnold tongue appears to be symmetric with respect to the natural value of f_S. Cascaded Injection Locking. The previous experiment demonstrates the spontaneous synchronization of the two OMOs in a leader-follower scheme. In the upcoming studies, we keep the wavelength of L1 at 1544 nm (see the yellow dashed line in Figure <ref>e), where |f_M - f_S| = 45 kHz, placing the two OMOs outside the Arnold tongue, i.e., they do not synchronize spontaneously. We now introduce a harmonic signal of frequency f_mod and amplitude V_max generated by the SG, which modulates the amplitude of the output of L1. Figure <ref> shows the result of monitoring the RF signal of the secondary OMO optical channel as a function of f_mod for different V_max values. When f_mod is sufficiently detuned from the natural oscillation frequencies, the RF spectrum consists of the peaks reported in Figure <ref>e, with the main peak at f_S surrounded by two main sidebands at f_M and 2f_M - f_S (see Figure <ref>a or the grey spectra in Figure <ref>e). Additionally, there is the contribution of the modulation tone (green dashed arrow in Figure <ref>a). Given that the modulation is applied to L1, the appearance of this tone in the signal corresponding to the secondary OMO is a consequence of the interaction with the main OMO. Indeed, it follows the spectral response of the flexural in-plane mechanical modes, which have intrinsic natural linewidths of typically 0.2 MHz at atmospheric pressure. Now, as f_mod increases and approaches f_S, we observe sidebands corresponding to the beat between the external modulation and higher harmonics of the secondary OMO signal. When the frequency of the external modulation is close enough to f_S, the latter suffers a frequency pulling effect towards f_mod, which results in a narrow injection locking region of the secondary OMO dynamics to the external tone. By further increasing f_mod, so that we analyze the region around f_M, the same effects are displayed but to a greater extent. This is consistent with the fact that the main OMO is directly experiencing the modulation of the radiation pressure force while the secondary OMO senses a much weaker mechanical perturbation through the mechanical link. It is worth noting that the frequency pulling effect in both injection locking ranges is the expected behaviour in a phase locking mechanism <cit.>. 
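This pulling-and-locking behaviour can be captured by a minimal Adler-type phase model. The following schematic Python sketch is an illustration only: the locking strength K, the detuning grid and the integration times are arbitrary placeholders and are not the parameters of the devices studied here. It integrates dφ/dt = 2πΔf - K sin(φ) and extracts the residual beat frequency, showing a pulled frequency just outside the locking range and a flat plateau at zero inside it.

```python
# Schematic Adler-type phase model (illustrative placeholder parameters only).
import numpy as np
from scipy.integrate import solve_ivp

K = 2 * np.pi * 20e3                      # half locking range in rad/s (placeholder)
detunings = np.linspace(-60e3, 60e3, 61)  # free-running detuning grid in Hz (placeholder)

residual_beat = []
for df in detunings:
    # d(phi)/dt = 2*pi*df - K*sin(phi): phase of the oscillator relative to the drive
    rhs = lambda t, phi: [2 * np.pi * df - K * np.sin(phi[0])]
    sol = solve_ivp(rhs, (0.0, 2e-3), [0.0], max_step=2e-7)
    t, phi = sol.t, sol.y[0]
    half = len(t) // 2                    # discard the initial transient
    slope = (phi[-1] - phi[half]) / (t[-1] - t[half])
    residual_beat.append(slope / (2 * np.pi))  # Hz; ~0 when injection-locked

# Inside |df| < K/(2*pi) the residual beat frequency vanishes (locking plateau);
# just outside it is strongly pulled below |df| (frequency pulling).
```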
If we focus now on panels <ref>b-d, we observe that the two locking regions described before, denoted with blue and red arrows for the secondary and main OMO injection locking ranges respectively, widen as V_max increases. Above V_max = 0.0051V_π, CIL of the two oscillators to the external drive is achieved, which is highlighted with a light green arrow. The CIL range starts at the lower-frequency end of the main OMO injection locking range, which is where the frequencies of the two OMOs are closest. Within this regime, the RF spectrum collapses into a single tone at f_mod (see green spectra of Fig. <ref>e). The frequency range of CIL widens as V_max increases, reaching a few tens of kHz in the extreme case (see Fig. <ref>d). For large enough f_mod values, the two OMO frequencies become too far apart to maintain the CIL and it is lost; however, the main OMO remains locked to the external reference signal over a larger bandwidth. In this latter situation, the secondary cavity oscillates independently again (red spectra of Figure <ref>e) and sidebands corresponding to the beat between the external modulation and secondary OMO harmonics appear at combinations of their frequencies. It is important to mention that Figure <ref>b and Figures <ref>c-d represent different ways of accessing the CIL regime. In the first case, the secondary oscillator spontaneously synchronizes to the perturbation generated by the main one when it is locked to the external frequency. This occurs because the detuning between the mechanical frequencies after the injection locking of the main OMO falls below the threshold for spontaneous synchronization (Fig. <ref>e). On the other hand, when the external modulation generates a mechanical perturbation large enough to keep the secondary OMO locked up to the main Arnold tongue, the CIL is accessed through the locking of both oscillators to the external feedback. The system of coupled differential equations corresponding to the mechanical resonators (Eq. <ref>) and the SP mechanism (see Methods and supplementary S2) can be solved numerically to compute the temporal dynamics of intra-cavity photons and, hence, the transmission temporal traces for the main and secondary OMO (see Supplementary Material S3). In order to compare numerical simulations with experimental data, a Fast Fourier Transform (FFT) is performed on the computed transmission temporal trace of the secondary OMO as f_mod is swept over a range between the natural frequencies. Figures <ref>g-i show the same contour RF representation used to visualize the experimental measurements, for different amplitudes of the external modulation. Here, simulations clearly reproduce the different regimes observed, including the injection locking range induced in the secondary OMO by the mechanical perturbation, as well as the CIL of both OMOs for higher amplitudes of modulation. Temporal Traces and Phase Noise. Temporal measurements of the dynamics of the two OMOs in the CIL regime (f_mod = 99.61 MHz) were performed using the oscilloscope. Figure <ref>a represents the time traces of the external modulation signal and the secondary OMO when it is locked to the external tone by means of CIL. Both traces are taken simultaneously, using the first one as a trigger signal in the oscilloscope. A zoom of a single period of the two traces is shown in Figure <ref>b, where the secondary OMO transmission is now represented in a color scale. 
Figure <ref>c displays the transmission signal in a polar representation suitable to avoid self-intersections and to superimpose all the full cycles recorded in the transmission temporal trace, which spans a total of 1 μs, i.e., about 100 cycles. This is a specific type of Poincaré map in which the transmission of the secondary OMO is sampled at a frequency equal to f_mod over all the cycles. The radius is associated with the limit-cycle optical transmission trace and the angle with the relative phase of the harmonic modulation signal with respect to its maximum, i.e., we express the reference signal as cos(2π f_mod t), where 2π f_mod t is the polar angle of the plot. To better illustrate this, the transmission curves have been plotted in both sub-panels with a common color scale linked to each phase point. If there is no frequency locking between the two signals, the resulting curve will slightly rotate with time, filling the whole polar space if the temporal acquisition is long enough. However, the trajectory drawn in Figure <ref>c repeats itself in every cycle and no drift is observed regardless of the acquisition time. Finally, high-precision linewidth measurements were recorded using a phase noise analyzer integrated in the SA. In Figure <ref>d we compare the phase noise of the free-running secondary OMO (blue) with that of the signal generator (dark green) and that of the secondary OMO (light green) within the CIL regime. The part of the phase noise associated with the secondary OMO that is greatly improved is that involving the long-term stability of the OMOs, that is, below offset frequencies of a few kHz. We attribute this effect to the weak nature of the synchronization mechanism. Indeed, in Ref. <cit.> we demonstrated that it takes several hundreds of oscillations to restore the spontaneous synchronization dynamics when the system is perturbed externally. The performance of the secondary OMO remains the same for frequencies above 10 kHz, which involve faster noise mechanisms <cit.> for which spontaneous synchronization does not provide any improvement. § DISCUSSION We have studied the dynamics of a pair of Si-based OMCs acting as OMOs that are weakly coupled mechanically. In the absence of an external signal, the system behaves as a leader-follower configuration, where the main resonator exhibits a larger oscillation amplitude. Additionally, we observed regions of injection locking in both OMO dynamics when introducing an external harmonic modulation on the laser driving the main oscillator at frequencies close to their natural ones. This behaviour is expected in the case of the main OMO, given that the modulation is directly applied to it <cit.>, but it is not straightforward to observe in the case of the secondary OMO. The origin of this latter injection-locking mechanism lies in the mechanical interaction between both integrated OMOs, which remains effective even if the main one is not injection-locked to the reference. As a result of the study, we have unequivocally demonstrated, in both the frequency and temporal domains, the cascaded injection locking of the OMOs to an external reference signal, whereby the main OMO is injection-locked to the reference and the secondary OMO is spontaneously synchronized to the mechanical perturbation coming from the link. All these results were qualitatively reproduced with numerical simulations modeling the non-linear dynamics of the coupled system of oscillators. 
The reported cascaded injection-locking mechanism in chip-integrated Si-based OMOs, where the interaction is provided through a mechanical link, offers an alternative for distributing signals among OMOs that do not share a common optical mode. This scheme enables the use of multiple optical channels and avoids issues related to optical frequency dispersion among nominally equivalent chip-integrated OMCs due to fabrication disorder. In this context, these geometries not only allow for the extraction of coherent mechanical signals through the substrate, leading to photon-phonon hybrid circuits <cit.>, but can also be optically excited by common on-chip bus waveguides <cit.>. Hence, the proposed platform could be extended to multiple oscillators sequentially locked to the one receiving a reference signal, with the adaptation time of the OMOs to the external reference being an important factor to consider. Combining the presented configuration with an optical injection locking scheme <cit.> opens a path for distributing reference signals among distant clusters of processing elements interacting remotely. The complex nonlinear dynamics emerging in this type of platform could be leveraged not only for signal distribution but also for advanced applications, including enhanced frequency generation or conversion, as well as ultrasensitive measurements by monitoring the collective dynamics of the system <cit.>. Overall, this study represents a necessary step towards achieving full dynamic control of phonon lasers through synchronization, which is a critical element in the development of phonon-photon hybrid circuits. § METHODS OM crystal design. The experimental structures employed in this study consist of one-dimensional silicon-based OM crystal cavities. These are standalone nanobeams simultaneously functioning as photonic and phononic crystals. The unit cell features a central mass with a pitch (a) of 500 nm, a central hole with a radius (r) of 150 nm, and stubs extending from the top and bottom of the central mass with a length (d) of 250 nm (see Figure <ref>). The incorporation of holes and stubs in the same geometry enables independent modification of the photonic or phononic bandgap <cit.>. The parameters are specifically designed so that the geometry acts as a full photonic and phononic mirror in the frequency ranges of 200 THz and 4 GHz. The adiabatic reduction of these parameters at the center of the crystal, to Γ = 85% of their original values, forms a defect region giving rise to confined modes (which are not used in this work). The OM crystal comprises 32 unit cells, collectively spanning a length of approximately 16 μm. OM crystal fabrication. In this work, the studied device comprises two equivalent one-dimensional OM crystals joined by one of their stubs at one end. These crystal pairs were fabricated on conventional silicon-on-insulator (SOI) SOITEC wafers, featuring a silicon layer with a thickness of 220 nm, a resistivity of approximately 1 to 10 Ω·cm, and p-doping at around 10^15 cm^-3. The SOI wafer included a buried oxide layer with a thickness of 2 μm. The design was written onto a poly-methyl-methacrylate (PMMA) resist film with a thickness of 100 nm using electron-beam lithography. Subsequently, the pattern was transferred into the silicon layer using Reactive Ion Etching (RIE). Buffered Hydrofluoric Acid (BHF) was then applied to remove the buried oxide layer and release the fabricated beam structures. Experimental Setup. 
Equations of motion of free-carrier, temperature and optomechanical systems. In this section, we report the mechanism used to drive optomechanical cavities to a coherent, high-amplitude and self-sustained state of mechanical oscillation in the sideband-unresolved regime. It is based on the interplay between the thermo-optic (TO) effect and free-carrier dispersion (FCD) that emerges in silicon crystals. Both effects have an impact on the refractive index of the material and hence on the optical resonance of the cavity. The dynamical evolution of the free carriers (N) and of the increase in temperature (Δ T) can be expressed as two coupled macroscopic differential equations: Ṅ = - N/τ_FC + α_SPA n_0 (N_0 - N) and ΔṪ = - Δ T/τ_T + α_FC n_0 N, where the coupling magnitude is the intracavity photon number, n_0 = n_0,m Δλ_0^2/(Δλ_0^2 + 4 (λ_r - λ_l)^2), with n_0,m = 2P_l κ_e λ_0/(κ^2 hc). Here, Δλ_0 is the linewidth of the optical resonance at room temperature, P_l and λ_l the power and wavelength of the incident laser light, respectively; κ the overall damping rate and κ_e the extrinsic one. Note that the position of the resonance is modified by the TO and FCD contributions and can be written in a first-order approximation as λ_r ≈ λ_0 - (∂λ_r/∂N) N + (∂λ_r/∂T) Δ T. Regarding the meaning of the coupled system, the first equation takes into account the single-photon absorption (SPA) through N_0 intragap states per unit volume and a recombination time of τ_FC. The second one considers the fraction of photons that are absorbed and transformed into heat through free-carrier absorption (FCA). In that way, α_SPA represents the rate of free-carrier density increase per photon and unit of density of available intragap states, while α_FC represents the rate of temperature increase per photon and unit of free-carrier density. It is worth mentioning that the response of n_0 to the different contributions is adiabatic since the system is in the regime where κ is much larger than the characteristic rates (1/τ_FC, 1/τ_SPA). Under specific conditions of the driving laser, the whole system can enter a dynamical regime described by a self-sustained limit cycle, which we refer to as self-pulsing. The modulation of the intracavity photons is then transduced linearly to the radiation pressure optical force that drives the oscillator (F_0 (t) ∝ n_0). Hence, the mechanical modes of the optomechanical crystal can be described as damped linear harmonic oscillators driven by an anharmonic force: d^2 x(t)/dt^2 + Γ dx(t)/dt + Ω^2 x(t) = F_0 (t)/m_eff. The anharmonic modulation of intra-cavity photons induced by the self-pulsing limit cycle at a certain frequency ν_SP can resonantly drive the mechanical modes, providing amplification and achieving a self-sustained motion of large amplitude (referred to as mechanical lasing due to its similarity with the optical counterpart). This effect can occur for any of the different M harmonics present in the non-linear modulation if the optomechanical coupling and the input power are sufficiently high (see supplementary material S2). It is worth mentioning that once the mechanical lasing regime is achieved, its contribution to the perturbation generated in λ_r through the optomechanical coupling can no longer be neglected: λ_r ≈ λ_0 - (∂λ_r/∂N) N + (∂λ_r/∂T) Δ T + (λ_0^2 g_0/(2π c x_ZPF)) x. § ACKNOWLEDGEMENTS This work was supported by the MICINN projects ALLEGRO (Grants No. PID2021-124618NB-C22 and PID2021-124618NB-C21) and MOCCASIN-2D (Grant No. TED2021-132040B-C21). 
§ REFERENCES Aspelmeyer: Aspelmeyer, M., Kippenberg, T. J. & Marquardt, F. Cavity optomechanics. Rev. Mod. Phys. 86, 1391–1452 (2014). Kippemberg: Kippenberg, T. J., Rokhsari, H., Carmon, T., Scherer, A. & Vahala, K. J. Analysis of radiation-pressure induced mechanical oscillation of an optical microcavity. Phys. Rev. Lett. 95, 033901 (2005). Yu: Yu, W., Jiang, W. C., Lin, Q. & Lu, T. Cavity optomechanical spring sensing of single molecules. Nat. Commun. 7, 12311 (2016). Pan: Yu, W., Jiang, W. C., Lin, Q. & Lu, T. Radiation-pressure-antidamping enhanced optomechanical spring sensing. ACS Photonics 5, 4164–4169 (2018). Liu: Liu, F., Alaie, S., Leseman, Z. C. & Hossein-Zadeh, M. Sub-pg mass sensing and measurement with an optomechanical oscillator. Opt. Express 21, 19555–19567 (2013). Hossein: Hossein-Zadeh, M. & Vahala, K. Photonic RF down-converter based on optomechanical oscillation. IEEE Photonics Technology Letters 20, 234–236 (2008). Mercade: Mercadé, L., Morant, M., Griol, A., Lorente, R. & Martínez, A. Photonic frequency conversion of OFDM microwave signals in a wavelength-scale optomechanical cavity. Laser Photon. Rev. 15, 2100175 (2021). Mercade2: Mercadé, L. et al. Testing optomechanical microwave oscillators for satcom application. Journal of Lightwave Technology 40, 4539–4547 (2022). Ghorbel: Ghorbel, I., Swiadek, F., Zhu, R. et al. Optomechanical gigahertz oscillator made of a two-photon-absorption-free piezoelectric III/V semiconductor. APL Photonics 4, 116103 (2019). Painter: Fang, K., Matheny, M. H., Luan, X. & Painter, O. Optical transduction and routing of microwave phonons in cavity-optomechanical circuits. Nature Photon. 10, 489–496 (2016). Adler: Adler, R. A study of locking phenomena in oscillators. Proceedings of the IEEE 61, 1380–1385 (1973). Hossein2: Hossein-Zadeh, M. & Vahala, K. Observation of injection locking in an optomechanical RF oscillator. Appl. Phys. Lett. 93, 191115 (2008). Luan: Luan, X., Huang, Y., Li, Y. et al. An integrated low phase noise radiation-pressure-driven optomechanical oscillator chipset. Sci. Rep. 4, 6842 (2015). Shlomi: Shlomi, K. et al. Synchronization in an optomechanical cavity. Phys. Rev. E 91, 032910 (2015). Arregui: Arregui, G. et al. Injection locking in an optomechanical coherent phonon source. Nanophotonics 10, 1319–1327 (2021). Shah: Shah, S. Y., Zhang, M., Rand, R. & Lipson, M. Master-slave locking of optomechanical oscillators over a long distance. Phys. Rev. Lett. 114, 113602 (2015). Li: Li, J. et al. All-optical synchronization of remote optomechanical systems. Phys. Rev. Lett. 129, 063605 (2022). David: Alonso-Tomás, D. et al. Unidirectional synchronization of silicon optomechanical nanobeam oscillators by external feedback. 
ACS Photonics 11, 7–12 (2023). Pitanti: Pitanti, A. et al. Strong opto-electro-mechanical coupling in a silicon photonic crystal cavity. Opt. Express 23, 3196–3208 (2015). Bekker: Bekker, C., Kalra, R., Baker, C. & Bowen, W. Injection locking of an electro-optomechanical device. Optica 4, 1196–1204 (2017). Huang: Huang, K. & Hossein-Zadeh, M. Injection locking of optomechanical oscillators via acoustic waves. Opt. Express 26, 8275–8288 (2018). Bagheri: Bagheri, M., Poot, M., Fan, L., Marquardt, F. & Tang, H. X. Photonic cavity synchronization of nanomechanical oscillators. Phys. Rev. Lett. 111, 213902 (2013). Zhang: Zhang, M. et al. Synchronization of micromechanical oscillators using light. Phys. Rev. Lett. 109, 233906 (2012). Zhang2: Zhang, M., Shah, S., Cardenas, J. & Lipson, M. Synchronization and phase noise reduction in micromechanical oscillator arrays coupled through light. Phys. Rev. Lett. 115, 163902 (2015). Santos: Gil-Santos, E. et al. Light-mediated cascaded locking of multiple nano-optomechanical oscillators. Phys. Rev. Lett. 118, 063605 (2017). Colombano: Colombano, M. F. et al. Synchronization of optomechanical nanobeams by mechanical interaction. Phys. Rev. Lett. 123, 017402 (2019). Pikovski: Pikovsky, A., Rosenblum, M. & Kurths, J. Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge University Press, Cambridge, England, 2003). Dani2: Navarro-Urrios, D., Arregui, G., Colombano, M. F. et al. Giant injection-locking bandwidth of a self-pulsing limit-cycle in an optomechanical cavity. Commun. Phys. 5, 330 (2022). Eichenfield: Eichenfield, M., Chan, J., Camacho, R. M., Vahala, K. J. & Painter, O. Optomechanical crystals. Nature 462, 78–82 (2009). Johnson: Johnson, T. J., Borselli, M. & Painter, O. Self-induced optical modulation of the transmission through a high-Q silicon microdisk resonator. Opt. Express 14, 817–831 (2006). Dani: Navarro-Urrios, D., Capuj, N., Gomis-Bresco, J. et al. A self-stabilized coherent phonon source driven by optical forces. Sci. Rep. 5, 15733 (2015). Luiz: de O. Luiz, G., Rodrigues, C. C., Alegre, T. & Wiederhecker, G. S. Synchronization of silicon thermal free-carrier oscillators. J. Opt. Soc. 40, 1779–1785 (2023). Fabero: Ding, L., Belacel, C., Ducci, S., Leo, G. & Favero, I. Ultralow loss single-mode silica tapers manufactured by a microheater. Appl. Opt. 49, 2441–2445 (2010). Navarro: Navarro-Urrios, D. et al. Optical and mechanical mode tuning in an optomechanical crystal with light-induced thermal effects. J. Appl. Phys. 116, 093506 (2014). Rokhsari: Rokhsari, H., Hossein-Zadeh, M., Hajimiri, A. & Vahala, K. Brownian noise in radiation-pressure-driven micromechanical oscillators. Appl. Phys. Lett. 89, 261109 (2006). ACS: Urrios, D. N. et al. 
Room-temperature silicon platform for GHz-frequency nanoelectro-opto-mechanical systems. ACS Photonics 9, 413–419 (2022). Lamberti: Lamberti, F.-R., Palanchoke, U., Geurts, T. P. J. et al. Real-time sensing with multiplexed optomechanical resonators. Nano Lett. 22, 1866–1873 (2022). Gomis: Gomis-Bresco, J., Navarro-Urrios, D., Oudich, M. et al. A one-dimensional optomechanical crystal with a complete phononic band gap. Nat. Commun. 5, 4452 (2014). § S1 ONE-DIMENSIONAL OPTOMECHANICAL CRYSTAL CAVITIES § S2 NUMERICAL SIMULATIONS OF THE SELF-PULSING AND MECHANICAL LASING DYNAMICS. The methods section of the main text presents a model for the description of the anharmonic modulation of the intra-cavity photon number that emerges in silicon due to free-carrier dispersion (FCD) and thermo-optic (TO) effects. Here, the system is solved numerically in MATLAB using standard ordinary differential equation (ODE) solving methods. Parameters of the model are extracted from previous works and experimental measurements <cit.>. Initial conditions are set to [N(0), Δ T(0), x(0), ẋ(0)] = [1.5· 10^17 cm^-3, (λ_l - λ_0)/(∂λ_r/∂T), 0, 0]. We select a temporal step of Δ t = 2· 10^-11 s to properly characterize the fast dynamics of the free carriers, and a time span of 1· 10^-4 s to ensure that the system reaches a stationary dynamical regime. Afterwards, a Fast Fourier Transform (FFT) is performed on the temporal trace of the computed transmission (extracted from the intra-cavity photon number) in the stationary solution. This process is repeated while sweeping the wavelength of the incident laser light from lower to higher values. After the first iteration, the initial conditions are replaced with the stationary state of the previous solution. In that way, each iteration depends on the previous one, in a similar way to the real experiment. The evolution of the obtained RF spectra is shown in Fig. <ref>a. Initially, when the laser light has just entered the resonance (which in this case corresponds to 1540 nm), it is possible to observe a signal oscillating at the frequency of the mechanical mode. Here, light plays a passive role as a probe of the mechanical oscillation, but does not provide amplification of the mechanical motion. Above a certain threshold, the self-pulsing (SP) mechanism is activated, whose harmonics can be visualized as the red curves. When a certain harmonic M resonates with the mechanical mode, it provides amplification and frequency-locks to it (plateaus). Note that the increase of photons in the cavity involves heating, leading to a shift of the optical resonance to longer wavelengths. At some point, the nonlinear system can no longer display a state of self-sustained oscillation and the resonance blue-detunes from the laser, going back to its room-temperature spectral position. The particular non-linear dynamics at a certain λ_l corresponding to the regime of M = 3 are shown in Figures <ref>b-e, where the third harmonic of the SP is the one providing amplification to the mechanical mode. Here, the self-pulsing limit cycle formed by the temperature and the free-carrier population is depicted (b), as well as the RF spectra associated with the predicted transmission computed from n_0 (c). It is worth noting that in this regime the amplitude of the mechanical oscillation is around 0.05 nm (d), and becomes an order of magnitude larger for the M = 1 plateau. 
The output transmitted light exhibits a clear nonlinear behaviour (e). Lastly, in Figs. <ref>g-f we compare the simulated RF spectrum, computed using a larger power value than that used in Fig. <ref>a, with our experimental results, finding significant agreement. In this case, the dynamical solutions of the system do not include isolated SP, so that the different M mechanical lasing regimes are continuously accessed during the sweep of the laser wavelength, leading to a staircase-like curve. § S3 NUMERICAL SIMULATIONS OF MECHANICALLY COUPLED OM OSCILLATORS The experiment of the main text considers two mechanically coupled OM crystal oscillators which are self-sustained by the anharmonic mechanism described before. The whole picture can be understood through Figure <ref>, where the main elements of the system are schematically represented. Two tunable lasers excite the two optical modes, which support separated optical resonances with extrinsic (κ_e,i) and intrinsic (κ_in,i) optical decay rates, respectively. At the same time, the resonant wavelengths are modulated through the TO and FCD effects following the dynamics of the self-pulsing mechanism. The optomechanical coupling connects the optical and mechanical modes, so that this intrinsic modulation of the radiation pressure force can drive the resonators to the mechanical lasing regime (M = 1). Once this regime is achieved, the large amplitude of the mechanical oscillation becomes a non-negligible source of modulation of the intra-cavity photon number. At this point there are two optical channels undergoing a modulation given by the mechanical oscillation of each OM crystal. The mechanical interaction is represented as a spring with a low elastic constant (K_INT << K_M,S) that connects both oscillators. The coupling is considered to be reactive and can be understood as a restoring force that emerges from the linking tether as it is pulled out of its relaxed condition. On the other hand, an external modulation is also applied to the incident power that arrives at the main cavity. Here, the voltage applied to the Mach-Zehnder electro-optic modulator (EOM) is: V(t) = V_max sin(2π f_mod t) + V_DC, where f_mod is the frequency of the external modulation and V_DC a bias voltage that can be applied to operate at the quadrature point. Thus, the power that enters the main cavity can be written as P_input ∝ 1 + cos(π [V_max sin(2π f_mod t) + V_DC]/V_π), where V_π is the characteristic voltage needed to change the phase by π radians in the EOM. The offset voltage (V_DC) is set at the quadrature point V_DC = 0.5V_π to minimize higher harmonics in the perturbation generated by the signal generator (SG), so that, if the maximum voltage of the RF modulation signal is small, the output light power responds linearly. Considering that the intra-cavity photon number is proportional to the incident power, one arrives at: n_0'(t) = (1 - sin(π V_max/V_π sin(2π f_mod t))) n_0(t), where the self-induced modulation is enveloped by the external one. Considering both the mechanical interaction and the external modulation of the intracavity photon number, the system of differential equations that describes the dynamics of both oscillators reads: Ṅ_i = - N_i/τ_FC + α_SPA n_0,i (N_0 - N_i) (8a), ΔṪ_i = - Δ T_i/τ_T + α_FC n_0,i N_i (8b), ẍ_i + Γ_i ẋ_i + Ω_i^2 x_i + δ_i,S 2J x_M = ħ g_0,i/(m_eff,i x_ZPF) n_0,i (8c), where the sub-index i = M, S denotes main and secondary, respectively. 
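A schematic illustration of how this eight-variable system can be integrated numerically is sketched below in Python/SciPy (the actual simulations of this work were performed in MATLAB; every parameter value in the sketch is a placeholder indicating only an order of magnitude, and further tuning would be needed to reach the regimes discussed here). The sketch follows the unidirectional simplification discussed next, with the coupling term entering only the secondary OMO equation and the external modulation envelope applied only to the main OMO.

```python
# Schematic integration of Eqs. (8a)-(8c); placeholder parameters, illustration only.
import numpy as np
from scipy.integrate import solve_ivp

hbar, c = 1.054571817e-34, 2.998e8

# --- placeholder parameters (orders of magnitude only, NOT fitted values) ---
tau_FC, tau_T = 0.5e-9, 50e-9                     # free-carrier / thermal lifetimes (s)
alpha_SPA, alpha_FC, N0 = 1.0e-3, 1.0e-15, 1.0e17 # SPA/FCA rates, intragap-state density
Omega = 2*np.pi*np.array([99.65e6, 99.57e6])      # [main, secondary] mechanical frequencies
Gamma = 2*np.pi*np.array([0.2e6, 0.2e6])          # mechanical decay rates
g0    = 2*np.pi*np.array([0.5e6, 0.5e6])          # vacuum OM coupling rates
m_eff = np.array([1e-16, 1e-16])                  # effective masses (kg)
x_zpf = np.sqrt(hbar/(2*m_eff*Omega))             # zero-point fluctuation amplitudes
lam0  = np.array([1542.4e-9, 1535.4e-9])          # cold-cavity resonances (m)
lam_l = lam0 + 4.5e-9                             # driving laser wavelengths (m)
dlam0 = np.array([0.14e-9, 0.21e-9])              # optical linewidths (m)
n0m   = np.array([2e3, 2e3])                      # on-resonance photon numbers
dldN, dldT = -1e-27, 1e-13                        # FCD / TO resonance shifts (placeholders)
J = 2*np.pi*18e3                                  # reactive mechanical coupling strength
f_mod, Vmax_over_Vpi = 99.61e6, 0.01              # external modulation (main OMO only)

def n_photons(i, N, dT, x, t):
    """Lorentzian intracavity photon number with TO-, FCD- and OM-shifted resonance."""
    lam_r = lam0[i] + dldN*N + dldT*dT + lam0[i]**2*g0[i]/(2*np.pi*c*x_zpf[i])*x
    n = n0m[i]*dlam0[i]**2/(dlam0[i]**2 + 4*(lam_r - lam_l[i])**2)
    if i == 0:  # external amplitude-modulation envelope, Eq. (7), on the main OMO only
        n *= 1.0 - np.sin(np.pi*Vmax_over_Vpi*np.sin(2*np.pi*f_mod*t))
    return n

def rhs(t, y):
    out = np.empty(8)
    x_main = y[2]
    for i in range(2):                            # i = 0: main (M), i = 1: secondary (S)
        N, dT, x, v = y[4*i:4*i+4]
        n = n_photons(i, N, dT, x, t)
        out[4*i+0] = -N/tau_FC + alpha_SPA*n*(N0 - N)              # Eq. (8a)
        out[4*i+1] = -dT/tau_T + alpha_FC*n*N                      # Eq. (8b)
        out[4*i+2] = v
        force = hbar*g0[i]/(m_eff[i]*x_zpf[i])*n                   # radiation-pressure drive
        coupling = 2*J*x_main if i == 1 else 0.0                   # reactive term, secondary only
        out[4*i+3] = -Gamma[i]*v - Omega[i]**2*x - coupling + force  # Eq. (8c)
    return out

y0 = [1e15, 0.0, 0.0, 0.0, 1e15, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 5e-7), y0, max_step=2e-11)
# The RF spectra discussed in S2/S3 would then follow from an FFT of the transmission
# reconstructed from n_photons along the stationary part of the solution.
```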
As mentioned in the main text, we consider a unidirectional system where the main OMO oscillates with a much larger mechanical amplitude. Hence, the interaction is only present in the secondary OMO dynamics, since the contribution of x_S has been neglected: 2J(x_M - x_S) ≈ 2Jx_M. Regarding the intracavity photons, the first oscillator is the one that receives the external modulation, thus n_0,M corresponds to the modulated case (Eq. <ref>), while the temporal evolution of n_0,S = n_0 is only governed by the modulation of the optical resonance of the cavity due to the TO and FCD effects as well as the mechanical motion. This means that the secondary oscillator is only aware of the external modulation through the mechanical coupling term. The intrinsic magnitudes of the model (dissipation rates, intragap states, etc.) have been considered the same for both oscillators. Even if Eqs. <ref>, <ref> and <ref> represent a system of coupled nonlinear differential equations with no analytical solution, it is possible to solve them numerically as a problem of eight first-order differential equations. First, it is necessary to set the incident laser wavelength so that both oscillators are in the mechanical lasing regime in the absence of mechanical coupling and external modulation. To do that, the parameters J and V_max are set to 0 and an analysis is performed to find the proper detuning, which is chosen to be λ_l,i - λ_0,i = 4.5 nm. Given that in the experiment both mechanical oscillators are relatively close in frequency (within tens of kHz), the simulations require a high resolution to be able to resolve such close-by peaks. This can be achieved by increasing the time span to 4·10^-4 s, which provides a resolution of 3 kHz after the Fourier transform. Spontaneous synchronization. This section deals with the case in which no external modulation is applied. Once both oscillators are in the mechanical lasing regime, it is possible to perform an analysis similar to the first experiment of the main work (Figure 1e). We have varied the reactive coupling strength to select the proper value that reproduces the spontaneous synchronization range of the experiment. Fig. <ref>a shows the contour RF plot of the computed spectra for the secondary OMO intra-cavity photon dynamics when varying the mechanical frequency of the main OMO and, hence, the detuning between both oscillators. The main OMO dynamics is observed as sidebands that approach the central peak as the mechanical detuning between the frequencies of both resonators decreases. At the beginning, since both oscillators are not synchronized, their mechanical displacements cover most of the phase space when plotted against each other (Fig. <ref>b). When their natural frequencies are close enough, spontaneous synchronization occurs and their displacements follow a closed trajectory (Fig. <ref>c). Finally, when the system exits the Arnold tongue, the mechanical displacements plotted in the phase space fill the graph again (Fig. <ref>d). Here J/2π is set to 18 kHz, providing a spontaneous synchronization range of 50 kHz, similar to that observed in the experiment. It is important to mention that in the simulation, the unidirectional character of the interaction is introduced just by including the interaction term in the secondary OMO equation and neglecting x_S with respect to x_M. However, in contrast to the real experiment, both oscillators exhibit similar amplitudes. 
Since the spontaneous synchronization range depends on the amplitude of the mechanical perturbation that the secondary OMO is receiving, the amplitude of the mechanical oscillation of the main OMO plays an important role. Hence, the actual interaction strength could be one order of magnitude lower, confirming the weak character of the coupling and indicating that fabrication disorders are the main source of splitting between the mechanical resonant frequencies. Cascaded injection locking. In this section we include the external modulation with a certain power P_mod = 100 sin(π V_max/V_π), aiming to simulate the cascaded injection locking experiment. In the simulations we establish a frequency difference of 45 kHz between both OMOs, thus ensuring that no spontaneous synchronization occurs in the absence of an external input. Figure <ref> focuses on the main oscillator (the one that directly receives the modulated signal), showing the fast Fourier transform of the transmitted light. Spectra are represented on a normalized linear scale with arbitrary units. Here, f_mod is swept from low to high frequencies to observe the injection locking range as a function of the modulation power. The step used for the sweep in the external modulation frequency is set to 1 kHz. Regarding the injection locking range, we observe an increase in its value with P_mod, as expected from previous simulations <cit.>. The locking range expands particularly in the region after the initial locking (white dashed line), which corresponds to the range where the frequency of the external modulation is above the natural one of the oscillator. Finally, it is worth mentioning that the natural frequency of the second oscillator is not present in these graphs, since the coupling term has not been added to the system of the main oscillator (Eq. <ref>). The next step is to analyze the transmission of the second oscillator (the one that receives the modulation indirectly through the mechanical link). Figure <ref> shows the results of this analysis. As expected, the mechanical oscillation of the main OMO is now observed in the secondary oscillator dynamics. Even if this modulation arrives indirectly, the mechanical perturbation is enough to generate injection locking of the secondary OMO dynamics to the external signal, reproducing the experimental observations. Now, as we increase the modulation power, the locking ranges increase and a region starts to appear where both the main and the secondary oscillators are locked to the external modulation, which we call cascaded injection locking (CIL). Simulations also predict the expansion of this range when increasing the modulation amplitude. mechanical: Colombano, M. F., Arregui, G., Capuj, N. E., Pitanti, A., Maire, J., Griol, A., Garrido, B., Martínez, A., Sotomayor-Torres, C. M. & Navarro-Urrios, D. Synchronization of optomechanical nanobeams by mechanical interaction. Phys. Rev. Lett. 123, 017402 (2019). freecarrier: Navarro-Urrios, D., Colombano, M. F., Maire, J. et al. Properties of nanocrystalline silicon probed by optomechanics. Nanophotonics 9(16) (2020). temperature: Maire, J., Chávez-Ángel, E., Arregui, G. et al. Thermal properties of nanocrystalline silicon nanobeams. Adv. Funct. Mater. 32, 2105767 (2022). vart: Navarro-Urrios, D., Capuj, N. E., Maire, J., Colombano, M. F. et al. Nanocrystalline silicon optomechanical cavities. Opt. Express 26, 9829–9839 (2018). gom: Gorodetsky, M. L., Schliesser, A., Anetsberger, G., Deleglise, S. & Kippenberg, T. J. Determination of the vacuum optomechanical coupling rate using frequency noise calibration. Opt. Express 18, 23236–23246 (2010). injectionlocking: Arregui, G., Colombano, M., Maire, J., Pitanti, A., Capuj, N., Griol, A., Martínez, A., Sotomayor-Torres, C. & Navarro-Urrios, D. Injection locking in an optomechanical coherent phonon source. Nanophotonics 10(4), 1319–1327 (2021).
http://arxiv.org/abs/2406.09218v1
20240613151906
Cohomological integrality for symmetric representations of reductive groups
[ "Lucien Hennecart" ]
math.RT
[ "math.RT", "math.AG" ]
§ ABSTRACT In this paper, we prove the integrality conjecture for quotient stacks given by a symmetric representation of a reductive group. This result will be crucial to generalise cohomological Donaldson–Thomas theory of 2 and 3-Calabi–Yau categories to 0 and (-1)-shifted symplectic stacks, respectively. § INTRODUCTION Given a connected reductive group G and a representation V of G, the quotient stack V/G and the GIT quotient V G are objects of interest in geometry and representation theory. In the context of quiver representations, the geometry and topology of the GIT quotient V G have been studied deeply by Reineke, see <cit.> for a survey. A particularly nice case is when V is a symmetric representation of G, in the sense that the weights of V counted with multiplicities are invariant under the involution α↦-α. In <cit.>, the authors study semiorthogonal decompositions of the coherent derived category (V/G) of V/G in order to obtain noncommutative resolutions of V G. The tools are the window categories defined and studied by Halpern-Leistner <cit.>. Their formalism deals with quasi-symmetric representations V of a reductive group G. 
Quasi-symmetricity is the vanishing property of the sum of weights of V on any given line through the origin of the character lattice and is weaker than the symmetricity. In this paper, we are interested in the interplay between the topology of V/G and that of V G. More precisely, we give a decomposition of the (infinite dimensional) cohomology vector space ^*(V/G) of the quotient stack V/G in terms of a finite number of finite-dimensional cohomologically graded vector spaces _λ indexed by cocharacters λ∈ X_*(T) of a maximal torus T of G, using parabolic induction (Theorem <ref>), when V is symmetric. Such a result is often called cohomological integrality. The refined Donaldson–Thomas invariant of V/G are then defined as the Betti numbers of the vector spaces _λ. In this paper, we initiate the study of these enumerative invariants, which encode representation-theoretic information about V/G. In geometry, stacks of the form V/G and their good moduli space V G give, by Luna étale slice theorems <cit.>, local models for smooth stacks with a good moduli space . Therefore, this work is the ground for the study of the geometry and topology of morphisms → from a symmetric smooth stack to its good moduli space. When the group G is a product of general linear groups and V is a symmetric representation of G given by the representation space of a symmetric quiver Q (a quiver having as many arrows in both possible directions between any two vertices), the cohomological integrality isomorphism has been proven by Efimov in <cit.>, answering a conjecture of Kontsevich and Soibelman <cit.>. It is formulated in terms of the cohomological Hall algebra of Q <cit.>. The cohomological integrality of Efimov is the building block of all cohomological integrality results for categories of homological dimension one, and for 2 and 3-Calabi–Yau categories <cit.>. These integrality theorems are one of the most sought-after theorems in the theory of cohomological Hall algebras, as they provide strong structural results and deep connections between the topology of the stack and the moduli space. The integrality also gives cohomologically refined enumerative invariants whose study is of algebro-geometric interest. In particular, the cohomological integrality isomorphisms for the 2-Calabi–Yau categories of semistable Higgs bundles on a smooth projective curve and twisted local systems is at the heart of nonabelian Hodge isomorphisms for stacks <cit.>. Nonabelian Hodge isomorphisms for stacks are crucial to study the P=W conjecture for _n for the singular moduli spaces. A 2-Calabi–Yau category gives rise to a 0-shifted symplectic stack while a 3-Calabi–Yau category provides us with a (-1)-shifted symplectic stack <cit.>. In the 2-Calabi–Yau case, fully settled in <cit.>, the cohomological integrality gives a decomposition of the Borel Moore homology ^_*(_) of (a substack of) the stack of objects in the category in terms of the intersection cohomology (_) of the good moduli space _ of _. The study of cohomological integrality for 2-Calabi–Yau categories has several applications, such as the study of the cohomology of Nakajima quiver varieties and cuspidal polynomials of quivers <cit.> and the comparison between cohomological Hall algebras and Maulik–Okounkov Yangians <cit.>. In the 3-Calabi–Yau case, the cohomological integrality is, in full generality, a conjecture recalled in <cit.>. It is proven for quivers with potential in <cit.>. 
It concerns the study of the critical cohomology of the moduli stack of objects in the category. The critical cohomology is the cohomology of a perverse sheaf, which is either the sheaf of vanishing cycles or a globalisation of it <cit.>. In the case of general 3-Calabi–Yau categories, significant progress towards the integrality conjecture is made in <cit.>. All known cohomological integrality isomorphisms are expressed in terms of cohomological Hall algebras, which is a convenient and powerful way to gather the data of parabolic inductions. If rather than a 2-Calabi–Yau category, one considers a 0-shifted symplectic stack , one expects to have similar cohomological integrality isomorphisms. However, the cohomological Hall algebra structure is not available and one has to deal directly with the parabolic induction morphisms. This is what we do in this paper in the situation of smooth symmetric stacks given by a symmetric representation of a reductive group. In particular, the very definition of the cohomological integrality morphism (see Theorem <ref>) had to be given. There are motivic versions of integrality results, studied for example in <cit.> following <cit.> and motivic Donaldson–Thomas theory <cit.>. §.§ Main results In this paper, we work over an algebraically closed field of characteristic zero. §.§.§ Induction Let G be a connected reductive group and V a representation of G. We let T⊂ G be a maximal torus. For any cocharacter λ_→ T, we let G^λ⊂ G be the corresponding Levi subgroup of G <cit.> and V^λ the representation of G^λ given by the λ-fixed subspace of V. If λ=0 is the trivial cocharacter, we write V=V^0 and G=G^0. There is an induction map _λ^*+d_λ(V^λ/G^λ)→^*+d_0(V/G), where d_λ= V^λ- G^λ (<ref>). If V is symmetric (i.e. V and V^* have the same sets of weights, Definition <ref>), then the induction map preserves cohomological degrees (Lemma <ref>). We define an order on the set X_*(T) of cocharacters of T as follows: λ≼μdef.{ V^λ⊂ V^μ 𝔤^λ⊂𝔤^μ. . We let _V X_*(T)/∼ where ∼ is the equivalence relation given by λ∼μ (λ≼μ and μ≼λ). In other words, two cocharacters are equivalent if their fixed-point sets inside V and both coincide. The set _V is finite. If V is symmetric, the induction map _λ only depends on the class λ∈_V up to a sign (Lemma <ref>). We let W=N_G(T)/T be the Weyl group of G. The natural action of W on X_*(T) descends to a W-action on _V (<ref>). The image of the induction map _λ only depends on the class λ̃∈_V/W of λ (Lemma <ref>). §.§.§ Cohomological integrality Let G be a reductive group with maximal torus T and V be a finite dimensional symmetric representation of G (Definition <ref>). For any λ∈_V, we let λ∈ X_*(T) be an arbitrary lift. For λ̃∈_V/W, we let λ∈ X_*(T) be an arbitrary lift. For λ∈ X_*(T), we let W_λ={w∈ W| V^w·λ=V^λ and ^w·λ=^λ}. This is a subgroup of W (Lemma <ref>). We let G_λ⊂ G be the intersection of the kernel of the action map G^λ→(V^λ) with the center of G^λ and _λ(G_λ). Note that the group G_λ is not necessarily connected, but this can essentially be ignored (Lemma <ref>). For α∈ X^*(T), we let V_α{v∈ V|∀ t∈ T, t· v=α(t)} be the α-weight space of V. We define _α similarly. We let k_λ∏_α∈ X^*(T), ⟨λ,α⟩>0α^ V_α/∏_α∈ X^*(T), ⟨λ,α⟩>0α^_α. We may call k_λ the induction kernel. We define ε_V,λ W_λ→{± 1} to be the sign such that for any w∈ W_λ, w(k_λ)=ε_V,λ(w)k_λ (Proposition <ref>). 
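For orientation, the following LaTeX sketch works out the kernel k_λ in the special case of the symmetric quiver with one vertex and g loops; this is an illustrative example with assumed conventions, matching the quiver setting recalled in the Remark after the theorem below.

```latex
% Illustrative example (assumed conventions): G = GL_n acting by conjugation on
% V = \mathfrak{gl}_n^{\oplus g}, the representation space of the quiver with one
% vertex and g loops; T is the diagonal torus with characters e_1,\dots,e_n.
% Each weight e_i - e_j (i \neq j) occurs with multiplicity g in V and 1 in \mathfrak{gl}_n.
% For a cocharacter \lambda(t) = \mathrm{diag}(t^{a_1},\dots,t^{a_n}), writing x_i for the
% image of e_i in H^*_T(\mathrm{pt}), the induction kernel becomes
\[
  k_\lambda
    = \frac{\prod_{a_i > a_j} (x_i - x_j)^{g}}{\prod_{a_i > a_j} (x_i - x_j)}
    = \prod_{(i,j)\,:\,a_i > a_j} (x_i - x_j)^{\,g-1},
\]
% which, for a generic \lambda with a_1 > \dots > a_n, equals \prod_{i<j}(x_i - x_j)^{g-1},
% the kernel familiar from the cohomological Hall algebra of the g-loop quiver.
```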
There exist cohomologically graded vector spaces _λ⊂^*+d_λ(V^λ/G^λ) with a W_λ-action, λ∈ X_*(T), such that the map ⊕_λ̃∈_V/W(_λ⊗^*(/G_λ))^ε_V,λ→^*(V/G) induced by the induction morphisms (<ref>) is a graded isomorphism, where ^*(/G_λ)≅(_λ^*), _λ^* sits in degree 2, ε_V,λ W_λ→{± 1} is a character and (_λ⊗^*(/G_λ))^ε_V,λ denotes the ε_V,λ-isotypic component for the natural W_λ-action. Moreover, the vector spaces _λ are finite-dimensional and the inclusion _λ⊂^*+d_λ(V^λ/G^λ) factors through the inclusion ^*+d_λ(V^λ/(G^λ/G_λ))→^*+d_λ(V^λ/G^λ). This type of theorem is sometimes referred to as cohomological integrality. We can reconstruct the infinite dimensional vector space ^*(V/G) using a finite number of finite-dimensional vector spaces _λ, λ̃∈_V/W. Let Q=(Q_0,Q_1) be a symmetric quiver with set of vertices Q_0 and set of arrows Q_1. Let ∈^Q_0 be a dimension vector. If V_ is the representation space of -dimensional representations of Q which is acted on by _∏_i∈ Q_0__i, then the isomorphism given by Theorem <ref> essentially recovers <cit.>. In loc. cit., it was convenient for the author to twist the cohomological Hall algebra multiplication to make it supercommutative, as in <cit.>. This twist does not change the images of the induction/multiplication morphisms, which means that it is possible to have cohomological integrality isomorphisms even without this twist, although we cannot identify the untwisted CoHA with a supercommutative algebra anymore. We can only identify the underlying vector spaces. We define the cohomologically refined Donaldson–Thomas invariants of V/G as the dimensions p_λ,i of the cohomological degree i piece _λ^i of the vector space _λ, for λ∈ X_*(T). We may call the Euler characteristics p_λ = ∑_i(-1)^i p_λ,i of _λ the Donaldson–Thomas invariants of V/G. §.§ Notations and conventions * If V is a representation of a finite group W and χ an irreducible character of W, we let V^χ be the χ-isotypic component of V. * If X is a complex algebraic variety acted upon by an algebraic group G, we let X/G be the quotient stack. * The multiplicative group is denoted by _. The Abelian group of characters of an algebraic torus T is _(T,_). The set of cocharacters of T is the Abelian group _(_,T). * Let H⊂ G be algebraic groups and X an H-variety. We let X×^HG (X× G)/H where H acts on X× G by h· (x,g)=(h· x,gh^-1). The formula g· (x,g') = (x,gg') gives a G-action on X×^HG. * If R is a domain (e.g. R=^*_T() for some algebraic torus T), we denote by (R) its fraction field. §.§ Acknowledgements At various stages of the preparation of this work, the author was supported by the Royal Society and by the National Science Foundation under Grant No. DMS-1928930 and by the Alfred P. Sloan Foundation under grant G-2021-16778, while the author was in residence at the Simons Laufer Mathematical Sciences Institute (formerly MSRI) in Berkeley, California, during the Spring 2024 semester. The author thanks the SLMath Sciences Institute and the University of Edinburgh for the excellent working conditions. The author is grateful to Ben Davison for the support provided, in particular via postdoctoral fellowships. § PARABOLIC INDUCTION FOR REPRESENTATIONS OF REDUCTIVE GROUPS §.§ The induction diagram Let G be a reductive group and V a representation of G. For a cocharacter λ_→ G, we let G^λ{g∈ Gλ(t)gλ(t)^-1=g} be the centraliser of λ, a Levi subgroup of G and V^λ{v∈ V|λ(t)· v=v} be the fixed locus, a representation of G^λ. 
We also let P_λ{g∈ G|lim_t→ 0λ(t)gλ(t)^-1 exists}, a parabolic subgroup of G and V^λ≥ 0{v∈ V|lim_t→ 0λ(t)· v exists}, a representation of P_λ. A weight of V is a character α T→_ such that there exists v∈ V≠{0} such that for any t∈ T, t· v=α(t)v. We let '(V) be the set of weights of the representation V. We have a direct sum decomposition V≅⊕_α∈'(V)V_α where V_α consists of 0 and nonzero vectors of weight α. We let (V) be the collection of weights of V, counted multiplicities: α∈'(V) appears V_α times in (V). In particular, we have '() and () where is seen as the adjoint representation of G. We have the natural pairing ⟨-,-⟩ X_*(T)× X^*(T)→ between characters and cocharacters. We let ^λ>0(V){α∈(V)|⟨λ,α⟩>0} and we define similarly ^λ=0(V), ^λ≥0(V), ^λ<0(V) and ^λ≤0(V) and their ' versions, by forgetting the multiplicities. It is immediate that V^λ≥0=⊕_α∈^'λ≥ 0(V)V_α. We consider the commutative induction diagram V^λ≥ 0/P_λ V^λ/G^λ V/G V^λ G^λ V G["q_λ"', from=1-2, to=2-1] ["p_λ", from=1-2, to=2-3] ["π_λ"', from=2-1, to=3-1] ["π", from=2-3, to=3-3] ["_λ",from=3-1, to=3-3] where V G([V]^G) is the affine GIT quotient of V by G. The map q_λ is a vector bundle stack of rank r_λ#^λ>0(V)-#^λ>0() and the map p_λ is a proper and representable morphism of stacks. The map p_λ can be presented as the map of quotient stacks (V^λ≥ 0×^P_λ G)/G→ V/G coming from the G-equivariant map p̃_λṼ^λ≥ 0 V^λ≥ 0×^P_λG→ V, which is projective. Therefore, p_λ is representable and projective. The fact that q_λ is smooth, surjective of relative dimension #^λ>0(V)-^λ>0() is easily seen. That it is in addition a vector bundle stack is a rather classical fact (which is not used in this paper). It comes from the facts that V^≥λ→ V^λ is a vector bundle (by the Białinicky-Birula decomposition <cit.>) and the map P_λ→ G^λ has kernel the unipotent radical of P_λ. The map _λ is a finite map. The arguments are parallel to that of the proof <cit.>. Namely, we consider the commutative diagram V^λ≥ 0/P_λ V^λ/G^λ ([Ṽ^λ≥ 0]^G) V/G V^λ G^λ V G["q_λ"', from=1-3, to=2-1] ["_λ", bend left=20, from=2-1, to= 1-3] ["π_λ≥ 0", from=1-3, to=2-3] ["p_λ", from=1-3, to=2-5] ["π_λ"', from=2-1, to=3-1] ["q_λ"', from=2-3, to=3-1] ["p_λ", from=2-3, to=3-5] ["π", from=2-5, to=3-5] ["_λ"', bend right=20, from=3-1, to=2-3] ["_λ"', bend right=20, from=3-1, to=3-5] where Ṽ^λ≥ 0 V^λ≥ 0×^P_λG, so that we have an equivalence of stacks V^λ≥ 0/P_λ≃Ṽ^λ≥ 0/G. We now explain the definitions of the maps in this diagram. The map q_λ is given by the equivariant map (V^λ≥ 0,P_λ)→ (V^λ,G^λ) Therefore, the pullback of functions by q_λ induces a morphism q_λ^*[V^λ]^G^λ→[Ṽ^λ≥ 0]^G whose dual is the morphism between affine schemes q_λ([Ṽ^λ≥ 0]^G^λ)→([V^λ])=V^λ G^λ. The map _λ is given by the equivariant morphism (V^λ,G^λ)→(V^λ≥ 0,P_λ) which induces the morphism _λ^*[Ṽ^λ≥ 0]^G→[V^λ]^G^λ and dually the morphism of schemes _λ([V^λ]^G^λ)→([Ṽ^λ≥ 0]^G). Since q_λ∘_λ=𝕀, we have q_λ∘_λ=𝕀. On the ring of functions, we have _λ^*∘q_λ^*=𝕀 and so _λ^* is surjective. This tells us that _λ is a closed immersion. Moreover, _λ=p_λ∘_λ and so it suffices to prove that p_λ is a finite map. Since it is a morphism of finite type complex schemes, it suffices to prove that it is an integral morphism <cit.>. Since p̃_λṼ^λ≥ 0→ V is a projective morphism (Proof of Lemma <ref>), (p̃_λ)_*_Ṽ^λ≥ 0 is a coherent _V-module. Therefore, the map [V]→[Ṽ^λ≥ 0] is finite and hence integral <cit.>. We deduce that the map p_λ^*[V]^G→[Ṽ^λ≥ 0]^G is integral, and hence finite, as follows. Let a∈[Ṽ^≥λ]^G. 
It is solution of a monic polynomial with coefficients in [V]. By applying the Reynolds operator to the equation, we obtain a monic equation with coefficients in [V]^G of which a is a zero. This concludes. For λ∈ X_*(T), we define d_λ#^λ=0(V)-#^λ=0()= V^λ/G^λ. The quantity r_λ is defined in Lemma <ref>. We say that a representation V of a reductive group G is symmetric if V and its dual V^* have the same sets of weights counted with multiplicities: (V)=(V^*). For later use, we give the following easy lemma. Let V be a symmetric representation of G. Then, d_λ+2r_λ=d. We have (V)=^λ>0(V)⊔^λ=0(V)⊔^λ<0(V) and by symmetry, #^λ>0(V)=#^λ<0(V). Therefore, #(V)=2#^λ>0(V)+#^λ=0(V). These equalities are also valid for V= under the adjoint action of G. We obtain the lemma by combining them together. §.§ Parabolic induction For λ∈ X_*(T), we let _V^λ/G^λ^_V^λ/G^λ⊗^-d_λ/2=(V^λ/G^λ) be the intersection complex monodromic mixed Hodge module of the smooth stack V^λ/G^λ. We refer to <cit.> for the necessary background regarding monodromic mixed Hodge modules. Alternatively, for the purposes of this paper, the reader may prefer to consider instead constructible sheaves. The Tate twist -⊗ is then replaced by the shift [-2]. By smoothness of q_λ (Lemma <ref>), we have q_λ^*≅ q_λ^!⊗^r_λ and thus we obtain q_λ^*_V^λ/G^λ^≅_V^λ≥ 0/P_λ⊗^r_λ+d_λ/2 which by adjunction provides us with the map _V^λ/G^λ^→(q_λ)_*_V^λ≥ 0/P_λ⊗^r_λ+d_λ/2. Moreover, the map p_λ is proper (Lemma <ref>), and so by dualizing the adjunction map _V/G→ (p_λ)_*_V^λ≥ 0/P_λ, and using (p_λ)_*=(p_λ)_! and (p_λ)_*=(p_λ)_!, we obtain (p_λ)_*_V^λ≥ 0/P_λ→_V/G. We eventually obtain the map _λ (_λ)_*(π_λ)_*_V^λ/G^λ^→π_*_V/G⊗^r_λ+d_λ/2 by composing (_λ)_*(π_λ)_* applied to (<ref>) with π_* applied to (<ref>) twisted by ^r_λ+d_λ/2. Let G be a symmetric representation of G. The induction morphism _λ^*+d_λ(V^λ/G^λ)→^*+d(V/G) preserves cohomological degrees. This is straightforward since the induction at the level of cohomology vector spaces is given by taking the derived global section of the morphism of complexes (<ref>) (the sheafified induction) and by Lemma <ref>, d/2=r_λ+d_λ/2 and so in the symmetric case, the right-hand-side of (<ref>) is precisely π_*_V/G^. Moreover, _V/G^≅_V/G^ and _V^λ/G^λ^≅_V^λ/G^λ^ since V/G and V^λ/G^λ are smooth and the chosen shifts make the considered constant sheaves Verdier self-dual. §.§ Augmentation Let λ∈ X_*(T). We defined a representation V^λ of G^λ in <ref>. We can see T as a maximal torus of G^λ. If μ∈ X_*(T), we have (V^λ)^μ=V^λ∩ V^μ, (G^λ)^μ=G^λ∩ G^μ, (V^λ)^μ≥ 0 and P_μ,λ P_μ∩ G^λ. In this case, the induction diagram is (V^λ)^μ≥ 0/P_μ,λ (V^λ)^μ/(G^λ)^μ V^λ/G^λ (V^λ)^μ (G^λ)^μ V^λ G^λ["q_λ,μ"', from=1-2, to=2-1] ["p_λ,μ", from=1-2, to=2-3] ["π_λ,μ"', from=2-1, to=3-1] ["π_λ", from=2-3, to=3-3] ["_λ,μ",from=3-1, to=3-3] The definition of the induction map associated with this datum is _μ,λ^*((V^λ)^μ/(G^λ)^μ)→^*(V^λ/G^λ). If μ≼λ, then (V^λ)^μ=V^μ and (G^λ)^μ=G^μ, so that the induction _μ,λ is _μ,λ^*(V^μ/G^μ)→^*(V^λ/G^λ). For any λ,μ∈ X_*(T), one can find ν∈ X_*(T) such that V^ν=V^μ∩ V^λ, G^ν=G^μ∩ G^λ, (V^λ)^ν≥0=(V^λ)^μ≥0 and P_μ,λ=P_ν,λ. The cocharacters λ,μ define a morphism λ×μ_^×2→ T. We obtain ν by composing a general one-parameter subgroup _→_^×2, t↦ (t^a,t^b) with λ×μ. This guarantees the first two equalities of the Lemma. To guarantee the last two, it suffices to take a general one-parameter subgroup with the constraint b>0. §.§ Associativity Let λ,μ,ν∈(_,T). Then, _λ,ν=_μ,ν∘_λ,μ. 
By Lemma <ref>, we may assume that λ≼μ≼ν. This assumption only simplifies the notations in diagram (<ref>). The equality then classically follows by base-change in the commutative diagram (V^ν)^λ≥0/P_λ,ν (V^μ)^λ≥ 0/P_λ,μ (V^ν)^μ≥ 0/P_μ,ν V^λ/G^λ V^μ/G^μ V^ν/G^ν[from=1-3, to=2-2] [from=1-3, to=2-4] ["q_ν,λ"',bend right =30, from=1-3, to=3-1] ["⌟"anchor=center, pos=0.125, rotate=-45, draw=none, from=1-3, to=3-3] ["p_ν,λ", bend left=30, from=1-3, to=3-5] ["q_μ,λ"', from=2-2, to=3-1] ["p_μ,λ", from=2-2, to=3-3] ["q_ν,μ"', from=2-4, to=3-3] ["p_ν,μ", from=2-4, to=3-5]. §.§ Explicit formula for the parabolic induction For λ∈ X_*(T) we define the induction kernel k_λ∏_α∈^λ>0(V)α/∏_α∈^λ>0()α∈(^*(/T)). We let W^λ N_G^λ(T)/T be the Weyl group of G^λ. For any λ∈ X_*(T), k_λ is W^λ-invariant. It suffices to prove that W^λ send ^λ>0(V)⊂(W) to itself, and similarly for ^λ>0(). This comes from the invariance of the pairing between characters and cocharacters: for any w∈ W, ⟨ w·λ,w·α⟩=⟨λ,α⟩ and if w∈ W^λ, w·λ=λ. Let λ_→ T be a cocharacter and f∈^*(V^λ/G^λ). We have _λ(f)=∑_w∈ W/W^λw·(fk_λ)=1/# W^λ∑_w∈ Ww·(fk_λ). The second equality follows from the W^λ-invariance of both f (since f∈^*(V^λ/T)^W^λ) and k_λ (Lemma <ref>). The first equality is a computation of Euler class exactly as in the proof of <cit.> in the case of quiver representations. More generally, let λ,μ∈ X_*(T). We let k_λ,μ∏_α∈^λ>0(V^μ)α/∏_α∈^λ>0(^μ)α. For f∈^*((V^μ)^λ/(G^μ)^λ), we have _λ,μ(f)=∑_w∈ W^μ/(W^λ∩ W^μ)w(fk_λ,μ). This follows immediately from Proposition <ref> applied to V^μ as a representation of G^μ and the cocharacter λ of T, seen as a maximal torus of G^μ, and the identification of the Weyl group of (G^μ)^λ=G^λ∩ G^μ with W^λ∩ W^μ. §.§ Parabolic induction for symmetric representations Let V be a symmetric representation of a reductive group G and λ∈ X_*(T) a cocharacter of the maximal torus of T. Then, the representation V^λ of G^λ is also symmetric. It is immediate to check that '(V^λ)={α∈'(V)|⟨λ,α⟩=0} and for any α∈'(V^λ), we can identify the weight spaces V^λ_α=V_α. Therefore, the symmetry of V implies that of V^λ. Recall the order relation on X_*(T) defined in the introduction λ≼μdef.{V^λ⊂ V^μ ^λ⊂^μ. . for any λ,μ∈ X_*(T). We obtain an equivalence relation on the set X_*(T) λ∼μdef.λ≼μ and μ≼λ. We let _V X_*(T)/∼ be the quotient of X_*(T) by this equivalence relation. If λ∈ X_*(T), λ denotes its class in _V. It has an induced order relation still denoted by ≼: λ≼μλ≼μ. It is immediate that this is well-defined, i.e. the definition does not depend on the representatives λ,μ of λ,μ. Let V be a symmetric representation of a reductive group G. If λ,μ∈ X_*(T) are such that λ∼μ, then #^λ>0(V)=#^μ>0(V)=#^λ<0(V)=#^μ<0(V) and #^λ>0()=#^μ>0()=#^λ<0()=#^μ<0(). We prove the statement for V. The statement for is obtained by specialising V= since under the adjoint action of G is a symmetric representation of G. Since V is symmetric, we have #^λ>0(V)=#^λ<0(V) and moreover, #(V)=#^λ>0(V)+#^λ<0(V)+(V^λ). Since we have the same equalities with λ replaced by μ and ^λ=0(V)=^μ=0(V), we obtain #^λ>0(V)=#^μ>0(V). The other equalities follow. If λ,μ∈ X_*(T) are such that λ∼μ, then k_λ=± k_μ. If λ∼μ, we have ^λ>0(V)⊔^λ<0(V)=^>μ(V)⊔^<μ(V) and ^λ>0()⊔^λ<0()=^>μ()⊔^<μ() and since V and are symmetric representations of G, ^λ>0(V)=-^λ<0(V), ^>μ(V)=-^<μ(V) and similarly for . Therefore, (-1)^#^λ>0(V)-#^λ>0()k_λ^2=∏_α∈^λ>0(V)⊔^λ<0(V)α/∏_α∈^λ>0()⊔^λ<0()α=∏_α∈^μ>0(V)⊔^μ<0(V)α/∏_α∈^μ>0()⊔^μ<0()α=(-1)^#^μ>0(V)-#^μ>0()k_μ^2. 
Since by Lemma <ref> we have #^λ>0(V)=#^λ<0(V)=#^μ>0(V)=#^μ<0(V) and #^λ>0()=#^λ<0()=#^μ>0()=#^μ<0(), we have k_λ^2=k_μ^2 and therefore, k_λ=± k_μ. We assume that V is a symmetric representation of G. Then, for any λ, μ∈ X_*(T) such that λ=μ, the induction maps _λ and _μ differ by a sign. In particular, the images of _λ and _μ inside ^*+d(V/G) coincide. We use the explicit formula for the induction (Proposition <ref>). Let f∈^*(V^λ/G^λ)=^*(V^μ/G^μ). We have _λ(f)=∑_w∈ W/W^λw·(fk_λ), _μ(f)=∑_w∈ W/W^μw·(fk_μ). Moreover, G^λ=G^μ, W^λ=W^μ and by Lemma <ref>, k_μ=± k_λ. This concludes. The W-action on X_*(T) induces a W-action on _V. For λ∈ X_*(T), we let λ̃∈_V/W be its W-orbit. Let λ,μ∈ X_*(T) be such that λ̃=μ̃. Then, the images of _λ and _μ coincide. By Lemma <ref>, we may assume that for some w∈ W, μ=w·λ. Then, ^μ>0(V)=w·^λ>0(V) and ^μ>0()=w·^λ>0(). Therefore, w(k_λ)=k_μ. Using the explicit formula for the induction product (Proposition <ref>), we obtain _μ(f)=_λ(w^-1(f)), and therefore, since w^-1 induces an isomorphism ^*(V^μ/G^μ)=^*(V^w·λ/G^w·λ)≅^*(V^λ/G^λ), the images of _λ and _μ coincide. § TAUTOLOGICAL CLASSES Let G be a reductive group and V a representation of G. We let π V/G→ V G be the good moduli space map. We let κ V G→ be the projection to the point. Then, for any line bundle over V/G, we have an operation π_*_V/L→π_*_V/L of multiplication by the first Chern class c_1(). More precisely, we have the following lemma. We have a morphism of graded rings ^*(/G)(π_*_V/G,π_*_V/G) such that the composition κ_*∘ a where κ_*(π_*_V/G,π_*_V/G)→(κ_*π_*_V/G,κ_*π_*_V/G)≅(^*(/G),^*(/G)), send a∈^*(/G) to the endomorphism of ^*(/G) given by the cup product with a. We have a canonical adjunction morphism (π_*_V/G,π_*_V/G)≅(π^*π_*_V/G,_V/G). By precomposing with the counit map π^*π_*_V/G→_V/G, we obtain the morphism (_V/G,_V/G)≅^*(V/G)≅^*(/G)→(π_*_V/G,π_*_V/G). The last statement of the theorem follows from the functoriality of adjunctions. We now state and prove an elementary lemma regarding Cartan subalgebras of reductive Lie algebras. Let be a reductive Lie algebra, ⊂ an ideal and a Cartan subalgebra. Then, ∩ is a Cartan subalgebra of . Let π→/ be the projection. Since is reductive, we have a (noncanonical) decomposition ≅⊕(/). If ' is a maximal torus of and ” is a maximal torus of /, then '⊕” is a maximal torus of . Moreover, π()≅/∩ is an Abelian subalgebra of / and ∩ is an Abelian subalgebra of . By maximality of ', we have π()=-(∩)≥-'=” By maximality of ”, we have equality in this chain of inequalities. This means that (∩)=' and therefore, ∩ is a Cartan subalgebra of . We have the group version of Lemma <ref>. Let G be a reductive group, H⊂ G a normal subgroup and T⊂ G a maximal torus. Then, the neutral component of T∩ H is a maximal torus of H. This follows from Lemma <ref> by taking the Lie algebras of all groups appearing. It is necessary to consider the neutral component of T∩ H as shown by the elementary case of H={± 1}⊂_2(), and where T⊂_2() is the standard torus. Let G be a reductive group. For any normal subgroup H⊂ G acting trivially on V, we have (non-canonical) actions of ^*(/H) on ^*(/G)≅^*(V/G) and π_*_V/G. By choosing a decomposition (G)≅(H)⊕(G/H), we obtain a decomposition ^*(/G)≅^*(/(G/H))⊗^*(/H). At the sheaf level, the action of the lemma is obtained by pre-composing the morphism a of Lemma <ref> with the inclusion ^*(/H)→^*(/G). Let G be an algebraic group and H⊂ G a normal subgroup. Then, we have a natural inclusion ^*(/(G/H))→^*(/G). 
The map ^*(/(G/H))→^*(/G) is the pullback in cohomology for the morphism of stacks /(G/H)→/G induced by the surjective morphism G→ G/H. Let G be an algebraic group, and V a representation of G. Let H be a normal subgroup of G acting trivially on V. Then, the morphism ^*(/H)⊗^*(V/(G/H))→^*(V/G) obtained by combining the natural pullback map ^*(V/(G/H))→^*(V/G) coming from the morphism of stacks V/G→ V/(G/H) (Lemma <ref>) with the ^*(/H)-action on ^*(V/G) (Proposition <ref>) is an isomorphism. We may assume that V=, that G is connected and by quotienting out by the unipotent radical of G, that G is reductive. We let =(G) and =(H). We let ⊂ be a Cartan subalgebra. We have ≅⊕(/) and therefore a decomposition of the Weyl group W_≅ W_× W_/. We let π→/ be the projection. The morphism of the proposition can then be identified with the isomorphism ((∩)^*)^W_⊗(π()^*)^W_/→(^*)^W. § COHOMOLOGICAL INTEGRALITY FOR SYMMETRIC REPRESENTATIONS OF REDUCTIVE GROUPS We fix a symmetric representation V of a connected reductive group G. §.§ Weyl groups Let λ_→ T. We let G^λ⊂ G be the corresponding Levi subgroup and G_λ the intersection of the kernel of the action of G^λ on V^λ and the center of G^λ. We let W^λ=N_G^λ(T) be the Weyl group of G^λ and W_λ{w∈ W| V^w·λ=V^λ and G^w·λ=G^λ}={w∈ W| w·λ=λ}. W_λ is a subgroup of W. We let ẇ∈ N_G(T) be a lift of w∈ W. Then, V^w·λ=ẇV^λ. The statement of the lemma follows then immediately from the fact that if w,w'∈ W, and ẇ, ẇ'̇∈ N_G(T) are respective lifts, then, ẇẇ'̇ is a lift of ww' and ẇ^-1 a lift of w^-1. We have natural inclusions W^λ⊂ W_λ⊂ W. The inclusion W_λ⊂ W holds by definition of W_λ (see Lemma <ref>). The inclusion W^λ⊂ W comes from the inclusion N_G^λ(T)⊂ N_G(T) since G and G^λ share the same maximal torus T. Let w∈ W^λ and ẇ∈ N_G^λ(T) a lift of w. Then, ẇG^λẇ^-1=G^w·λ=G^λ and ẇV^λ=V^w·λ=V^λ and so w∈ W_λ. The group W_λ normalises the group W^λ inside W. It suffices to check that if w∈ W_λ and w'∈ W^λ, and ẇ∈ N_G(T), ẇ'̇∈ N_G^λ(T) are any lifts, then ẇẇ'̇ẇ^-1∈ G^λ. This comes from the fact that ẇ and λ commute and λ fixes G^λ and so ẇẇ'̇ẇ^-1 is fixed by λ. The group W_λ acts on ^*(/T)^W^λ≅^*(/G^λ). As the Weyl group W of G acts on ^*(/T). We just have to check that the induced action of the subgroup W_λ preserves W^λ-invariants. This is a consequence of Lemma <ref>. The group W_λ acts trivially on ^*(/G_λ). By definition, G_λ is in the center of G^λ and so in particular commutes with N_G^λ(T). Therefore, the action of W_λ is trivial. We let G^λ G^λ/G_λ. Since G_λ is a normal subgroup of G^λ, G^λ is a reductive group. Since G_λ is contained in the center of G^λ, the Weyl group of G^λ is isomorphic to W. The group W_λ acts naturally on ^*(V^λ/G^λ). We have ^*(V^λ/G^λ)≅^*(/(T/G_λ))^W^λ. The restriction to W_λ of the W action on T preserves G_λ since G_λ is in the center of G^λ. Therefore, we obtain a W_λ-action on T/G_λ. It induces a W_λ-action on ^*(/(T/G_λ)). The fact that W_λ normalises W^λ (Lemma <ref>) implies that ^*(/(T/G_λ))^W^λ is preserved by the W_λ-action. By combining Corollary <ref>, Lemma <ref> and Lemma <ref>, we obtain the following lemma. The W_λ-action on ^*(/G^λ)≅^*(V^λ/G^λ) obtained in Corollary <ref> and the W_λ-action on ^*(V^λ/G^λ)⊗^*(/G_λ) obtained by the tensor product of the actions given in Lemmas <ref> and <ref> coincide via the isomorphism given in Proposition <ref>. There exists a character ε_V,λ W_λ→{± 1} such that for any w∈ W_λ, w(k_λ)=ε_V,λ(w)k_λ. 
It suffices to prove the existence and unicity of the sign ε_V,λ(w)∈{± 1} satisfying the equality of the proposition, for any w∈ W_λ. The multiplicativity then follows from the unicity of this sign. The set of weights ^λ=0(V)=(V^λ) only depends on λ and so is stable under the action of W_λ. Therefore, ^λ>0(V)⊔^λ<0(V)=(V)∖^λ=0(V) is stable under W_λ. Also, by symmetry, ^λ>0(V)=-^λ<0(V). Therefore, ∏_α∈^λ>0(V)⊔^λ<0(V)α=(-1)^#^λ>0(V)∏_α∈^λ>0(V)α^2. By applying this equality to under the adjoint action, seen as a symmetric representation of G, we obtain ∏_α∈^λ>0()⊔^λ<0()α=(-1)^#^λ>0()∏_α∈^λ>0()α^2. Therefore, (-1)^#^λ>0(V)-#^λ>0()k_λ^2=∏_α∈^λ>0(V)⊔^λ<0(V)α/∏_α∈^λ>0()⊔^λ<0()α. By applying w∈ W_λ, we obtain w(k_λ^2)=k_λ^2. Therefore, w(k_λ)=± k_λ, which proves the existence of the sign. The unicity is straighforward. Let λ∈ X_*(T), w∈ W_λ and f∈_λ. Then, _λ(w· f)=ε(w)_λ(f). By Proposition <ref>, we have _λ(w· f) =1/# W^λ∑_w'∈ Ww'(wf)w'(k_λ) =1/# W^λ∑_w”∈ W(w”f)w”(w^-1k_λ) =1/# W^λ∑_w”∈ W(w”f)w”(ε(w)k_λ) by Proposition <ref> which concludes. We now give an elementary result from the representation theory of finite groups. Let G be a finite group χ G→^* a character of G and V a finite-dimensional representation of G. Then, the formula p_χ(v)=1/# G∑_g∈ Gχ(g)^-1g· v is the projector onto the χ-isotypic component V^χ of V. There are essentially three things to check: * p_χ is an involution: p_χ∘ p_χ=p_χ, * p_χ(f)=f for f∈ V^χ, * (p_χ)⊂ V^χ, which are routine. §.§ Proof of the cohomological integrality theorem In the rest of this section, we prove Theorem <ref>. For λ∈ X_*(T), we define the vector space _λ^*+d_λ(V^λ/G^λ) By <ref> We have induction maps _λ,μ_λ→_μ for λ, μ∈ X_*(T). For λ∈ X_*(T), we choose a splitting ^λ=_λ⊕(^λ/_λ). It induces a splitting of the maximal torus =_λ⊕(/_λ) (Lemma <ref>). We let _λ^^*(V^λ/(G^λ/G_λ)), so that _λ≅_λ^⊗^*(/G_λ) (Proposition <ref>). We let _λ^*(V^λ/T) and _λ^^*(V^λ/(T/G_λ)) so that _λ≅_λ^⊗^*(/G_λ), _λ=_λ^W^λ and _λ^≅(_λ^)^W^λ. For each λ∈ X_*(T), we let J_λ be the smallest W^λ-stable _λ^-submodule of _λ^,_λ^[∏_α∈(^λ)∖{0}α^-1] containing k_μ,λ=∏_α∈^μ>0(V^λ)α/∏_α∈^μ>0(^λ)α for all μ∈ X_*(T) such that λ⋠μ. We let _λ^_λ[∏_α∈(^λ)∖{0}α^-1]≅_λ^,⊗^*(/G_λ) and _λ^(_λ^)^W^λ≅_λ^⊗^*(/G_λ). We can characterise J_λ as the smallest W^λ-stable _λ^-submodule of _λ^[∏_α∈(^λ)∖{0}α^-1] containing k_μ,λ=∏_α∈^μ>0(V^λ)α/∏_α∈^μ>0(^λ)α for all μ∈ X_*(T) such that μ≺λ. If μ≺λ, then we have V^μ≤ V^λ and ^μ≤^λ with at least one strict inequality. If we have λ≼μ then, we have V^λ≤ V^μ, ^λ≤^μ providing a contradiction. Therefore, if μ≺λ, then λ⋠μ and by definition, k_μ,λ∈ J_λ. Conversely, if λ⋠μ, we let ν∈ X_*(T) be such that V^ν=V^λ∩ V^μ, ^ν=^λ∩^μ, (V^λ)^ν≥ 0=(V^λ)^μ≥ 0 and (^λ)^ν≥ 0=(^λ)^μ≥ 0 (Lemma <ref>). We have V^ν⊂ V^λ and ^ν⊂^λ with at least one strict inclusion. Therefore, ν≺λ and moreover, k_ν,λ=k_μ,λ. This proves the lemma. We have J_λ^W^λ⊂_λ^. This comes from the fact that the induction product <ref> is well-defined on the cohomology, without requiring localisation, even though the explicit formula (Proposition <ref>) involves rational fractions. More precisely, J_λ^W^λ is linearly generated by the elements ∑_w∈ W^λw·(fk_μ,λ) for f∈_λ^ and μ≺λ, by Lemma <ref>. These are averages over W^λ of elements of J_λ. We can rewrite this sum as ∑_w'∈ W^λ/W^μw'·(∑_w∈ W^μ(w· f))k_μ,λ since k_μ,λ is W^μ-invariant (Lemma <ref>). This is the formula for I_μ,λ(∑_w∈ W^μ(w· f)). Since ∑_w∈ W^μ(w· f) is polynomial, its induction I is also polynomial. Therefore, it belongs to _λ. 
Since I∈ J_λ⊂_λ^[∏_α∈(^λ)∖{0}α^-1] by W^λ-invariance of J_λ, I∈_λ∩_λ^[∏_α∈(^λ)∖{0}α^-1]=_λ^. The submodule J_λ is W_λ-invariant. Let w∈ W_λ. We let J_λ^w w· J_λ. Then, J_λ^w is W^λ-stable by Lemma <ref>. It is an _λ^-submodule of _λ^[∏_α∈()∖{0}(w·α)^-1]=_λ^[∏_α∈()∖{0}α^-1]. Moreover, it contains k_w·μ,w·λ for λ⋠μ. Now, we use that by Lemma <ref>, for w∈ W_λ, {± k_μ,λλ⋠μ}={± k_w·μ,w·λ|λ⋠μ} since for w∈ W_λ, w·λ=λ. By minimality of J_λ, we deduce that J_λ⊂ J_λ^w. By symmetry of the argument, we conclude J_λ^w=J_λ. J_λ^W^λ is W_λ-invariant. This follows directly from Lemmas <ref> and <ref>. We let _λ⊂_λ^ be a direct sum complement of J_λ^W^λ: _λ^=J_λ^W^λ⊕_λ. For any λ∈ X_*(T), the image of ⊕_λ⋠μ_μ→_λ is J_λ^W^λ⊗^*(/G_λ). This is a calculation very similar to that in the proof of Lemma <ref>. Namely, if f∈_μ and g∈^*(/G_λ), _μ(fg)=_μ(f)g by W^λ-invariance of g (Lemma <ref>). Moreover, fg∈_μ. Therefore, a spanning subset of J_λ^W^λ⊗^*(/G_λ) can be obtained in this way for λ⋠μ. Conversely, if f∈_μ⊂_μ=_λ for λ⋠μ, we write f=∑_i=1^Nf_i⊗ g_i with f_i∈_λ^ and g_i∈^*(/G_λ) so that _μ(f)=∑_i=1^N_μ,λ(f_i)⊗ g_i indeed belongs to J_λ^W^λ⊗^*(/G_λ). For any λ∈ X_*(T), the map ⊕_μ≼λ_μ⊗^*(/G_μ)_λ is surjective. We proceed by induction on λ∈ X_*(T). If λ is minimal, then J_λ={0} and so _λ=^*(V^λ/(G^λ/G_λ)). By Proposition <ref>, the map _λ⊗^*(/G_λ)→^*(V^λ/G^λ) of the lemma is indeed surjective (as it even is an isomorphism). Now, let λ∈ X_*(T). We assume that the map of the lemma is surjective for any λ'≼λ. By associativity of the induction (Proposition <ref>), the image of the map of the lemma coincides with the image of the composition (⊕_μ≺λ⊕_μ'≺μ_μ'⊗^*(/G_μ'))⊕(_λ⊗^*(/G_λ))(⊕_μ≺λ_μ)⊕(_λ⊗^*(/G_λ))→_λ where the morphism I is ⊕_μ≺λ⊕_μ'≺μ_μ',μ. By induction hypothesis, (⊕_μ≺λ⊕_μ'≺μ_μ'⊗^*(/G_μ'))→(⊕_μ≺λ_μ) is surjective and by Lemma <ref>, the image of (⊕_μ≺λ_μ)→_λ is J_λ^W^λ⊗^*(/G_λ), of which _λ⊗^*(/G_λ) is a direct sum complement by definition of _λ. This proves the surjectivity. For any λ∈ X_*(T), _λ is a finite-dimensional vector space. It suffices to prove that J_λ^W^λ⊂_λ^ has finite codimension. If we replace V by V×, then the corresponding module J_λ⊂^_λ[∏_α∈()∖{0}α^-1] is smaller than the one for V, and therefore so is the corresponding module of invariants J_λ^W^λ, and therefore its codimension is bigger. Moreover, in this case, J_λ⊂_λ^, i.e. there is no need to localise. We have an inclusion _λ^/J_λ^W^λ→_λ^/J_λ, and it therefore suffices to prove that the ideal J_λ⊂_λ^ has finite codimension. We see _λ^ as the ring of regular functions on the affine algebraic variety /_λ. We prove that the closed subscheme defined by J_λ is supported on {0}. If the ideal J_λ vanishes on y∈(/_λ)_∖{0} we let y'∈(/_λ)_ be a generic rational approximation of y. A lift y” of Ny' to for n≥ 0 big enough so that Ny'∈ X_*(T/G_λ) gives a cocharacter ν∈ X_*(T). It satisfies ^ν=0(V)⊂^λ=0(V) and ^ν=0()⊂^λ=0(). Moreover, at least one of these inequalities is strict. Indeed, we have y”∉_λ, which means that either y” is not in the center of Lie(G^λ) and so ν defines a strict Levi of G^λ or y” is not in the Lie algebra of the kernel of the action G^λ→(V^λ) in which case V^ν⊊ V^λ. Therefore, ν≺λ. By definition, we have ∏_α∈^ν>0(V^λ)α∈ J_λ. We also have ∏_α∈^ν>0(V^λ)⟨ y,α⟩≠ 0 since for any α∈^ν>0(V^λ), ⟨ν,α⟩>0 has the same sign as ⟨ y,α⟩. This is a contradiction since we found an element of J_λ not vanishing on y. Let G_λ^0 be the neutral component of G_λ. Let k=_λ. 
Then, G_λ^0≅(^*)^k for some k≥ 0 and the natural pullback map ^*(/G_λ)→^*(/G_λ^0) is an isomorphism. The group G_λ is a subgroup of the center of G^λ, and so it is a diagonalisable algebraic group. Therefore, its neutral component is a torus. We have a splitting G≅ G_λ^0×π_0(G_λ) <cit.> and so, by the Künneth formula, ^*(/G_λ)≅^*(/G_λ^0)⊗^*(/π_0(G_λ)). Since π_0(G_λ) is a finite group, ^*(/π_0(G_λ))≅^*()^π_0(G_λ)≅, which concludes. The rest of this section is devoted to the proof of the injectivity of the cohomological integrality map in Theorem <ref>. For λ∈ X_*(T), we let J̃_λ J_λ_λ. This is a _λ-submodule of _λ^. Recall the inclusion _λ=_λ^W^λ→_λ. The elements α∈^λ>0(V)∪^λ>0() (or more generally, α∈ X^*(T) such that ⟨λ,α⟩≠0), seen as elements of _λ, are not zero divisors in the quotient _λ-module _λ^/J̃_λ. Let α∈^λ>0(V)∪^λ>0(). We see α∈^* and ⟨λ,α⟩≠ 0 implies that α does not vanish on _λ since it takes a nonzero value on (d/dtλ(t))_t=0∈_λ. Therefore, the projection α'∈_λ^* of α with respect to the chosen (or any) decomposition ^*≅(/_λ)^*⊕_λ^* is nonzero. We choose a basis (x_1,,x__λ) of _λ^* such that x_1=α' and we order the monomials by the lexicographic order so that x_1≥ x_2≥. Let g∈_λ^≅_λ^,⊗^*(/G_λ). It may be written in a unique way (see also Lemma <ref>) g=∑_(ν_1,,ν__λ)∈^_λg_νx^ν with g_ν∈_λ^,. We have an equivalence between the following two statements: * g∉J̃_λ, * g_ν∉J_λ for some ν∈^_λ. The implication (1)(2) is immediate as if all g_ν belong to J_λ, then g would be in J̃_λ. The reverse implication (2)(1) is also true by the unicity of the decomposition (<ref>) of g. Assume that g∉J̃_λ. We need to show that for α∈^λ>0(V)∪^λ>0(), α g∉J̃_λ. We let ν=max{ν' g_ν'≠ 0}. We may assume that g_ν'∉J_λ. We have α g=(α-α')g+α'g and α-α'∈((/_λ)^*). Therefore, the term of highest degree in the variables (x_1,,x__λ) in α g is g_νx_1x^ν and g_ν∉J_λ. By the equivalence above, α g∉J̃_λ. This concludes. We let _λ'_λ[∏_α∈^λ>0(V)∪^λ>0()α^-1], '^_λ[α^-1α∈()∖{0}] and _λ' (_λ')^W^λ. We have the localisation maps L_λ→_λ', L_λ→_λ'. We let We let J̃_λ'_λ'L(J̃_λ) be the localised _λ'-module inside _λ'^. The localisation maps L_λ^/J̃_λ→_λ'^/J̃_λ', L_λ/(J̃_λ)^W^λ→_λ'/(J̃_λ')^W^λ are injective. This follows from Lemma <ref> as we localise by a set of nonzero divisors of the quotient _λ^/J̃_λ in _λ. Let λ, μ∈ X_*(T) and ν as in Lemma <ref>. Then, k_μ=k_ν,λ∏_α∈^μ>0(V)∩^λ≠0(V)α/∏_α∈^μ>0()∩^λ≠0()α. This is immediate from the definitions. We define k'_μ,λ∏_α∈^μ>0(V)∩^λ≠0(V)α/∏_α∈^μ>0()∩^λ≠0()α. This is an invertible element in _λ'. Let λ, μ∈ X_*(T) and w in W be such that w·μ≠λ and λ⋠w·μ. Then, w· k_μ∈J̃'_λ. Of course, the presence of W is superfluous since w k_μ=k_w·μ and it suffices to prove that for any λ,μ∈ X_*(T) such that λ⋠μ, k_μ∈J̃'_λ. We let ν∈ X_*(T) be such that V^ν=V^λ∩ V^μ, ^ν=^λ∩^μ, (V^λ)^ν≥ 0=(V^λ)^μ≥ 0 and (^λ)^ν≥ 0=(^λ)^μ≥ 0 (Lemma <ref>). By assumption, ν≺λ. We can write, by Lemma <ref>, k_μ=k_ν,λ∏_α∈^μ>0(V)∩^λ≠0(V)α/∏_α∈^μ>0()∩^λ≠0()α. This element is in J̃'_λ since k_ν,λ is in J_λ, by Lemma <ref>. Let λ∈ X_*(T) be such that for some w∈ W, λ≼ w·λ. Then, w·λ=λ. The lemma follows from the inclusions V^λ⊂ V^w·λ and ^λ⊂^w·λ and the equalities of dimensions V^λ= V^w·λ and ^λ=^w·λ. For λ∈ X_*(T), we have an inclusion _λ≅^W≅_λ^W→_λ≅_λ^W^λ. since ≅_λ and W^λ⊂ W (Lemma <ref>). For any f_μ∈_μ, μ̃∈_V/W such that f_μ=0 if λ≺ w·μ for some w∈ W, we have _λ(∑_μ̃∈_V/W_μ(f_μ))≡ k_λ·∑_w∈ W_λ/W^λε_V,λ(w)(w· f_λ)J̃'_λ. 
We calculate: _λ(∑_μ̃∈_V/W_μ(f_μ)) =_λ( ∑_μ̃∈_V/W1/# W^μ∑_w∈ W(w· f_μ)(w· k_μ)) ≡1/# W^λ∑_w∈ W wλ=λ(w· f_λ)(w· k_λ)J̃_λ' by Lemma <ref> = 1/# W^λk_λ·∑_w∈ W_λε_V,λ(w)(w· f_λ) by definition of the sign ε_V,λ. The map ⊕_μ̃∈_V/W(_μ⊗^*(/G_μ))^ε_V,μ→^*(V/G) is injective. Let f_μ∈(_μ⊗^*(/G_μ))^ε_V,μ, μ̃∈_V/W. We let T∑_μ̃∈_V/W_μ(f_μ). We assume that not all f_μ's are zero, and we have to prove that T≠0. We let λ̃∈_V/W be such that f_λ≠0 and for any μ̃∈_V/W such that f_μ≠0, for any w∈ W, λ⋠μ (otherwise, we may replace λ by μ, etc.). By Corollary <ref>, we have _λ(∑_μ̃∈_V/W_μ(f_μ))≡ k_λ·∑_w∈ W_λ/W^λε(w)(w· f_λ)≡ k_λ# (W_λ/W^λ) f_λJ̃'_λ. and since f_λ∉J̃'_λ, since f_λ∈_λ⊗^*(/G_λ), and k_λ is invertible in _λ', _λ(T)≠0. Therefore, T≠ 0, proving injectivity. The map ⊕_μ̃(_μ⊗^*(/G_μ))^ε_V,μ^*(V/G) is an isomorphism. The injectivity is Proposition <ref>. The surjectivity comes from Lemmas <ref> and <ref> and the formula _μ(f)=1/# W^μ∑_w∈ W/W_μw'·(∑_w∈ W_με_V,μ(w)w· f )(w'· k_μ) deduced from Propositions <ref> and <ref> which implies, using Proposition <ref>, that _μ(f) vanishes if f is not in the ε_V,μ-isotypic component of _μ⊗^*(/G_μ) for the W_μ-action. § EXAMPLES In this section, we give some explicit examples of cohomological integrality isomorphisms for some choices of pairs (G,V) of a reductive group G and a representation V of G. We give each time the explicit formula for the cohomological integrality isomorphisms. In each case, it is possible to verify by hand that they indeed are isomorphisms. §.§ (^*)^2↷^2⊕(^2)^∨ We let G(^*)^2 act on V^2⊕(^2)^∨≅^4 by (t,u)· (a,b,c,d) (ta,ub,t^-1c,u^-1d). We have _V={λ_0,λ_1,λ_2,λ_3} where λ_0 _ → (^*)^2 v ↦ (1,1) λ_1 _ → (^*)^2 v ↦ (v,1) λ_2 _ → (^*)^2 v ↦ (1,v) λ_3 _ → (^*)^2 v ↦ (v,v) We have V^λ_0=V, V^λ_1=(0⊕)⊕(0⊕^∨), V^λ_2=(⊕0)⊕(^∨⊕0) and V^λ_3={0}. We can describe the induction maps as follows: _λ_1,λ_0 [x_1,x_2] → [x_1,x_2] f ↦ x_1f _λ_2,λ_0 [x_1,x_2] → [x_1,x_2] f ↦ x_2f _λ_3,λ_0 [x_1,x_2] → [x_1,x_2] f ↦ x_1x_2f _λ_3,λ_1 [x_1,x_2] → [x_1,x_2] f ↦ x_2f _λ_3,λ_2 [x_1,x_2] → [x_1,x_2] f ↦ x_1f We have _λ_0=⊂[x_1,x_2], _λ_1=, _λ_2=, _λ_3= and the cohomological integrality isomorphism is ⊕[x_1]⊕[x_2]⊕[x_1,x_2] → [x_1,x_2] (a_0,f(x_1),g(x_2),h(x_1,x_2)) ↦ a_0+x_1f(x_1)+x_2g(x_2)+x_1x_2h(x_1,x_2). §.§ _2()↷^2⊕(^2)^∨ We let G_2() act on V^2⊕(^2)^∨ via g· (u,v) (gu,(g^-1)^tv), where the superscript t indicates the transpose. We let T (^*)^2 be the standard torus. The Weyl group is W=_2. We have _V/W={λ̃_̃0̃,λ̃_̃1̃,λ̃_̃2̃} where λ_0 _ → (^*)^2 v ↦ (1,1) λ_1 _ → (^*)^2 v ↦ (v,1) λ_2 _ → (^*)^2 v ↦ (v,v) Therefore, we have V^λ_0=V, V^λ_1=(0⊕)⊕(0⊕^∨) and V^λ_2={0}. We can describe the induction maps as follows: _λ_1,λ_0 [x_1,x_2] → [x_1,x_2]^_2=[x_1+x_2,x_1x_2] f ↦ x_1/x_1-x_2f(x_1,x_2)+x_2/x_2-x_1f(x_2,x_1) _λ_2,λ_0 [x_1,x_2] → [x_1,x_2]^_2=[x_1+x_2,x_1x_2] f ↦ x_1x_2/x_1-x_2f(x_1,x_2)+x_1x_2/x_2-x_1f(x_2,x_1) _λ_2,λ_1 [x_1,x_2] → [x_1,x_2] f ↦ x_2f The induction _λ_1,λ_0 is surjective, since x_1+x_2=_λ_1,λ_0(x_1) and x_1x_2=_λ_1,λ_0(x_1x_2). A direct sum complement of the image of _λ_2,λ_1 is [x_1]=_λ_1⊗[x_1]. Therefore, we have _λ_0=0, _λ_1=, _λ_2= and the cohomological integrality isomorphism reads [x_1]⊕[x_1,x_2]^ → [x_1,x_2]^_2 (f(x_1),g(x_1,x_2)) ↦ x_1f(x_1)-x_2f(x_2)/x_1-x_2+2x_1x_2g(x_1,x_2)/x_1-x_2 where [x_1,x_2]^ is the sign-isotypic component, since the kernel x_1x_2/x_1-x_2 changes sign when we exchange the variables x_1,x_2. §.§ _2()↷(^2⊕(^2)^∨)^g We generalize the previous situation by considering the diagonal action of _2() on (^2⊕(^2)^∨)^g. 
We can describe the induction maps as follows, analogously to <ref>. _λ_1,λ_0 [x_1,x_2] → [x_1,x_2]^_2=[x_1+x_2,x_1x_2] f ↦ x_1^g/x_1-x_2f(x_1,x_2)+x_2^g/x_2-x_1f(x_2,x_1) _λ_2,λ_0 [x_1,x_2] → [x_1,x_2]^_2=[x_1+x_2,x_1x_2] f ↦ x_1^gx_2^g/x_1-x_2f(x_1,x_2)+x_1^gx_2^g/x_2-x_1f(x_2,x_1) _λ_2,λ_1 [x_1,x_2] → [x_1,x_2] f ↦ x_2^gf We have _λ_0=⊕_j=0^g-2(x_1+x_2)^j, _λ_1=⊕_j=0^g-1 x_2^j, _λ_2= and the cohomological integrality isomorphism reads _λ_0⊕ (_λ_1⊗[x_1])⊕[x_1,x_2]^ → [x_1,x_2]^_2 (f(x_1,x_2),g(x_1,x_2),h(x_1,x_2)) ↦ f(x_1,x_2)+x_1^gg(x_1,x_2)-x_2^gg(x_2,x_1)/x_1-x_2+2x_1^gx_2^gh(x_1,x_2)/x_1-x_2. §.§ _2()↷^2 We let G_2() act on ^2 via the natural action. It is a symmetric representation. We have W=_2. We let T⊂_2() be the maximal torus of diagonal matrices. We have _V/W={λ̃_̃0̃,λ̃_̃1̃} where λ_0 _ → T v ↦ (1,1) λ_1 _ → T v ↦ (v,v^-1). We have V^λ_0=V, V^λ_1={0}. We can describe the induction map as follows. _λ_1,λ_0 [x] → [x] f ↦ 2f which can also be identified with the integrality isomorphism with _λ_1= and _λ_0={0}. §.§ _2()↷^d Let V_d be the d-dimensional irreducible representation of _2(). Up to a scaling factor, the induction morphism is _λ_1,λ_0 [x] → [x] f ↦ 2x^⌊d/2⌋-1f. We have _λ_1= and _λ_0=⊕_j=0^⌊d/2⌋-2 x^j. The cohomological integrality isomorphism is _λ_0⊕[x] → [x] f,g ↦ f+2x^⌊d/2⌋-1g. We note, for example in <ref> and also <ref>, that the integrality morphisms are not isomorphisms when we consider cohomology with integral coefficients instead of rational coefficients. In <ref>, this comes from the presence of the factor 2 in the cohomological integrality morphism. §.§ _2()↷(𝔰𝔩_2())^g Let g≥ 0. We consider the diagonal adjoint action of _2() on V(𝔰𝔩_2())^g. Again, _V={λ_1,λ_1} with λ_0,λ_1 as in <ref>. The induction map reads _λ_1,λ_0 [x] → [x] f ↦ x^g-1f. We have V^λ_1≅^g with the trivial action of T. Therefore, _λ_1= and _λ_0=⊕_j=0^g-2 x^j. The cohomological integrality isomorphism is [x]⊕_0 → [x] f,g ↦ g(x)+x^g-1f(x). This example is an example for which G_λ_0={±1} is a finite group. We see that it does not affect the cohomological integrality isomorphism. §.§ Topology of the algebra of invariants Computing the polynomial invariants of binary forms is a long-standing problem in invariant theory, solved for binary forms with small degrees, and still challenging in higher degrees. The sheafified version of our cohomological integrality theorem, which is the subject of a forthcoming work, would give an algorithm to compute the intersection cohomology of the GIT quotients V_d_2 where V_d is the d-dimensional irreducible representation of _2(). For d∈, one can define an _2()-action on the space of binary forms of the form b(x,y)=∑_j=0^da_ix^iy^d-i by making _2() act on the coordinates (x,y) via the natural representation. This induces an action of _2() on the polynomial algebra [a_i, 1≤ i≤ d]. One wants to compute the invariant ring [a_i, 1≤ i≤ d]^_2(). Let V_d be the irreducible d-dimensional representation of _2(). In modern terms, we try to compute the geometric invariant theory quotient V_d_2(). We may give answers to this problem in small degrees. In our situation, motivated by integrality results, we also want to determine the stable locus inside V_d, that is the open subset of closed _2()-orbits with finite stabilizer. §.§.§ Invariants of linear binary forms Let V_2=^2 be the natural representation of _2(). Then, we calculate easily V_2_2()= since the action of _2() on V_2 has two orbits, {0} and V_2∖{0}, of those only {0} is closed. 
The stabilizer of the closed orbit {0} is _2(), which is not finite. Therefore, the stable locus is empty. §.§.§ Invariants of binary quadratic forms We let V_3 be the 3-dimensional irreducible representation of _2(). There is single polynomial invariant for binary quadratic forms ax^2+bxy+cy^2, which is the discriminant D=b^2-4ac. Therefore, we have V_2_2()≅. Since V_3/_2()=0 and =1, the stable locus is empty. §.§.§ Invariants of binary cubic forms We let V_4 be the 4-dimensional irreducible representation of _2(). There is one polynomial invariant for binary cubic forms ax^3+3bx^2y+3cxy^2+dy^3, which is the discriminant D=3b^2c^2+6abcd-4b^3d-4c^3a-a^2d^2. Therefore, V_4_2()≅. Moreover, since V_4/_2()=1=, the stable locus is non-empty. By ^*-equivariance, the stable locus is the open locus of binary cubic forms of non-zero discriminant. We notice that (^2)=1=_λ_0=1 where _λ_0 is given in <ref> for d=4. §.§.§ Invariants of binary quartic forms Let V_5 be the irreducible 5-dimensional representation of _2(). The ring of invariants of binary quartic forms has two algebraically independent generators i,j. Therefore, V_5_2()≅^2. Moreover, V_5/_2()=2=^2 and therefore, the stable locus is non-empty. §.§.§ Invariants of binary quintic forms Let V_6 be the irreducible 6-dimensional representation of _2(). There are four polynomial invariants I_4, I_8, I_12, I_18, of respective degrees 4, 8, 12, 18. The first three I_4, I_8, I_12 are algebraically independent while the square of the last can be expressed in terms of them: I_18^2=-144I_12^3+(I_4^3-72I_4I_8)I_12^2+(24I_8^2-6I_4^2I_8^2)I_12+9I_4I_8^4. This realizes the GIT quotient V_6_2() as a double cover of ^3 ramified over the hypersurface -144I_12^3+(I_4^3-72I_4I_8)I_12^2+(24I_8^2-6I_4^2I_8^2)I_12+9I_4I_8^4=0. §.§.§ Invariant of binary sextic forms Let V_7 be the 7-dimensional irreducible representation of _2(). The ring of polynomial invariants of binary sextic forms has 5 generators I_2, I_4, I_6, I_10, I_15 of respective degrees 2, 4, 6, 10, 15. The generators I_2, I_4, I_6, I_10 are algebraically independent and I_1^2 can be expressed in terms of I_2, I_4, I_6, I_10. This realizes V_7_2() has a double cover of ^4 ramified over a hypersurface. Moreover, since V_7_2()=4=^4, the stable locus is non-empty.
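As an elementary cross-check of the examples of this subsection (a quick dimension count, using only the generator counts recalled above), note that whenever the stable locus is non-empty the GIT quotient has dimension dim(V_d ⫽ SL_2(ℂ)) = dim V_d − dim SL_2(ℂ) = d−3. This matches the number of algebraically independent generators listed above: 1 for binary cubics (d=4), 2 for binary quartics (d=5), 3 for binary quintics (d=6) and 4 for binary sextics (d=7). For d=2 and d=3, where the stable locus is empty, the naive count d−3 does not compute the dimension of the quotient, consistently with the computations above.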
http://arxiv.org/abs/2406.07861v1
20240612042739
A Measurement of CO(3-2) Line Emission from eBOSS Galaxies at $z\sim 0.5$ using Planck Data
[ "Anirban Roy", "Nicholas Battaglia", "Anthony R. Pullen" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.CO" ]
http://arxiv.org/abs/2406.09176v1
20240613143733
Spherical collapse and black hole evaporation
[ "Madhavan Varadarajan" ]
gr-qc
[ "gr-qc", "hep-th" ]
Raman Research Institute, Bangalore-560 080, India
Spherical collapse and black hole evaporation
Madhavan Varadarajan
§ ABSTRACT We consider spherically symmetric gravity coupled to a spherically symmetric scalar field with a specific coupling which depends on the Areal Radius. Appropriate to spherical collapse, we require the existence of an axis of symmetry and consequently a single asymptotic past and future (rather than a pair of `left' and `right' ones). The scalar field stress energy takes the form of null dust. Its classical collapse is described by the Vaidya solution. From a two dimensional `(r,t)' perspective, the scalar field is conformally coupled so that its quantum stress energy expectation value is well defined. Quantum back reaction is then incorporated through an explicit formulation of the 4d semiclassical Einstein equations. The semiclassical solution describes black hole formation together with its subsequent evaporation along a timelike `apparent horizon'. A balance law at future null infinity relates the rate of change of a back reaction-corrected Bondi mass to a manifestly positive flux. The detailed form of this balance law together with a proposal for the dynamics of the true degrees of freedom underlying the putative non-perturbative quantum gravity theory is supportive of the paradigm of singularity resolution and information recovery proposed by Ashtekar and Bojowald. In particular, all the information, including that in the collapsing matter, is expected, in our proposed scenario, to emerge along a single `quantum extended' future null infinity. Our analysis is on the one hand supported and informed by earlier numerical work of Lowe <cit.> and Parentani and Piran <cit.> and on the other, serves to clarify certain aspects of their work through our explicit requirement of the existence of an axis of symmetry. § INTRODUCTION The aim of this work is to study black hole evaporation and the Hawking Information Loss Problem <cit.> in the simplified context of spherical symmetry. We are interested in the general relativistic spherical collapse of a matter field in a context which allows analytical understanding of its classical collapse to a black hole as well as the computation of its quantum back reaction on the collapsing spacetime geometry. These aims are achieved by choosing the matter field to be a spherically symmetric massless scalar field and by defining its coupling to gravity to be a modification of standard minimal coupling, this modification being dependent on the areal radius of the spheres which comprise the orbits of the angular Killing fields of the spacetime. We require the spacetime to be asymptotically flat in the distant past. Initial data for the matter field is specified at past null infinity. This matter then collapses to form a black hole. Since the 4d spacetime is spherically symmetric and in the distant past looks like standard 4d Minkowski spacetime, the axis of symmetry is located within the spacetime as a 1d line which is timelike in the distant past. We restrict attention to the case wherein the axis of symmetry is timelike everywhere. By definition, the Areal Radius R vanishes along the axis.
We note here that the axis is distinguished from the R=0 classical black hole singularity in that the spacetime geometry at the axis is non-singular. The existence of the axis of symmetry in collapsing spherically symmetric spacetimes is a key feature which differentiates such spacetimes from those of eternal black holes. In the case of spherical symmetry, the eternal black hole geometry is that of the Kruskal extension of Schwarzschild spacetime. Such an eternal black (and white) hole spacetime does not have an axis of symmetry; instead, and in contrast to a collapse situation, this spacetime has not one, but two sets of infinities, a left set and a right set. The analysis of information loss in the context of spacetimes with a pair of infinities becomes problematic as can be seen in the example of the 2d CGHS model. In this model <cit.>, the vacuum state at left past null infinity is viewed as Hawking radiation by observers at right future null infinity, whereas the collapsing matter (and, hence, the information regarding its nature) propagates from right past null infinity towards left future null infinity. While much has been learnt about black hole evaporation and the purity of the state along an anticipated quantum extension of right future null infinity in this model <cit.>, one of its unsatisfactory features is the existence of the pair of left and right infinities. [Another is the mass independence of the Hawking temperature which is a consequence of the detailed horizon red shift being different from the 4d gravitational one. Since the spacetime geometry studied in this work is general relativistic, the horizon red shift is the standard one as is the inverse mass dependence of the Hawking temperature.] From this point of view, the importance of the incorporation of the axis of symmetry in the spacetimes considered in this work is that, as a consequence, such spacetimes have a single set of infinities. A description of the results obtained in this paper together with its layout is as follows. In section <ref>, we discuss the kinematics of spherical symmetry. We describe the coordinates used, the location of the axis in these coordinates and discuss the behavior of fields at the axis. We prescribe `initial' conditions for the geometry which ensure asymptotic flatness in the distant past, as well as for the nature of matter data in the distant past. In section <ref> we describe the classical dynamics of the system. We exhibit the action and show that the matter stress energy takes the form of a pair of (infalling and outgoing) streams of null dust. We show that the dynamical equations together with the initial conditions and the requirement of axis existence are solved by the Vaidya spacetime (in which collapse of the infalling null dust stream forms a black hole). In section <ref> we derive the semiclassical equations which incorporate back reaction and then combine analytical results and physical arguments with prior numerical work to describe the geometry of the semiclassical solution. This geometry corresponds to the formation of a black hole through spherical collapse of the scalar field and its subsequent evaporation through quantum radiation of the scalar field.
We argue that the detailed nature of this balance law suggests, in a well defined manner, that the classical future null infinity admits a quantum extension wherein correlations with the Hawking radiation manifest so that the state on this extended future null infinity is pure. In section <ref> we combine the results of sections <ref> and <ref> together with informed speculation on the nature of the true degrees of freedom of the system at the deep quantum gravitational level and thereby propose a spacetime picture which encapsulates a possible solution of the Information Loss Problem. The solution is along the lines of the Ashtekar-Bojowald paradigm <cit.> wherein quantum gravitational effects resolve the classical black hole singularity opening up a vast quantum extension of classical spacetime beyond the hitherto classically singular region wherein correlations with the Hawking radiation and information about the collapsing matter emerge. Section <ref> is devoted to a discussion of our results and further work. Some technical details and proofs are collected in Appendices. In what follows we choose units in which c=1. We shall further tailor our choice of units in section <ref> so as to set certain coupling constants to unity. § KINEMATICS IN SPHERICAL SYMMETRY §.§ Spacetime geometry Choosing angular variables along the rotational killing fields, the spherically symmetric line element takes the form: ds^2= 2_μνdx^μ dx^ν + R^2 (dΩ)^2, μ , ν =1,2. Here R is the areal radius and (dΩ)^2 is the line element on the unit round 2 sphere which in polar coordinates (θ, ϕ) is (dθ)^2 + sin^2θ (dϕ)^2. The space of orbits of the rotational killing fields is 2 dimensional. The pull back of the 4- metric to this 2 dimensional `radial-time' space is the Lorentzian 2-metric 2. The areal radius R depends only on coordinates on this 2d spacetime and not on the angular variables. Choosing these coordinates {x^μ} to be along the radial outgoing and ingoing light rays and denoting these coordinates by (x^+,x^-) puts the 2-metric in conformally flat form: 2_μνdx^μdx^ν = -e^2ρdx^+ dx^- = e^2ρ(-(dt)^2 + (dx)^2) where we have set x^±= t± x The areal radius R is a function only of (x^+, x^-). The area of a spherical light front at fixed x^+,x^- is 4π R^2. Hence, outgoing/ingoing expansions of spherical light fronts are proportional to ∂_+R, ∂_-R. In particular, a spherical outer marginally trapped surface located at fixed x^+,x^- is defined by the conditions ∂_+R= 0, ∂_-R <0 As indicated in the Introduction we restrict attention to the case in which the axis of symmetry is a timelike curve located within the 4d spacetime. Hence the axis is located at x^+=F(x^-), with dF/dx^- >0. By using the conformal freedom available in the choice of our conformal coordinates, we can choose F(x^-) to be our new x^- coordinate. With this choice, the axis is located along the straight line x^+=x^- ≡ x=0. Next, we require that the 4- metric is asymptotically flat as x^- →-∞ so that past null infinity, ^-, is located at x^- = -∞. In conformal coordinates the detailed fall off conditions at ^- turn out to be: R = x^+- x^-/2 + O(1/x^-) e^2ρ= 1 + O(1/(x^-)^2) As we shall see in section <ref>, the Vaidya solution satisfies these conditions and in this solution the mass information is contained in the O(1/x^-) part of R and the O(1/(x^-)^2) part of e^2ρ. It is immediate to see that these conditions fix the conformal freedom in the choice of the x^+ coordinate. 
Since the freedom in the choice of the x^- coordinate has been fixed by locating the axis at x=0, we have exhausted all freedom in the choice of coordinates. To summarise: The region of interest for us in this paper is the x≥ 0 part of the Minkowskian plane with ^- located at x^-=- ∞ and the axis at x=0. Each point (t,x) on this half plane represents a 2 sphere of area 4π R^2(t,x) with R vanishing along the axis of symmetry at x=0, this axis serving as a boundary of the region of interest. Finally, recall from the Introduction that the axis with R=0 is distinguished from the expected classical singularity at R=0 by virtue of the geometry at the axis being non-singular. In the specific classical and semiclassical spacetime solutions which we study in this work, the geometry in a neighborhood of the axis will turn out to be flat (and hence non-singular). The physical spacetime geometry in these solutions will occupy a subset of the half (t,x) plane due to the occurrence of singularities (in the classical and semiclassical solutions) or Cauchy horizons (in the semiclassical solutions). For details see sections <ref>,<ref>. §.§ Matter The matter is a spherical symmetric scalar field f (t,x). Note that at the axis, R=0 so that ∂_t R=0 there. The requirement that the geometry near the axis is non-singular together with the assumption that x^± are good coordinates for the 2d geometry implies that ∂_xR=e^ρ at the axis (see Appendix <ref> for details). This ensures that at the axis (t,R) is a good chart. Recall that the axis is a line in the 4d spacetime. We require that f be differentiable at the axis from this 4d perspective. In particular consider differentiability along a t= constant radial line which starts out at, say, R>0 and moves towards the axis along a trajectory which decreases R. Once it moves through the axis, R starts increasing again. Differentiability of f at the axis then demands that -∂ f/∂ R|_R=0 = +∂ f/∂ R|_R=0 which in turn implies that ∂ f/∂ R|_R=0=0. Reverting to the (t,x) coordinates this implies that ∂ f/∂ x|_t,x=0=0 which in (x^+, x^-) coordinates takes the `reflecting boundary condition' form at the axis: ∂ f/∂ x^+|_t,x=0= ∂ f/∂ x^-|_t,x=0 In addition to these boundary conditions we demand that f be of compact support on ^-. Finally, we require that f satisfies the following condition at ^-. Define 1/2∫_x^+_i^x^+_f(∂_+ f(x^+, x^-→ -∞))^2 = m(x^+), where f is supported between x^+_i and x^+_f >x^+_i on ^-. We require that f be such that: lim_x^+→ (x^+_i)^+m(x^+)/x^+- x^+_i >1/16 where the limit is to be taken as x^+ approaches x^+_i from the right (i.e. x^+>x^+_i). As we shall see, condition (<ref>) ensures that the prompt collapse Vaidya spacetime is a classical solution (by the prompt collapse Vaidya spacetime we mean one in which the singularity is neither locally nor globally naked <cit.>). 
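A simple illustrative choice of data satisfying this condition (any profile with the same leading-edge behavior would do equally well) is one whose derivative is constant immediately after the leading edge: ∂_+ f(x^+, x^-→ -∞)= c for x^+_i< x^+< x^+_i+δ, for some δ>0 and a nonzero constant c. Then m(x^+)= c^2 (x^+-x^+_i)/2 on this interval, the limit in (<ref>) equals c^2/2, and the prompt collapse condition (<ref>) is satisfied precisely when c^2> 1/8.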
§ CLASSICAL DYNAMICS §.§ Action The action for the spherically symmetric 4-metric is the Einstein Hilbert action: S_geometry= 1/8π G∫ d^4x√(-)R Assuming spherical symmetry, integrating over angles and dropping total derivative terms (in our analysis we have ignored the issue of the addition of suitable boundary terms to (<ref>) so as to render the action differentiable), we obtain: S_geometry= 1/2Gκ^2∫ d^2x√(-2)R^2[R+ 2(∇ R/R)^2 +2 κ^2] where, in order to facilitate comparison with 2d gravity conventions in the literature, hereon we set R^2= R_old^2/κ^2 where R_old is the dimensionful areal radius hitherto referred to as R and κ is an arbitrarily chosen (but fixed) constant with dimensions of length. We shall refer to the dimensionless field R also as the Areal Radius. [In 2d gravity, R^2=: e^-2ϕ where ϕ is called the dilaton.] The matter coupling is chosen to depend on the Areal Radius R so that the matter action is: S_matter= -1/8π∫ d^4x√(-)^ab1/κ^2R^2 (∇_a f∇_b f) where f is spherically symmetric and hence angle independent. Integrating over angles, we obtain: S_matter= -1/2∫ d^2x √(-2) (∇ f)^2 so that the Areal Radius dependent 4d coupling of f to the metric in (<ref>) reduces to 2d conformal coupling to the metric 2 in (<ref>). The total action is then: S = S_geometry+ S_matter = 1/2Gκ^2∫ d^2x√(-2)R^2[R+ 2(∇ R/R)^2 +2 κ^2] -1/2∫ d^2x √(-2) (∇ f)^2 In what follows it is convenient to choose units such that G=κ =1, in addition to our choice of c=1 made at the end of the Introduction. Note that ħ is not set to unity. §.§ Dynamical equations In what follows we shall often employ the obvious notation ∂ A/∂ x^±≡∂_±A ≡ A,_± for partial derivatives of a function A. Also, by G_Ω̂Ω̂ below we mean the component G_abΩ̂^a Ω̂^b of the tensor G_ab where Ω̂^a is a unit vector tangent in the direction of a rotational Killing vector field (e.g. in polar coordinates we could choose Ω̂^a = R^-1 (∂/∂θ)^a). Since the action for the geometry is the Einstein-Hilbert action, the equations of motion which follow from (<ref>) are just the Einstein equations for a spherically symmetric metric coupled to the matter field f i.e. the equations take the form G_ab= 8 π T_ab where G_ab is the Einstein tensor for the spherically symmetric 4-metric (<ref>), (<ref>): -e^2ρ/4 G_Ω̂Ω̂ = ∂_+∂_- ρ + 1/R∂_+∂_-R=0 R^2 G_+- = 2R∂_+∂_-R + 2 ∂_+R∂_-R + 1/2 e^2ρ=0 R^2 G_±± = R^2[-2/R( ∂_±^2R- 2∂_±ρ∂_±R)]= (∂_±f)^2 The remaining components of the Einstein tensor vanish as a result of spherical symmetry. From (<ref>)- (<ref>), the only non-vanishing components of T_ab are T_±± = 1/4π R^2(∂_±f)^2/2. Since the matter is conformally coupled, it satisfies the free wave equation on the fiducial flat x^+,x^- spacetime. Explicitly, varying f in the action (<ref>) yields: ∂_+∂_-f =0 §.§ Classical Solution: Vaidya spacetime Since f satisfies the free 1+1 wave equation on the fiducial flat spacetime, solutions take the form of the sum of left and right movers: f(x^+,x^-) = f_(+)(x^+) + f_(-)(x^-). Since the solution (<ref>) has to satisfy reflecting boundary conditions (<ref>) at the axis, it follows that: ∂_+f_(+)(x^+)|_x^+=t = ∂_-f_(-)(x^-)|_x^-=t ∀ t We shall restrict attention to f_± of compact support in their arguments. Equation (<ref>) then implies that: f_(+)(y)= f_(-)(y) ∀ y The stress energy tensor (<ref>) for the solution (<ref>) takes the form of a pair of (infalling and outgoing) spherically symmetric null dust streams. If there was only an infalling stream, the stress energy would be exactly of the form appropriate to the Vaidya solution.
Note however that: (i) If there is only a single infalling stream with f satisfying the condition of prompt collapse (<ref>), the resulting Vaidya solution exhibits the following feature: As soon as the first strand of matter hits the axis a spacelike singularity forms (see fig <ref>). (ii) The solution (<ref>) satisfies reflecting boundary conditions (<ref>) at the axis. This means that each strand of the null infalling stream hits the axis and is reflected to an outgoing null stream. Since the singularity of (i) is spacelike, the outgoing stream is `above' the singularity (see fig <ref>). Hence in the physical spacetime solution we have only the infalling stream. From (i) and (ii) above, a solution to the classical equations (<ref>)- (<ref>) is the Vaidya solution with stress energy tensor T_++ = 1/4π R^2(∂_+f)^2/2. Since the spacetime geometry in this solution is flat in a finite neighborhood of the axis, the Vaidya solution satisfies our axis requirements. As shown below, it also satisfies the initial conditions at ^-. Hence it is an acceptable solution. The Vaidya solution is usually presented in Eddington Finkelstein coordinates (v, R) whereas here we use null coordinates. The relation between the Eddington Finkelstein (EF) and null coordinates is as follows (the reader may find it easier to follow our argumentation below by consulting the Penrose diagram for the Vaidya spacetime depicted in Figure <ref>. Consider the Vaidya solution for a mass profile m(v) at ^-. In EF coordinates (v,R) the 2-metric is: ^(2)ds^2= -(1- 2m(v)/R)(dv)^2 + 2dvdR with constant v radial lines being null and ingoing. These ingoing light rays originate at ^- of the Vaidya spacetime where R→∞. These light rays `reflect' off the axis and become outgoing. Since every outgoing ray originates at the axis as the reflection of a uniqie incoming ray, we can uniquely label each outgoing ray by the value of v= v_axis:=u at this origin point on the axis. Thus constant u light rays are outgoing, constant v light rays are incoming and every point in Vaidya spacetime is uniquely located as the intersection of a pair of such rays. This implies that u,v are null coordinates. We now show that the identifications v≡ x^+, u ≡ x^- hold. From (<ref>) it follows that on an outgoing light ray, R changes as a function of v according to 2dR/dv = (1- 2m(v)/R) . Consider the outgoing ray which starts from the axis at v=v_axis. As discussed above, we set v_axis=u. Since R=0 at the axis, we may integrate (<ref>) to obtain, for the trajectory of this ray: 2R(v,u) = ∫_u^v dv̅ (1- 2m(v̅)/R(v̅,u)). Let the support of m(v) start on ^- at v=v_i. Then for v<v_i, (<ref>) implies that: R(v,u) = v- u/2 so that the axis lies at: v=u Note that we can rewrite (<ref>) as: 2R(v,u) = v_i- u + ∫_v_i^v dv̅ (1- 2m(v̅)/R(v̅,u)). In this form it is clear that the integrand (and hence the equation) is well defined everywhere except at the R=0 singularity. Next, note that since ^- is approached as R→∞ along constant v, it follows from (<ref>) that near ^-: R(v,u) = v- u/2 + O(1/R) so that ^- is approached as u→ -∞. 
In this limit, (<ref>) implies that: R(v,u)= v- u/2 + O(1/u) Next, note that setting R= R(v,u) in (<ref>), we have ^(2)ds^2= -(1- 2m(v)/R) (dv)^2 + 2dv( R,_vdv +R,_udu)= [-(1- 2m(v)/R) +2 R,_v](dv)^2 + R,_u2dvdu which in conjunction with (<ref>) implies that in these coordinates the conformal factor e^2ρ is given by: 2R,_u =: -e^2ρ From (<ref>) and (<ref>) it follows that -2R,_u = 1 + O(1/u^2) :=e^2ρ Note that from (<ref>) we have that: (R,_u),_v= m(v)/2R^2 R,_u . Since from (<ref>) R,_u <0 at the axis , (<ref>) ensures that R_u remains negative on every outgoing ray, and hence, negative everywhere so that the identification (<ref>) is consistent with the postivity of e^2ρ. The above analysis shows that v,u are well defined null coordinates for which the axis conditions (<ref>) and `initial' conditions (<ref>), (<ref>) are satisfied. Further, the geometry in the vicinity of the axis is flat and hence non-singular. Hence we may identify v with x^+ and u with x^-, and (from the definition of m(v) for Vaidya spacetime) the mass function m(v) as: m(v)= m(x^+)= ∫_x^+_i^x^+ dx̅^+(∂_+f)^2/2 identical to (<ref>) where the support of f(x^+) in the above equation is between v_i≡ x^+_i and x^+_f. As a final consistency check, note that from (<ref>), (<ref>) we have that 2ρ,_+= m(x^+)/2R^2, which in conjunction with (<ref>), (<ref>) and (<ref>) suffice to verify that equations (<ref>)-(<ref>) are satisfied. § THE SEMICLASSICAL THEORY §.§ Quantization of the matter field Since the matter field satisfies the free wave equation on the fiducial flat spacetime subject to reflecting boundary conditions at x=0 it can be quantized with mode expansion: f̂(x^+, x^-) = ∫_0^∞ dk cos kx /√(π k) (â(k) e^-ikt + â^†(k) e^ikt). Defining f̂_(+)(x^+) := ∫_0^∞ dk 1/√(4π k) (â(k) e^-ikx^+ + â^†(k) e^ikx^+ ) f̂_(-)(x^-) := ∫_0^∞ dk 1/√(4π k) (â(k) e^-ikx^- + â^†(k) e^ikx^- ) we may rewrite (<ref>) as f̂(x^+, x^-) =f̂_(+)(x^+) + f̂_(-)(x^-). Note that the operator valued distribution f̂_(+) is the same `operator valued function' of its argument as f̂_(-)(x^-). This is exactly the quantum implementation of the reflecting boundary condition (<ref>) The mode operators â(k),â^†(k) provide a representation of the classical symplectic structure which follows from the matter action (<ref>) so that the only non-trivial commutation relations are the standard ones: [â(k),â^†(l)]= ħδ (k,l), which are represented via the standard Fock space representation so that the Hilbert space H_Fock is the standard Fock space generated by the action of the creation operators â^†(k) on the Fock vacuum. This quantization may be used to define a test quantum field on the classical Vaidya solution, or to define a quantum field on a general spherically symmetric metric of the form (<ref>) or, as we propose in section <ref>, to define a quantization of the true degrees of freedom of the combined matter-gravity system. If we use it to define a 4d spherically symmetric test quantum field (coupled to the 4 metric as in (<ref>), hence conformally coupled to the 2-metric 2) on the Vaidya spacetime, one can put the test scalar field in its vacuum state at ^- and ask for its particle content as experienced by inertial observers at ^+. [ From (<ref>), the x^± coordinate frame is freely falling at ^-. Hence the Fock vacuum in H_Fock is the vacuum state for freely falling observers at ^-.] A straightforward calculation along the lines of Hawking's <cit.> leads to the Hawking effect i.e. 
the state at ^+ exhibits late time thermal behavior at Hawking temperature 1/8π M. The calculation is simpler than Hawking's as, due to the 2d conformal coupling, there is no scattering of particles off the spacetime curvature and hence no non-trivial grey body factors. If we use the quantization to define a 4d spherically symmetric quantum field (coupled to the 4 metric as in (<ref>), hence conformally coupled to the 2-metric 2) on a general spherically symmetric metric (<ref>), (<ref>), we can compute its stress energy tensor expectation value using the results of Davies and Fulling <cit.>. Note that since the axis serves as a reflecting boundary and since its trajectory is that of a straight line in the inertial coordinates of the fiducial flat spacetime, the results of Reference <cit.> can be directly applied. Recall from <cit.> that in the case that the initial state at x^-→ -∞ is a coherent state in H_Fock modelled on a classical field f, the vacuum contribution to the stress energy expectation value gets augmented by the classical stress energy of f. Recall (see footnote <ref>) that the x^± coordinates are freely falling at ^- so that the initial state is a coherent state as seen by freely falling observers at ^-. Putting all this together we have, from <cit.> that the only non-trivial components of the 4d stress energy expectation value are given through the expressions 8π R^2<T̂_+-> = -ħ/12π∂_+∂_-ρ 8π R^2 <T̂_±±> = (∂_±f)^2 - ħ/12π ((∂_±ρ)^2 - ∂_±^2 ρ) The factors of 8π R^2 come from the definition of the 4d stress energy (see (<ref>)). In general the expressions in (<ref>) would be augmented by functions t_±(x^±) which are state dependent. Here, these vanish because the mode expansion (<ref>) is defined with respect to the x^± coordinates <cit.>. §.§ Semiclassical Equations The semiclassical Einstein equations find their justification in the large N approximation <cit.>. Accordingly we couple N scalar fields exactly as in (<ref>), quantize each of them as in the previous section, put one of them in a coherent state modelled on f[ A function f can be uniquely characterised by its mode coefficients if its Fourier transformation is invertible. In a coherent state, the Fourier mode coefficient of every positive frequency mode is is realised as the eigen value of the corresponding mode operator. Functions f of interest are of compact support at ^- and satisfy the prompt collapse conditon (<ref>) so that the function is not smooth at its initial support (its first derivative is discontinuous). Nevertheless the function is absolutely integrable and can be chosen to be of bounded variation whereby its Fourier transform is invertible (see Appendix <ref> for details).] and the rest in their vacuum states at ^-. From (<ref>), (<ref>) and (<ref>)-(<ref>), it then follows that the semiclassical Einstein equations, G_ab= 8π⟨T|_ab|$⟩ take the form: -e^2ρ/4G_Ω̂Ω̂ = ∂_+∂_- ρ + 1/R∂_+∂_-R=0 R^2 G_+- = 2R∂_+∂_-R + 2 ∂_+R∂_-R + 1/2 e^2ρ=-Nħ/12π∂_+∂_-ρ R^2 G_±± = R^2[-2/R( ∂_±^2R- 2∂_±ρ∂_±R)]= (∂_±f)^2 - Nħ/12π ((∂_±ρ)^2 - ∂_±^2 ρ) §.§ Semiclassical Singularity We are interested in semiclassical solutions in which the axis is located atx=0, the axial geometry is non-singular and for which the asymptotic conditions (<ref>) hold. Note that whenρ=0the vacuum fluctuation contribution to the stress energy expectation value vanishes. Thus, whenfvanishes, classical flat spacetime (withe^2ρ=1, R= x^+-x^-/2) remains a solution. 
Hence for x^+ <x^+_i we set the spacetime to be flat with e^2ρ=1, R= (x^+-x^-)/2. Next, note that we can eliminate ∂_+∂_- ρ between the first two equations to obtain: (1/R)∂_+∂_-R = - [∂_+R∂_-R + (1/4) e^2ρ]/[R^2- Nħ/24π] Following Lowe <cit.> and Parentani and Piran <cit.>, we can look upon (<ref>), (<ref>) as evolution equations for initial data (i) on the null line x^+= x^+_i and (ii) on ^- for x^+≥ x^+_i. For (i), the initial data is given by (<ref>). For (ii), the matter data is subject to (<ref>) and the gravitational data corresponds to that for the Vaidya solution with m(x^+) given by (<ref>) and R, ρ obtained by integrating (<ref>) along ^- and then using (<ref>). More in detail, at x^+=x^+_i, we have data near ^- of the form (<ref>). Equation (<ref>) can be integrated along ^- with this initial data for R to obtain R along ^- for x^+≥ x^+_i, and e^2ρ can then be obtained near ^- from (<ref>). It can then be shown that the evolution equations can be solved uniquely for R, ρ in the region x^+ ≥ x^+_i as long as the evolution equations themselves are well defined. From a numerical evolution point of view <cit.> one can see this as follows. Along x^+=x^+_i, equation (<ref>) can be viewed as a first order differential equation for R,_+ on x^+=x^+_i with `initial' value for R,_+ specified on ^- and known coefficients from (<ref>). The solution R,_+ (x^+=x^+_i, x^-), together with the initial value for ρ,_+ on ^-, can be used to solve (<ref>) for ρ,_+ on the line x^+=x^+_i. From this one has data ρ, R for the next x^+= constant line on the numerical grid, and the procedure can be iterated so as to eventually cover all of x^+>x^+_i. We now argue that for generic matter data the evolution equations break down at R^2= Nħ/24π and a curvature singularity develops. In this regard note that the denominator of the right hand side of (<ref>) blows up at R^2= Nħ/24π. If the numerator is non-zero at this value of R, the left hand side blows up and through (<ref>) so does ∂_+∂_-ρ. Since (as can be easily checked) the 2d scalar curvature is 8 e^-2ρ ∂_+∂_-ρ, we expect a 2-curvature singularity at this value of R^2. If the numerator vanishes at some x^+=a^+ >x^+_i, x^-= a^- where R^2(a^+,a^-)= Nħ/24π, one can slightly change the initial data for f on ^-, thereby change the function R along ^- (for x^+>x^+_i) and hence the `initial' data R,_+ for (<ref>) at x^+=a^+ on ^-. This would generically result in a change of the numerator away from zero. Thus one expects that for generic matter data there is a singularity at R^2= Nħ/24π. Thus the `initial point' of the Vaidya singularity of the classical theory moves `downwards' along the initial matter infall line x^+=x^+_i, away from the axis where R=0 to R= √(Nħ/24π) (see Figure <ref>). §.§ Outer Marginally Trapped Surfaces One possible quasilocal characterization of a black hole is the existence of outer marginally trapped surfaces (OMTS's) <cit.>. In this section we analyse the behavior of spherically symmetric OMTS's in the context of the system studied in this work. To this end, fix an R=constant 2-sphere. Let θ_+ and θ_- denote the expansions of the outward and inward future pointing radial null congruences at this sphere. The sphere is defined to be an OMTS if θ_+=0, θ_-<0. A straightforward calculation yields: θ_±= 2 e^-2ρ ∂_±R/R Since the physical spacetimes considered in this work are flat near the axis, such OMTS's can only form in these solutions away from the axis where R>0.
Hence (assuming we are away from singularities), the conditions for an OMTS to form are: ∂_+R = 0, ∂_- R < 0. While an OMTS is a quasilocal characterization of a black hole at an `instant of time' and hence a 2-sphere, the quasilocal analog of the 3d event horizon is a 1-parameter family of OMTS's which form a tube, which we call an Outer Marginally Trapped Tube (OMTT). The shape of a spherically symmetric OMTT (i.e. a tube foliated by spherically symmetric OMTS's) can be studied as follows. Since (<ref>) holds, the normal n_a to the OMTT is: (n_+, n_-) = (∂_+^2R, ∂_-∂_+R) = (-4π R ⟨T̂_++⟩, -Re^2ρ/[4(R^2- ħ N/24π)]) ⇒ n^an_a = -4e^-2ρ [4π R ⟨T̂_++⟩ · Re^2ρ]/[4(R^2- ħ N/24π)] = -4π R^2 ⟨T̂_++⟩/(R^2- ħ N/24π) where we have used the `++' equation in (<ref>) with (<ref>) to calculate n_+ and (<ref>) with (<ref>) to compute n_- in (<ref>). From (<ref>), n_an^a is timelike, spacelike or null if ⟨T̂_++⟩ is, respectively, positive, negative or vanishing, so that the OMTT is, respectively, spacelike, timelike or null. Following <cit.>, we coordinatize the trajectory of the spherically symmetric OMTT by x^+ and study how x^- changes with x^+ along this trajectory. Since ∂_+R vanishes along this trajectory we have that: d∂_+ R/dx^+ = ∂_+^2R + (dx^-/dx^+)∂_-∂_+R = 0 ⇒ dx^-/dx^+ = -∂_+^2R/∂_-∂_+R = -(16π e^-2ρ (R^2- ħ N/24π)) ⟨T̂_++⟩ where we have used (<ref>). Equation (<ref>) leads us to the same correlation between positivity properties of the stress energy and the spacelike, timelike or null nature of the OMTT as above. Next, note that on the OMTT: dR/dx^+ = ∂_+R + (dx^-/dx^+)∂_-R = (16π e^-2ρ (R^2- ħ N/24π)) (-∂_-R) ⟨T̂_++⟩ where we have used (<ref>) together with (<ref>). From (<ref>) it follows that R (and hence the area of the OMTS cross section of the OMTT) increases, decreases or is unchanged if, respectively, ⟨T̂_++⟩ is positive, negative or null. The above set of results corresponds to those of <cit.> in the simple spherically symmetric setting of our work. Let us apply them to the following physical scenario. For a large black hole, we expect a low Hawking temperature and a low rate of thermal emission. In the context of the system studied here, consider a black hole of mass M with M^2 >> Nħ [Restoring factors of G, this reads, in units where c=1, as (GM)^2 >> Nħ G.] (as we shall see in section <ref>, the Hawking emission in a QFT in CS calculation goes as Nħ M^-2 at ^+). Let us assume that the collapse lasts for a small duration (i.e. x^+_f -x^+_i << GM) during which classical infall dominates quantum back reaction at large R (including at R∼ M). Once the collapse is over, we expect the black hole to start radiating slowly. We can estimate the local rate of mass loss due to this radiation by assuming the geometry at this epoch is well approximated by the classical Vaidya geometry. More precisely, let us assume that the quantum radiation starts along the line x^+=x^+_f at the 2-sphere at which the event horizon R=2M intersects this line. Since this 2-sphere is an OMTS in the Vaidya spacetime, within our approximation we may apply (<ref>) to estimate the rate of change of area of this OMTS with the right hand side calculated using the Vaidya geometry: dR/dx^+ = (16π e^-2ρ (R^2- ħ N/24π)) (-∂_-R) ⟨T̂_++⟩ ≈ (-2∂_-R e^-2ρ) R^2 8π⟨T̂_++⟩ ≈ -(Nħ/12π) ((∂_+ρ)^2 - ∂_+^2 ρ) where in the second line we used the large black hole approximation (M^2 >> Nħ) and in the third we used the property (<ref>) of the Vaidya spacetime together with equation (<ref>). Using (<ref>) with R=2M, we have ∂_+ρ = 1/(8M) and, using (<ref>) together with (<ref>), we have (∂_+)^2 ρ =0.
Putting this in (<ref>) and setting R=2M on the left hand side, we obtain dM/dx^+ = -(Nħ/24π) (1/64M^2) Remarkably this agrees with the rate of mass loss obtained at ^+ (see (<ref>) of section <ref>). This agreement of quasilocal mass loss with that at ^+ for large black holes also seems to happen for the case of CGHS black holes <cit.>. We do not understand the deeper reason behind this agreement. §.§ The semiclassical spacetime solution: folding in results from prior numerics While the semiclassical equations do not seem amenable to analytical solution, the particular semiclassical solution of interest, with flat geometry in a finite region around the symmetry axis, is amenable to numerical solution along the lines reviewed in section <ref>. While we advocate a careful numerical study along the lines of <cit.>, there are two prior numerical works, by Lowe <cit.> and by Parentani and Piran <cit.>, which are of relevance. While these beautiful works are not cognizant of key aspects of the coherent picture developed in this work, the semiclassical equations they solve are practically the same as those in this work and they provide a key complementary resource to our work here. The work by Lowe <cit.> uses exactly the same action (<ref>) (modulo some overall numerical factors) and hence obtains the same semiclassical equations (modulo some numerical factors). Since the importance of the axis and the axis reflecting boundary conditions for the matter field is not realised, the state dependent functions t_±(x^±) (see the end of section <ref>) are not pre-specified but chosen in accord with the physical situation which is modelled. The `classical' component of matter is chosen to be a shock wave with a Dirac delta stress energy along its infall line (from our point of view the Dirac delta function ensures that the prompt collapse condition (<ref>) is satisfied). Along the infall line the data for R, ρ is chosen as R=(x^+_i-x^-)/2, ρ=0. Initial conditions at ^- beyond the point of matter infall are specified which correspond to the asymptotic behavior of the Schwarzschild solution. These suffice for well defined numerical evolution as described in section <ref>. One difference with our work here is that these conditions, by virtue of the presence of logarithmic terms in metric fall offs at ^- <cit.>, do not agree with the conditions (<ref>). We believe, contrary to the implicit assertion in Reference <cit.>, that a continuation of the data of <cit.> on x^+=x^+_i to flat spacetime data R=(x^+-x^-)/2, ρ=0 for x^+<x^+_i, is in contradiction with the behavior of the Vaidya solution near ^-. While it would be good to clarify whether such a continuation is consistent with the Einstein equations near ^-, this is beyond the scope of our paper. Notwithstanding this, we shall assume that the physics which emerges from the numerical results of <cit.> is robust enough that it applies to the system studied in this work. Lowe <cit.> notes the existence of a spacelike semiclassical singularity and the emanation of an OMTT at the infall line. [Following the approach of Reference <cit.>, we have integrated the evolution equation (<ref>) just beyond the infall line of a shock wave, used junction conditions consistent with our asymptotic behavior (<ref>), and verified the existence of the semiclassical singularity at R^2= ħ N/24 and the emanation of an OMTT on the infall line when R^2= 4M^2+ ħ N/24.] Since the classical matter is a shock wave with no extended support, quantum backreaction starts immediately and the OMTT is timelike.
The OMTT and the singularity meet away from^+in the interior of the space time. The outgoing future pointing radial null rays starting at this intersection form a Cauchy horizon. There is no evidence of a `thunderbolt' along this `last' set of null rays to^+. This `outer' Cauchy horizon is in addition to the `inner' Cauchy horizon which forms along the infall line beyondR^2= ħ N/24. Parentani and Piran <cit.> define the semiclassical equations without recourse to action based arguments by positing the stress energy tensor to be the sum of a classical part and a quantum back reaction part. The former is posited to be of the null dust type infall appropriate to Vaidya. The profile of the dust is chosen to be a Gaussian but in the numerics we are unable to discern if its tail is cut off and if so whether, effectively, the prompt collapse condition (<ref>) is satisfied. While the work explicitly recognizes the existence of an axis, its import for the reflecting boundary conditions in the quantization of the scalar field (see section <ref>) is not recognized. The fact that the classical solution is Vaidya and that for a Gaussian profile which is not one of prompt collapse, the classical singularity structure is complicated <cit.> is not appreciated. [This brings up the extremely interesting question: do back reaction effects cause the complicated (locally/globally) naked singularity structure of the classical solution for non-prompt collapse to simplify?] The quantum contribution to the stress tensor is chosen to be of exactly the form in (<ref>)-(<ref>) without the realization that it could arise naturally through quantization of an appropriately chosen classical scalar field as shown in this work. The solution chosen is, by virtue of ignoring the tail contributions of the Gaussian, in practice flat in a finite neighborhood of the axis so that the dynamical and constraint equations and set up for numerical evolution are exactly the same as Lowe. Since the set up is numerical, initial conditions are at large but finitex^-=x^-_Irather than at^-. A coordinate choice ofx^+which agrees with ours forx^+<x^+_ibut differs from ours elsewhere is made. This choice depends onx^-_Iand asx^-_I→ -∞approaches ours. We shall assume that the basic physics is robust with regard to the difference in these choices. Parentani and Piran note the existence of a spacelike semiclassical singularity atR^2= ħ N/24and a OMTT which is spacelike as long as classical matter infall dominates after which it turns timelike and meets the singularity away from^+. Similar to <cit.> a Cauchy horizon then forms. Interestingly, the quantum flux at^+starts out as thermal flux at temperature inversely proportional to the initial mass, its mass dependence being∼ M^-2as expected. However at late stages of evaporation, near the intersection of^+with the Cauchy horizon, where the Bondi mass gets small, the flux turns around to a less divergent function of this small mass . We take this as evidence for lack of a thunderbolt. Putting together (i) the analytical work of sections <ref>, <ref>, (ii) the physical intuition that the initial part of spacetime is dominated by classical collapse followed by quantum radiation and (iii) the beautiful numerical work of References <cit.>, we propose the Penrose diagram in Fig <ref> as a description of the semiclassical spacetime geometry. § A BALANCE LAW AT ^+ We make two assumptions regarding the asymptotic behavior of the metric at^+: A1. 
We expect that at early times at ^+ back reaction effects have not built up and that the classical ingoing Vaidya solution discussed in section <ref> is a good approximation to the spacetime geometry. A2. We expect that eventually back reaction effects build up and produce a non-trivial stress energy flux at ^+. Since the system is spherically symmetric, we assume that this situation can be modelled by an outgoing Vaidya metric all along ^+. Thus the metric near ^+ is assumed to take the form: ^(2)ds^2= -(1- 2m_B(ū)/R)(dū)^2 - 2dū dR + O(1/R^2). Here R→∞, ū is an Eddington-Finkelstein null coordinate and the subscript B on the outgoing mass indicates that this mass is the Bondi mass. From A1, at early times, ^+ is located at x^+=∞. Since ^+ is null, we shall assume that at all times it is located at x^+= ∞. Since ū is an outgoing null coordinate, the 2-metric (<ref>) near ^+ can be expressed in conformally flat form in the coordinates x^+, ū as: ^(2)ds^2= -e^2ρ̄ dx^+ dū Similar to (<ref>), to leading order in 1/R, it follows that ∂ρ̄/∂ū (x^+, ū) = O(1/R^2), ∂^2ρ̄/∂ū^2 (x^+, ū) = O(1/R^2) Since ū, x^- are both outgoing null coordinates, the coordinate ū is a function only of x^- and not of x^+. Using this fact together with the `–' constraint (<ref>), the asymptotic form (<ref>) and the behavior of the conformal factor (<ref>), it is straightforward to show that at ^+ in the limit R→∞: -(1/2)R^2 G_ūū = dm_B/dū = -4π R^2 ⟨T̂_ūū⟩ = -(1/2)(∂_ū f)^2 + (Nħ/48π)(1/ū^')^2[ (3/2)(ū^''/ū^')^2 - ū^'''/ū^'] where each `'' superscript signifies a derivative with respect to x^-, so that, e.g., ū^' := dū/dx^-. Thus we have derived a balance law relating the change of Bondi mass (with respect to the asymptotic translation in ū along ^+) to the energy flux at ^+: dm_B/dū = -(1/2)(∂_ū f)^2 + (Nħ/48π)(1/ū^')^2[ (3/2)(ū^''/ū^')^2 - ū^'''/ū^'] := - F The energy flux F has a `classical' part F^classical (about which we shall comment in the next section) corresponding to the first term on the right hand side of (<ref>) and a quantum backreaction part F^quantum corresponding to the rest of the right hand side of (<ref>): F = F^classical + F^quantum, F^classical = (1/2)(∂_ū f)^2, F^quantum = -(Nħ/48π)(1/ū^')^2[ (3/2)(ū^''/ū^')^2 - ū^'''/ū^'] While the classical piece is explicitly positive definite, this property does not hold for the quantum piece. However, following <cit.>, we can rewrite this quantum part as: F^quantum = -(Nħ/48π)(1/ū^')^2[ (3/2)(ū^''/ū^')^2 - ū^'''/ū^'] = d/dū [(Nħ/48π) ū^''/( ū^')^2] + (Nħ/96π)(ū^'')^2/( ū^')^4 . Using (<ref>) we may rewrite (<ref>) as: d/dū[ m_B + (Nħ/48π) ū^''/( ū^')^2] = -(1/2)(∂_ū f)^2 - (Nħ/96π)(ū^'')^2/( ū^')^4 . The right hand side of (<ref>) is now explicitly negative definite. Equation (<ref>) suggests that we identify the term in square brackets on its left hand side as a back reaction corrected Bondi mass m_B,corrected, m_B,corrected:= m_B + (Nħ/48π) ū^''/( ū^')^2, which decreases in response to the outgoing positive definite back reaction corrected flux F_corrected received at ^+: F_corrected := (1/2)(∂_ū f)^2 + (Nħ/96π)(ū^'')^2/( ū^')^4 . The form of the back reaction corrected balance law (<ref>) suggests that black hole evaporation ceases when the corrected Bondi mass m_B,corrected (<ref>) is exhausted, at which point the corrected flux F_corrected (<ref>) also vanishes. For F_corrected to vanish, both its `classical' and quantum contributions must vanish separately since both are positive definite.
In particular the quantum contribution F^quantum_corrected must vanish so that: F^quantum_corrected:= Nħ/96π(u̅^'')^2/( u̅^')^4 =0 Assuming that this happens smoothly, it must be the case that: u̅^''=0 which implies that $̆ is a linear function ofx^-: =̆ ax^- +b for some constantsa,b. Since$̆ is a future pointing null coordinate, we have that ^̆'>0 and hence, a>0. This implies that as →̆∞, x^-→∞ which means that ^+ of the physical spacetime is `as long' as that of the fiducial Minkowski spacetime. Note that in contrast the ^+ of Vaidya ends at x^-=x^-_H where x^-_H is the (finite) value of x^- at the horizon. In this sense ^+ of the physical spacetime is `quantum' extended beyond its classical counterpart. This is the main conclusion of this section. We discuss its possible implications in the next section where we also discuss the origin of the `classical' contribution to F. Before doing so, we note that it is possible to calculate the flux F (<ref>) at ^+ of the Vaidya spacetime with ρ corresponding to that of the Vaidya solution. Recall from section <ref> that in the Vaidya solution the outgoing classical flux is absent. As shown in the Appendix <ref>, the quantum flux F^quantum evaluates at late times on ^+ of the Vaidya spactime to: F^quantum= Nħ/24π(1/64M^2) We can also calculate the corrected quantum flux F^quantum_corrected F^quantum_corrected:= Nħ/96π(u̅^'')^2/( u̅^')^4 and as shown in Appendix <ref> this agrees with F^quantum at late times i.e. at late times, F^quantum_corrected= F^quantum= Nħ/24π(1/64M^2) Equation (<ref>) corresponds to the thermal Hawking flux measured at ^+ in the quantum field theory on curved spacetime approximation. § SPECULATIONS ON THE DEEP QUANTUM BEHAVIOR OF THE SYSTEM We propose that the true degrees of freedom of the system (<ref>) are those of the scalar field and that the gravitational degrees of freedom can be solved for in terms of specified matter data (classically) or when the quantum state of matter is specified (semiclassically and at the deep quantum gravity level). This proposal is supported by the fact that in the classical theory if we set the matter field to vanish, flat spacetime is the unique classical solution to equations (<ref>)- (<ref>) subject to asymptotic flatness at past null infinity (<ref>) as well as the condition that the axis of symmetry exist and be located at (<ref>). [We have checked this explicitly. The result may be interpreted as an implementation of Birkhoff's theorem.] Clearly, the proposal would be on a firmer footing if for the classical and semiclassical equations, we could prescribe precise boundary conditions on the geometry variables ρ,R at the axis together with the initial conditions (<ref>) such that a specification of matter data at ^- subject to reflecting boundary conditions at the axis (as discussed in section <ref>), results in a unique solution. While a complete treatment is beyond the scope of this paper, in anticipation of future work towards such a treatment, we initiate an analysis of possible boundary conditions at the axis in Appendix <ref> and comment on the complications which arise due to its timelike nature. Notwithstanding the remarks above, let us go ahead and assume that the true degrees of freedom at the classical level are those of the classical scalar field data at ^- and that, correspondingly, the true quantum degrees of freedom of the gravity-matter system are those of the quantum scalar field. 
This implies that the Hilbert space for the quantum gravity-matter system is the Fock space H_Fock constructed in section <ref> and that the natural arena for these degrees of freedom is the Minkowskian half plane x≥ 0. This assumption is supported by the considerations of section <ref> wherein we argued that the physical ^+ was as long as the fiducial Minkowskian ^+. More in detail, the starting point for this argument in section <ref> is an assumed validity of the semiclassical equations at ^+. These equations, via the arguments of <cit.>, relate the Einstein tensor of the expectation value of the metric to the expectation value of the stress energy tensor and are assumed to hold when the quantum fluctuations of the geometry are negligible. Hence, while the semiclassical equations are not expected to hold near the singularity where geometry fluctuations are expected to be significant, it seems reasonable to assume that they do hold near ^+. If they do hold near ^+ (assuming, of course, that the expectation value geometry is asymptotically flat), then the reasonable assumptions of section <ref> lead to the conclusion of a quantum extended ^+ which is as long as the fiducial Minkowskian ^+; this conclusion is supportive of the idea that the correct physical arena is the half Minkowskian plane. Note also that the proposed true degrees of freedom, namely those of the quantum scalar field, propagate on the fiducial flat spacetime by virtue of their 2d conformal coupling. Hence these degrees of freedom admit well defined propagation through the semiclassically singular region. The infalling quantum scalar field is reflected off the axis, transmuting thereby to the outgoing scalar field which registers on the quantum extension of ^+. This is the origin of the `classical' contribution (<ref>) to the asymptotic flux of section <ref>. Since quantum evolution of the true (matter) degrees of freedom of the system is well defined even at classically or semiclassically singular regions, one might hope that it is possible to define the action of operator correspondents of gravitational variables in these regions as well. In this sense one may hope that the deep quantum theory resolves the singularities of the classical/semiclassical theory. From the above, admittedly speculative, discussion we are led to the spacetime picture depicted in Figure <ref>. This picture is reminiscent of the Ashtekar-Bojowald paradigm <cit.> wherein gravitational singularities are assumed to be resolved by quantum gravity effects, the classical spacetime admits a quantum extension and quantum correlations with earlier thermal Hawking radiation emerge in this quantum extension. Since, in the spacetime picture of Figure <ref>, the quantum extended spacetime arena is exactly the Minkowskian half plane, we do expect the quantum state at its ^+ to be pure. However it is not clear if the state lies in the same Hilbert space H_Fock as the initial coherent state on ^-. Preliminary calculations suggest that if ū(x^-) is sufficiently smooth and approaches x^- as x^-→±∞ sufficiently fast, the relevant Bogoliubov transformation between freely falling modes at ^- and ^+ suffers from no ultraviolet divergences. There seem, however, to be infrared divergences. Infrared divergences are typical of massless field theory in 1+1 dimensions and require a more careful treatment <cit.>.
As indicated in <cit.>, it is possible that such a treatment may lead to the conclusion that it is only ultraviolet divergences which are an obstruction to the unitary implementability of the Bogoliubov transformation. If so, we would expect that under the above conditions on ū, not only is the quantum state on ^+ pure, it is also in the same Hilbert space H_Fock as the initial coherent state on ^-. Note that if a= 1 in (<ref>), then provided ū(x^-) is sufficiently smooth and approaches x^- as x^-→ -∞ sufficiently fast, the above discussion applies. For the case a≠ 1 we are unable to make any statement and we leave this case (as well as a confirmation of our preliminary calculations for the a=1 case) for future work. If the state at ^+ is not in H_Fock, we may still interpret it as an algebraic (and presumably pure) state from the perspective of the algebraic approach to quantum field theory <cit.>. § DISCUSSION In light of the discussion of section <ref>, we expect that the state at ^+ is pure. We expect that at early times the state at ^+ is a mixed state of slowly increasing Hawking temperature. Hence it is of interest to understand how this state is purified to one on the extended ^+. Directly relevant tools to explore this question have been developed in the recent beautiful work of Agullo, Calizaya Cabrera and Elizaga Navascues <cit.>. Envisaged work consists in a putative application of their work to the context of the system studied in this paper. Another question of physical interest concerns the classically/semiclassically singular region. While the quantum fluctuations of geometry are expected to be large in this region, it might still be possible to describe the expectation value geometry through an effective metric. In this regard, the setting for the arguments of <cit.> seems to be satisfied, so that it might be possible to argue that the semiclassical singularity is mild in the sense that the conformal factor is continuous at this singularity. It might then be possible to continue it past the singularity along the lines of <cit.> and compare the resulting geometry to existing proposals in the literature such as <cit.>. It would also be of interest to understand the semiclassical solution numerically, especially with regard to the behavior of the spacetime geometry and stress energy along the last set of rays from the intersection of the marginally trapped tube and the singularity to ^+. In the closely related semiclassical theory of the 2d CGHS model, there is a last ray from exactly such an intersection and extremely interesting universality in quantities such as the Bondi mass and Bondi flux (scaled down by N) at the last ray <cit.>. This universality holds if the black hole formed by collapse is sufficiently large in a precisely defined sense. An investigation of physics and possible universality along the last set of rays in the system studied in this work is even more interesting given that, in contrast to the CGHS case in which the Hawking temperature is independent of mass, the Hawking temperature here has the standard inverse mass dependence. As far as we can discern, the black holes studied by Parentani and Piran <cit.> are microscopic; it would be exciting if the turnover to less singular behavior in the flux near the last rays seen by them holds also for initially large black holes.
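To make the characteristic integration discussed in the subsection on the semiclassical singularity (and advocated again above) concrete, the following minimal Python sketch marches the evolution equations for R and ρ on a double-null grid. It is only a schematic of the marching step: the grid parameters, the value of Nħ and the flat data on the two initial null lines are illustrative assumptions, not taken from the numerical works of Lowe or Parentani and Piran; collapse data would instead be obtained by integrating the constraints along ^- for a chosen mass function, and neither the axis boundary nor any treatment of the breakdown at R^2 = Nħ/24π beyond a simple guard is implemented.

```python
import numpy as np

# Minimal sketch of the double-null marching scheme for
#   d+d- R   = -R (d+R d-R + e^{2 rho}/4) / (R^2 - N hbar/(24 pi))
#   d+d- rho = -(1/R) d+d- R
# Data are prescribed on the initial ingoing null line x^+ = x^+_i and on an
# outgoing null line standing in for scri^-.  All numbers below are illustrative.

Nhbar = 1.0e-3
crit  = Nhbar / (24.0 * np.pi)        # evolution breaks down when R^2 reaches this value

nv, nu = 200, 200                     # points in x^+ ("v") and x^- ("u")
v0, u0 = 0.0, -40.0
dv, du = 0.05, 0.05
v = v0 + dv * np.arange(nv)
u = u0 + du * np.arange(nu)

R   = np.zeros((nv, nu))
rho = np.zeros((nv, nu))

# Flat data R = (x^+ - x^-)/2, rho = 0 on both initial null lines; with these
# data the scheme should simply reproduce flat spacetime (a consistency check).
R[0, :] = 0.5 * (v[0] - u)
R[:, 0] = 0.5 * (v - u[0])

def mixed_derivs(Rc, e2rho, dRp, dRm):
    """d+d-R and d+d-rho from the evolution equations at a cell centre."""
    ddR = -Rc * (dRp * dRm + 0.25 * e2rho) / (Rc * Rc - crit)
    return ddR, -ddR / Rc

for i in range(nv - 1):               # march forward in x^+, one null line at a time
    for j in range(nu - 1):
        dRp = (R[i + 1, j] - R[i, j]) / dv        # d_+ R at the cell edge
        dRm = (R[i, j + 1] - R[i, j]) / du        # d_- R at the cell edge
        Rc  = 0.5 * (R[i + 1, j] + R[i, j + 1])   # simple predictor for the cell centre
        if Rc * Rc <= crit:                       # semiclassical singularity reached
            R[i + 1, j + 1] = np.nan
            continue
        ddR, ddrho = mixed_derivs(Rc, np.exp(2.0 * rho[i, j]), dRp, dRm)
        R[i + 1, j + 1]   = R[i + 1, j] + R[i, j + 1] - R[i, j] + dv * du * ddR
        rho[i + 1, j + 1] = rho[i + 1, j] + rho[i, j + 1] - rho[i, j] + dv * du * ddrho

print("max |R - flat|:", np.nanmax(np.abs(R - 0.5 * (v[:, None] - u[None, :]))))
```

With the flat data shown, the scheme reproduces flat spacetime to machine precision, which serves as a basic check of the marching step before genuine collapse data are fed in.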
§ ACKNOWLEDGMENTS I thank Ingemar Bengtsson for discussions regarding prompt collapse, Kartik Prabhu and Marc Geiller for discussions regarding asymptotics, and Fernando Barbero for discussions on invertibility of Fourier transforms and his kind help with figures. § APPENDICES § COMMENTS ON AXIS BOUNDARY CONDITIONS Note that the geometry in the vicinity of the axis is, by definition of the axis, non-singular. As shown in the next section <ref>, the requirement of non-singularity at the axis is quite powerful and constrains the behavior of ρ, R at the axis as follows: R=0 ∂_xR = e^ρ ∂_xρ=0 ∂_x^2R=0 In order to obtain these results we assume, in addition to the requirement of non-singular geometry near the axis, that x^± is a good coordinate system for the 2-geometry defined by the 2-metric. Specifically we assume that: (a) The coordinate vector fields (∂/∂ x^±)^a are well behaved everywhere and in particular near and at the axis. (b) The conformal factor e^2ρ is finite and non-vanishing. (c) In the timelike distant past (i.e. as t→ -∞), the metric is flat with ρ→ 0, R→ x. In section <ref> we interpret the requirement of non-singular axial geometry as the finiteness of the 4d scalar curvature, the 2d scalar curvature and G_ab v^a w^b (for all well behaved vector fields v, w). It is possible that additional conditions are implied by a similar requirement of finiteness of the Weyl tensor. We leave the relevant analysis for future work. Due to the timelike nature of the axis, we are not sure if the entire set of conditions (<ref>)-(<ref>) can be consistently imposed. More in detail, from the point of view of well-posedness, we have a system of 2nd order differential equations subject to `initial' conditions at ^-, which is a null boundary, as well as boundary conditions at the axis at x=0, which is a timelike boundary. Issues related to existence and uniqueness of solutions to such a `mixed' boundary value problem are beyond our expertise and we lack clarity on a number of points. Since the dynamical equations (<ref>)-(<ref>) are just the Einstein equations in spherical symmetry, the Bianchi identities imply that not all components of these equations are independent. It is not clear to us which of these equations we should consider as constraints and which as `evolution' equations. It is also not clear if the conditions (<ref>), (<ref>) over-constrain the system and need to be relaxed, or if certain of them should be dropped and augmented differently. Since we are concerned with 4d spacetime geometry, it is not clear if we should demand axis finiteness of the 2d scalar curvature as above, or if this (or different conditions) would result from a demand of finiteness of other 4d curvature invariants/components such as those constructed from the Weyl tensor. Instead of explicitly demanding axis finiteness of various physical quantities as in section <ref> below, one may, instead, adopt a purely differential equation based point of view in which one specifies data f, ρ, R which satisfy the `–' constraint (<ref>) (or (<ref>)) at ^-, as well as data for ρ, R at the axis x=0, such that in the region between the axis and ^- where the dynamical equations are well defined, a unique solution results. This is a weaker requirement than axial non-singularity as interpreted above. A preliminary analysis of the equations suggests that imposition of the conditions R=0, ∂_xρ=0 at x=0 for all t may suffice. We leave a detailed analysis and possible confirmation to future work.
Note that if indeed these are the correct conditions, the classical and semiclassical solutions we have constructed in section <ref> and proposed in section <ref> are unique given the initial datafsubject to (<ref>), (<ref>). §.§ Derivation of (<ref>) - (<ref>) from Assumptions (a)-(c) In what follows we refer to Assumptions (a)-(c) above as A(a)-A(c). We interpret the requirement that geometry be non-singular at the axis to mean that the 4d scalar curvatureR, the 2d scalar curvatureR, andG_abv^aw^bfor all well behaved vector fieldsv^a, w^bare finite in a small enough neighborhood of every point on the axis. A(a) then implies that the±components of the 4d Einstein tensorG_abat the axis are finite. In addition, note that the angular killing fields can be rescaled by factors ofR^-1so as to render them of unit norm. These unit norm vector fields, denoted here byΩ̂^acan be taken to correspond to well defined unit vector fields at the axis so thatG_Ω̂Ω̂:= G_abΩ̂^aΩ̂^bis also finite (as an example chooseΩ̂^a = R^-1(∂/∂θ)^aatθ=0withϕ, θbeing the standard polar coordinates on the unit sphere; in cartesian coordinates(X,Y,Z)withX^2+Y^2+Z^2=R^2, this corresponds to the unit vector in the `Z' direction and clearly admits a well defined limit at the axis). To summarise: We have thatR(t,x=0)=0so that all derivatives ofRwith respect totvanish at the axis i.e. (d/dt)^m R|_x=0,t= 0 ∀ m=1,2,3.. , and further thatR, R, G_Ω̂Ω̂, G_±±andG_+-are finite at the axis. Straightforward computation yields: R= 1/R^2(2+ 8e^-2ρ∂_-R∂_+R) - 1/R(16∂_+∂_-R) + R, G_Ω̂Ω̂ = -R/2 - 4e^-2ρ1/R(∂_+∂_-R) G_±±= -2/R( ∂_±^2R- 2∂_±ρ∂_±R) Finiteness ofG_Ω̂Ω̂, R at the axis together with A(b), equation (<ref>) and (<ref>) implies that at the axis: R^-1∂_+∂_-R= finite⇒∂_+∂_-R=-(∂_x)^2R=0 This together with axis finiteness ofR, Rimplies that (2+ 8e^-2ρ∂_-R∂_+R)=0 ⇒ (∂_xR)^2= e^2ρ A(b) together with (<ref>), the axis finiteness ofG_±±, (<ref>), (<ref>) imply the finiteness of∂_±^2R. Equations (<ref>) and (<ref>) then imply finiteness of∂_t∂_x Rat the axis. This implies that∂_xRis continuous along the axis so that from (<ref>) we have that at the axis: ∂_xR = e^ρ where we have used assumption A(c) that in the distant past the 4 metric is almost flat so thatR ∼ x, ρ∼ 0. Equation (<ref>) together with (<ref>), the axis finiteness ofG_±±, (<ref>) and (<ref>) imply that: ∂_t∂_x R = 2 ∂_+e^ρ = 2∂_-e^ρ which implies that (∂_+- ∂_-)ρ = 0 ⇒∂_x ρ =0 § COHERENT STATES FOR PROMPT COLLAPSE From (<ref>), (<ref>) and the fact thatf_+is of compact support inx^+, it follows that at^-: f (x^+, x^-=-∞)= f_(+)(x^+)= ∫_-∞^∞ dk f̃_(+)(k) e^-ikx^+/√(2π). Reality of f_ (+) (x^+)implies thatf̃_(+)(k)= f̃_(+)(-k). Since f_(+)(x^+)is continuous and of compact support, it is absolutely integrable. Hence its Fourier transformf̃_(+)(k)exists and is continuous <cit.>. Let us further restrict attention to f_(+)(x^+)which is of bounded variation (i.e. it is expressible as the difference of two bounded, monotonic increasing functions). For such functions the Fourier transform is invertible <cit.> and we can reconstruct f_(+)(x^+)from (<ref>) with f̃_(+)(k) = ∫_-∞^∞ dk f_(+)(x) e^+ikx^+/√(2π) Defining c(k)= √(2k)f̃_(+)(k) k≥ 0 , we define the coherent state|f|$⟩ patterned on the function f through: â(k)|f|=⟩ c(k) |f|⟩ft4 We note here that: lim_x^+→ (x^+_i)^+m(x^+)/x^+ -x^+_i = lim_x^+→ (x^+_i)^+m(x^+)- 0/x^+-x^+_i = lim_x^+→ (x^+_i)^+dm(x^+)/dx^+ = 1/2lim_x^+→ (x^+_i)^+ (∂_+ f(x^+, x^-))^2 where in last line we used (<ref>). 
Condition (<ref>) together with (<ref>) then implies that lim_x^+→ (x^+_i)^+∂_+ f_(+)(x^+) > ±1/2√(2) which indicates a discontinuity in this first derivative at x^+=x^+_i from zero to a non-zero value in accordance with the inequality. It is evident that there is a rich family of functions f_(+) of this type which are also continuous functions of compact support and bounded variation. § CALCULATION OF HAWKING FLUX FOR VAIDYA SPACETIME The Vaidya line element is given by (<ref>). As seen in section <ref>, the coordinate v is identical with the coordinate x^+. However for easy comparision with (<ref>), in this section we will use the notation v instead of x^+. At ^+, v,R→∞. The collapsing matter is compactly supported at ^-. Let its support be between v=v_i and v=v_f. For v>v_f the spacetime (<ref>) is Schwarzschild with v being the ingoing Eddington Finkelstein coordinate and m(v) equal to the ADM Mass M. It follows that with :̆= v-2R^*, with R^* the tortoise coordinate: R^*:= R+ R/2Mln (R/2M -1), the line element takes the outgoing Vaidya form (<ref>) with m_B:=M. Since we only have infalling classical matter in the Vaidya spacetime, from (<ref>) the stress energy expectation value is given by purely by the quantum `vacuum fluctuation' contribution: -4π R^2 ⟨T̂|_u̅u̅|=⟩Nħ/48π(1/u̅^')^2[ 3/2(u̅^''/u̅^')^2 - u̅^'''/u̅^'] It remains to compute derivatives of u̅ with respect to u. To obtain the Hawking flux, we are interested in computing these derivatives as →̆∞. Since $̆ is only a function ofuand not ofv, we can compute these derivatives at any fixed value ofv >v_f. Let this value bev=v_0. From (<ref>), (<ref>), we have that→̆∞asR→ 2Mi.e. as we approach the horizon along the null line at fixedv_0. Let the value ofuat the horizon beu=u_H. Sinceuis a good coordinate, we have that the conformal factore^2ρis finite forunear and atu=u_Hat fixedv=v_0. Using (<ref>), (<ref>) and (<ref>), we have that at fixedv=v_0: R,_u/R,_ = α (u, v_0)/1-2M/R(u,v_0) where we have sete^2ρ (u,v_0):= α (u, v_0). From the fact that(̆u)is independent ofv, we have that for allv>v_0that ^̆' = α (u, v_0)/1-2M/R(u,v_0) . As remarked above, we are interested in late times at^+and hence in the behavior of (<ref>) as u→ u_H ≡→̆∞≡ R→ 2M A long but straightforward calculation shows that in this limit: 4π R^2 ⟨T̂|_u̅u̅| ⟩= -Nħ/48π(1/u̅^')^2[ 3/2(u̅^''/u̅^')^2 - u̅^'''/u̅^'] = Nħ/48π1/32M^2 We can also compute, in this limit, a `corrected' stress energy tensor through the right hand side of (<ref>): 4π R^2 ⟨T̂|_u̅u̅, corrected|=⟩Nħ/96π(u̅^'')^2/( u̅^')^4 . It is straightforward to check that the evaluation of the right hand side of (<ref>) exactly agrees with that of (<ref>), that is to say that 4π R^2⟨T̂|_u̅u̅,corrected |=⟩ 4π R^2 ⟨T̂|_u̅u̅|$⟩ at late times near ^+. The `initial' Bondi mass also receives a correction through (<ref>) so that instead of m_B, initial being the ADM mass M we obtain: m_B,initial-corrected= M + Nħ/48π1/4M 999lowe D. Lowe, Phys.Rev.D 47 (1993) 2446-2453 pp R. Parentani and T. Piran, Phys.Rev.Lett. 73 (1994) 2805-2808 hawking S.W. Hawking, Commun.Math.Phys. 43 (1975) 199-220, Commun.Math.Phys. 46 (1976) 206 (erratum) atv A. Ashtekar, V. Taveras and M. Varadarajan, Phys.Rev.Lett. 100 (2008) 211302 ab A. Ashtekar and M. Bojowald, Class.Quant.Grav. 22 (2005) 3349-3362 bengtssonY. Kuroda, Prog. Theor. Phys 72 (1984) 63; I. Bengtsson, Unpublished Lecture Notes on Spherical Symmetry and Black Holes (2012), Available at http://3dhouse.se/ingemar/sfar.pdf fd P.C.W. Davies and S.A. 
Fulling, Proc.Roy.Soc.Lond.A 354 (1977) 59-77 hh J. Hartle and G. Horowitz, Phys.Rev.D 24 (1981) 257-274 tmarsh E. C. Titchmarsh,`Introduction to the theory of Fourier Intagrals' (Clarendon Press, 1948) , Theorem 3, pg 13 hayward S. Hayward, Phys.Rev.D 49 (1994) 6467-6474 aadh A. Ashtekar and B. Krishnan, Phys.Rev.D 68 (2003) 104030 susskind J. Russo, L. Susskind and L. Thorlacius, Phys.Lett.B 292 (1992) 13-18 franz A. Ashtekar, F. Pretorius and F. Ramazanoglu, Phys.Rev.D 83 (2011) 044040 glimmjaffe J. Jaffe and A. Glimm, `Quantum Physics: A Functional Integral Point of View', (Springer Verlag, 1987) charlieme C. Torre and M. Varadarajan, Phys.Rev.D 58 (1998) 064007 waldbook R. M. Wald, `Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics.' (University of Chicago Press,1994) ivanetal I. Agullo, B. Elizaga Navascues and P. Calizaya Cabrera, In preparation. ori D. Levanony and A. Ori, Phys.Rev.D 80 (2009) 084008; D. Levanony and A. Ori, Phys.Rev.D 81 (2010) 104036; A. Ori, Phys.Rev.D 82 (2010) 104009 carlowh M. Han, C. Rovelli and Farshid Soltani, Phys.Rev.D 107 (2023) 6, 064011 hay S. Hayward, Phys.Rev.Lett. 96 (2006) 031103 frol V. Frolov and A. Zelnikov, Phys.Rev.D 95 (2017) 12, 124028 aame A. Ashtekar and M. Varadarajan, e-Print: 2012.12094 [gr-qc]. lqgbooks Loop Quantum Gravity: The First 30 Years in 100 Years of General Relativity: Volume 4, Edited by Abhay Ashtekar and Jorge Pullin, World Scientific (2017) looprepnC. Rovelli and L. Smolin, Nucl.Phys. B331 80 (1990). abhayjurek A. Ashtekar and J. Lewandowski, Classical and Quantum Gravity 21 R53 (2004) qsd T. Thiemann, Classical and Quantum Gravity, 15 839 (1998). tt1T. Thiemann, Class. Quantum. Grav. 23 2211 (2006) jurekhanno J. Lewandowski and H. Sahlmann, Phys. Rev. D 91 044022 (2015) jurekmehdi M. Assanioussi, J. Lewandowski, and I. Mäkinen, Phys. Rev. D 92, 044042 (2015) pft A. Laddha and M. Varadarajan, Phys.Rev. D 83 (2011) 025019; T. Thiemann, e-Print: 1010.2426 [gr-qc]hk A. Laddha and M. Varadarajan, Class.Quant.Grav. 28 (2011) 195010 2+1u13 A. Henderson, A. Laddha and C. Tomlin, Phys. Rev. D88 044028 (2013); Phys.Rev. D 88 (2013) 4, 044029. p1 C. Tomlin and M. Varadarajan, Phys.Rev. D 87 044039 (2013); p2M. Varadarajan, Phys.Rev.D 87 044040 (2013). jureku13 J.Lewandowski and C. Y. Lin, Phys.Rev. D 95 064032 (2017). p3 M Varadarajan, Phys. Rev. D 97, 106007 (2018). leeg0 L. Smolin, Class.Quant.Grav. 9 883 (1992) ttu13 S. Bakhoda and T. Thiemann, e-Prints: 2010.16351 [gr-qc], 2011.00031 [gr-qc], 2010.16359 [gr-qc]. hkt S. Hojman, C. Teitlboim and K. Kuchař, Annals Phys. 96 88 (1976) leeprop L. Smolin, e-Print: gr-qc/9609034 pftprop M. Varadarajan Class.Quant.Grav. 34 015012 (2017) u13prop M Varadarajan, Phys.Rev. D 100, 066018 (2019) ttme T. Thiemann and M. Varadarajan, In preparation. jackiw R. Jackiw, Phys. Rev. Lett. 41 1635 (1978); Acta Phys. Austr. Suppl. 22 383 (1980). perez A. Perez, Phys.Rev. D73 (2006) 044007 jplm R. Gambini, J. Lewandowski, D. Marolf and J. Pullin, Int.J.Mod.Phys. D 7 97 (1998). lm J. Lewandowski and D. Marolf, Int.J.Mod.Phys D7 299 (1998). giorgio Immirzi. G, Class.Quant.Grav. 14 L177-L181 (1997). fer Barbero, J.F, Phys. Rev. D51 5507 (1995). laddha A. Laddha, e-Print: 1401.0931 [gr-qc]. lost J. Lewandowski, A. Okolow, H. Sahlmann and T. Thiemann, Commun.Math.Phys. 267 703 (2006). carlolee C. Rovelli and L. Smolin, Phys.Rev. D 52 5743 (1995) aajurekarea A. Ashtekar and J. Lewandowski, Class.Quant.Grav. 14 A55 (1997) eugenio E. Bianchi, Nucl.Phys. B 807 591 (2009) rf W. 
Fairbairn and C. Rovelli, J.Math.Phys. 45 2802 (2004). alvolume A. Ashtekar and J. Lewandowski, Adv.Theor.Math.Phys. 1 388 (1998). rsvol C. Rovelli and L. Smolin, Nucl.Phys. B442 593 (1995), Erratum: Nucl.Phys. B456 753 (1995). ttbook T. Thiemann, Modern Canonical Quantum General Relativity, Cambridge Monographs on Mathematical Physics, Cambridge University Press (2007). carlopvt C. Rovelli, Private Communication. qsd3 T. Thiemann, Class.Quant.Grav. 15 1207 (1998).
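As a brief addendum to the appendix on the Hawking flux for the Vaidya spacetime, the late-time value quoted there can be checked symbolically. The near-horizon relation ū(x^-) used below, ū = -(1/κ) ln(u_H - u) with κ = 1/(4M), is the standard assumed exponential form (it is not derived here from the Vaidya relations); the short SymPy script only verifies the algebra leading from the Schwarzian-type combination to F^quantum = (Nħ/96π) κ^2 = (Nħ/24π)(1/64M^2).

```python
import sympy as sp

# Illustrative symbolic check: with the assumed near-horizon relation
#   ubar(u) = -(1/kappa) * log(u_H - u),   kappa = 1/(4M),
# the combination  -(1/ubar')^2 [ (3/2)(ubar''/ubar')^2 - ubar'''/ubar' ]
# equals kappa^2/2, so that F^quantum = (N hbar/48 pi) * (kappa^2/2)
#                                     = (N hbar/24 pi) * (1/(64 M^2)).

u, uH, kappa, M = sp.symbols('u u_H kappa M', positive=True)

ubar = -sp.log(uH - u) / kappa
up, upp, uppp = [sp.diff(ubar, u, n) for n in (1, 2, 3)]

combo = -(1 / up)**2 * (sp.Rational(3, 2) * (upp / up)**2 - uppp / up)
print(sp.simplify(combo))                          # -> kappa**2/2
print(sp.simplify(combo.subs(kappa, 1 / (4 * M)))) # -> 1/(32*M**2)
```

The second printout, 1/(32M^2), multiplied by Nħ/48π reproduces the late-time flux Nħ/(24π)·(1/64M^2) stated in the appendix.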